NASA Technical Reports Server (NTRS)
1989-01-01
An overview of the five volume set of Information System Life-Cycle and Documentation Standards is provided with information on its use. The overview covers description, objectives, key definitions, structure and application of the standards, and document structure decisions. These standards were created to provide consistent NASA-wide structures for coordinating, controlling, and documenting the engineering of an information system (hardware, software, and operational procedures components) phase by phase.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... the structure of their address information but have not defined the elements that constitute an address. Knowledge of structure, content, and quality is required to successfully share information in a... discrete elements of address information and provides standardized terminology and definitions to alleviate...
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. National Center for Research in Vocational Education.
"Classification Structures for Career Information" was created to provide Career Information Delivery Systems (CIDS) staff with pertinent and useful occupational information arranged according to the Standard Occupational Classification (SOC) structure. Through this publication, the National Occupational Information Coordinating…
Using the Structured Product Labeling format to index versatile chemical data (ACS Spring meeting)
Structured Product Labeling (SPL) is a document markup standard approved by the Health Level Seven (HL7) standards organization and adopted by the FDA as a mechanism for exchanging product and facility information. Product information provided by companies in SPL format may be ac...
Yu, Kaijun
2010-07-01
This paper analyzes the design goals of a medical instrumentation standard information retrieval system. Based on the B/S (browser/server) structure, we established a medical instrumentation standard retrieval system with the ASP.NET C# programming language, the IIS Web server, and a SQL Server 2000 database, in the .NET environment. The paper also introduces the system structure, the retrieval system modules, the system development environment, and the detailed design of the system.
NASA GSFC Mechanical Engineering Latest Inputs for Verification Standards (GEVS) Updates
NASA Technical Reports Server (NTRS)
Kaufman, Daniel
2003-01-01
This viewgraph presentation provides information on quality control standards in mechanical engineering. The presentation addresses safety, structural loads, nonmetallic composite structural elements, bonded structural joints, externally induced shock, random vibration, acoustic tests, and mechanical function.
Recent advances in standards for collaborative Digital Anatomic Pathology
2011-01-01
Context Collaborative Digital Anatomic Pathology refers to the use of information technology that supports the creation and sharing or exchange of information, including data and images, during the complex workflow performed in an Anatomic Pathology department from specimen reception to report transmission and exploitation. Collaborative Digital Anatomic Pathology can only be fully achieved using medical informatics standards. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely to specify how medical informatics standards should be implemented to meet specific health care needs, and to make systems integration more efficient and less expensive. Objective To define the best use of medical informatics standards in order to share and exchange machine-readable structured reports and their evidence (including whole slide images) within hospitals and across healthcare facilities. Methods Specific working groups dedicated to Anatomic Pathology within multiple standards organizations defined standards-based data structures for Anatomic Pathology reports and images, as well as informatics transactions, in order to integrate Anatomic Pathology information into the electronic healthcare enterprise. Results DICOM supplements 122 and 145 provide flexible object information definitions dedicated respectively to specimen description and to whole slide image acquisition, storage and display. The content profile "Anatomic Pathology Structured Report" (APSR) provides standard templates for structured reports in which textual observations may be bound to digital images or regions of interest. Anatomic Pathology observations are encoded using an international controlled vocabulary defined by the IHE Anatomic Pathology domain that is currently being mapped to SNOMED CT concepts. Conclusion Recent advances in standards for Collaborative Digital Anatomic Pathology are a unique opportunity to share or exchange Anatomic Pathology structured reports that are interoperable at an international level. The use of the machine-readable format of APSR supports the development of decision support as well as secondary use of Anatomic Pathology information for epidemiology or clinical research. PMID:21489187
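To make the idea of binding a coded observation to an image region concrete, here is a minimal illustrative sketch in Python; the element names and code values are invented stand-ins, not the actual APSR template schema:

```python
# Illustrative only: element names and codes below are invented stand-ins,
# not the real APSR/CDA schema, showing how a coded observation can be
# bound to a region of interest on a whole slide image.
import xml.etree.ElementTree as ET

report = ET.Element("StructuredReport", template="APSR-like-example")
obs = ET.SubElement(report, "Observation")
# Coded observation drawn from a controlled vocabulary (dummy code shown)
ET.SubElement(obs, "Code", codingScheme="SNOMED-CT-like", code="C0001",
              meaning="Example tumor finding")
# Binding to the image evidence: a whole slide image and a region of interest
evidence = ET.SubElement(obs, "ImageEvidence", imageUID="1.2.840.0.0.1")
ET.SubElement(evidence, "RegionOfInterest", shape="POLYGON",
              points="100,120 180,120 180,200 100,200")

print(ET.tostring(report, encoding="unicode"))
```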
Information System Life-Cycle And Documentation Standards (SMAP DIDS)
NASA Technical Reports Server (NTRS)
1990-01-01
Although not a computer program, SMAP DIDS was written to provide a systematic, NASA-wide structure for documenting information system development projects. Each DID (data item description) outlines a document required for top-quality software development. When combined with the management, assurance, and life-cycle standards, the Standards protect all parties who participate in the design and operation of a new information system.
Telecommunications and information technology standard-setting in Japan: A preliminary survey
NASA Astrophysics Data System (ADS)
Besen, Stanley M.
This note describes and analyzes telecommunications and information technology standard-setting in Japan, compares the organization and structure of a private Japanese voluntary standards organization with those of similar European and American organizations, and examines the prospects for cooperation among these groups.
The geometrical structure of quantum theory as a natural generalization of information geometry
NASA Astrophysics Data System (ADS)
Reginatto, Marcel
2015-01-01
Quantum mechanics has a rich geometrical structure which allows for a geometrical formulation of the theory. This formalism was introduced by Kibble and later developed by a number of other authors. The usual approach has been to start from the standard description of quantum mechanics and identify the relevant geometrical features that can be used for the reformulation of the theory. Here this procedure is inverted: the geometrical structure of quantum theory is derived from information geometry, a geometrical structure that may be considered more fundamental, and the Hilbert space of the standard formulation of quantum mechanics is constructed using geometrical quantities. This suggests that quantum theory has its roots in information geometry.
NASA Astrophysics Data System (ADS)
Nikolaev, V. N.; Titov, D. V.; Syryamkin, V. I.
2018-05-01
A comparative assessment is made of the channel capacity of different variants of the structural organization of automated information processing systems. A model is developed for assessing information processing time as a function of the type of standard elements and their structural organization.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Seven papers and one abstract of a paper are presented from the 1995 CAUSE conference track on policies and standards issues faced by managers of information technology at colleges and universities. The papers include: (1) "University/College Information System Structures and Policies: Do They Make a Difference? An Initial Assessment"…
Documenting the information content of images.
Bidgood, W. D.
1997-01-01
A standards-based message and terminology architecture has been specified to enable large-scale open and non-proprietary interchange of imaging-procedure descriptions and image-interpretation reports providing semantically-rich linkage of linguistic and non-linguistic information. The DICOM Structured Reporting Supplement, now available for trial use, embodies this interdependent message/terminology architecture. A DICOM structured report object is a self-describing information structure that can be tailored to support diverse clinical observation reporting applications by utilization of templates and context-dependent terminology from an external message/terminology mapping resource such as the SNOMED DICOM Microglossary (SDM), HL7 Vocabulary, or Terminology Resource for Message Standards (TeRMS). PMID:9357661
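As a rough illustration of such a self-describing SR content tree, the sketch below builds a container content item with one nested text item, assuming the open-source pydicom library is available. The LOINC (LN) and DCM concept codes shown appear in published DICOM examples, but the tree is a toy sketch, not a template-conformant report:

```python
# Minimal sketch of a DICOM SR content tree using pydicom (assumed available).
# ValueType, ConceptNameCodeSequence, etc. are standard DICOM SR attribute
# keywords; the overall structure here is illustrative, not a full template.
from pydicom.dataset import Dataset

def code_item(value, scheme, meaning):
    """Build a single item of a Code Sequence."""
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

root = Dataset()
root.ValueType = "CONTAINER"
root.ContinuityOfContent = "SEPARATE"
root.ConceptNameCodeSequence = [code_item("11528-7", "LN", "Radiology Report")]

finding = Dataset()
finding.ValueType = "TEXT"
finding.RelationshipType = "CONTAINS"
finding.ConceptNameCodeSequence = [code_item("121071", "DCM", "Finding")]
finding.TextValue = "Example finding bound to the parent container."

root.ContentSequence = [finding]
print(root)
```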
2016-08-01
FORCE STRUCTURE: Better Information Needed to Support Air Force A-10 and Other Future Divestment Decisions (GAO-16-816, report to congressional committees, August 2016). ...estimate should have a standardized structure that breaks costs into discrete elements with sufficient detail to ensure that cost elements are...
The geometrical structure of quantum theory as a natural generalization of information geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reginatto, Marcel
2015-01-13
Quantum mechanics has a rich geometrical structure which allows for a geometrical formulation of the theory. This formalism was introduced by Kibble and later developed by a number of other authors. The usual approach has been to start from the standard description of quantum mechanics and identify the relevant geometrical features that can be used for the reformulation of the theory. Here this procedure is inverted: the geometrical structure of quantum theory is derived from information geometry, a geometrical structure that may be considered more fundamental, and the Hilbert space of the standard formulation of quantum mechanics is constructed using geometrical quantities. This suggests that quantum theory has its roots in information geometry.
Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.
Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin
2017-05-18
Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents, since keyword searches applied to complete unstructured documents result in many false positive retrieval results. We concentrate on the processing of pathology reports as an example of unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables improved access and retrieval of relevant data. The data are expected to be stored in a standardized, structured way to make them accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR saves time in the modelling process, since many archetypes can be reused. The resulting standardized, structured PEHRs allow access to relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focusing the retrieval on particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
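A minimal sketch of the keyword-based section boundary detection step might look like the following; the header keywords and the sample report are invented for illustration, and a real pipeline would use a curated, site-specific keyword list:

```python
# Toy sketch of keyword-based section boundary detection for a
# semi-structured pathology report; headers below are illustrative.
import re

SECTION_HEADERS = ["CLINICAL HISTORY", "GROSS DESCRIPTION",
                   "MICROSCOPIC DESCRIPTION", "DIAGNOSIS", "COMMENT"]

def split_sections(report_text):
    """Split a semi-structured report into {header: body} by known headers."""
    pattern = "(" + "|".join(re.escape(h) for h in SECTION_HEADERS) + "):?"
    parts = re.split(pattern, report_text)
    sections, i = {}, 1
    while i < len(parts) - 1:
        sections[parts[i]] = parts[i + 1].strip()
        i += 2
    return sections

report = """CLINICAL HISTORY: 62-year-old patient, colon biopsy.
MICROSCOPIC DESCRIPTION: Tubular adenoma with low-grade dysplasia.
DIAGNOSIS: Tubular adenoma."""
for header, body in split_sections(report).items():
    print(header, "->", body)
```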
NASA Astrophysics Data System (ADS)
Pittaway, Jeff; Archer, Norm
Medical interventions are often delayed or erroneous when information needed for diagnosing or prescribing is missing or unavailable. In support of increased information flows, the healthcare industry has invested substantially in standards intended to specify, routinize, and make uniform the type and format of medical information in clinical healthcare information systems such as Electronic Medical Record systems (EMRs). However, fewer than one in four Canadian physicians have adopted EMRs. Deeper analysis illustrates that physicians may perceive value in standardized EMRs when they need to exchange information in highly structured situations among like participants and like environments. However, standards present restrictive barriers to practitioners when they face equivocal situations, unforeseen contingencies, or exchange information across different environments. These barriers constitute a compelling explanation for at least part of the observed low EMR adoption rates. Our recommendations to improve the perceived value of standardized clinical information systems espouse re-conceptualizing the role of standards to embrace greater flexibility in some areas.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in the urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and different coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and ease maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and the comprehensive data dictionary manages system operation and security. (3) An extension of the system data management functions based on the data dictionary. The constrained data-entry function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
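As a toy illustration of the dictionary-constrained input described in point (3), the sketch below validates a record against invented attribute and term-and-code dictionaries; the codes and field names are not taken from the actual Fuzhou system:

```python
# Toy sketch (not the Fuzhou system itself) of a multi-level data dictionary:
# the term-and-code dictionary constrains field values so that only standard
# codes reach the database. All codes and terms are invented.
TERM_CODE_DICT = {                      # term-and-code dictionary level
    "lithology": {"10": "clay", "20": "silt", "30": "sand"},
}
ATTRIBUTE_DICT = {                      # attribute dictionary level
    "borehole_layer": ["layer_no", "depth_top_m", "depth_bottom_m", "lithology"],
}

def standardize(table, record):
    """Validate a record against the attribute and term/code dictionaries."""
    clean = {}
    for field in ATTRIBUTE_DICT[table]:
        value = record[field]
        if field in TERM_CODE_DICT:     # constrained input: code -> standard term
            if value not in TERM_CODE_DICT[field]:
                raise ValueError(f"non-standard code {value!r} for {field}")
            value = TERM_CODE_DICT[field][value]
        clean[field] = value
    return clean

print(standardize("borehole_layer",
                  {"layer_no": 1, "depth_top_m": 0.0,
                   "depth_bottom_m": 3.2, "lithology": "20"}))
```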
The Development of Clinical Document Standards for Semantic Interoperability in China
Yang, Peng; Pan, Feng; Wan, Yi; Tu, Haibo; Tang, Xuejun; Hu, Jianping
2011-01-01
Objectives This study is aimed at developing a set of data groups (DGs) to be employed as reusable building blocks for the construction of the eight most common clinical documents used in China's general hospitals in order to achieve their structural and semantic standardization. Methods The Diagnostics knowledge framework, the related approaches taken from the Health Level Seven (HL7), the Integrating the Healthcare Enterprise (IHE), and the Healthcare Information Technology Standards Panel (HITSP) and 1,487 original clinical records were considered together to form the DG architecture and data sets. The internal structure, content, and semantics of each DG were then defined by mapping each DG data set to a corresponding Clinical Document Architecture data element and matching each DG data set to the metadata in the Chinese National Health Data Dictionary. By using the DGs as reusable building blocks, standardized structures and semantics regarding the clinical documents for semantic interoperability were able to be constructed. Results Altogether, 5 header DGs, 48 section DGs, and 17 entry DGs were developed. Several issues regarding the DGs, including their internal structure, identifiers, data set names, definitions, length and format, data types, and value sets, were further defined. Standardized structures and semantics regarding the eight clinical documents were structured by the DGs. Conclusions This approach of constructing clinical document standards using DGs is a feasible standard-driven solution useful in preparing documents possessing semantic interoperability among the disparate information systems in China. These standards need to be validated and refined through further study. PMID:22259722
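A rough sketch of the data-group building-block idea follows; the names and bindings are invented for illustration, whereas the actual standard defines 5 header, 48 section, and 17 entry DGs with controlled metadata:

```python
# Hedged sketch of a "data group as reusable building block": a DG has an
# identifier, a name, and bindings of each data set item to a CDA element
# and a data dictionary metadata id. All identifiers here are invented.
from dataclasses import dataclass, field

@dataclass
class DataGroup:
    dg_id: str
    name: str
    items: dict = field(default_factory=dict)  # item -> (CDA element, metadata id)

vital_signs = DataGroup(
    dg_id="DG-ENTRY-01",
    name="Vital signs",
    items={
        "systolic_bp": ("observation/value[@unit='mmHg']", "HDD-00123"),
        "heart_rate": ("observation/value[@unit='/min']", "HDD-00456"),
    },
)

# A clinical document is then assembled from header, section and entry DGs:
discharge_summary = [DataGroup("DG-HEADER-01", "Patient identification"),
                     DataGroup("DG-SECTION-07", "Hospital course"),
                     vital_signs]
print([dg.name for dg in discharge_summary])
```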
A Journey in Standard Development: The Core Manufacturing Simulation Data (CMSD) Information Model.
Lee, Yung-Tsun Tina
2015-01-01
This report documents a journey "from research to an approved standard" of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together.
A Pilot Standard National Course Classification System for Secondary Education.
ERIC Educational Resources Information Center
Bradby, Denise; And Others
This publication is the culmination of a major effort to help establish a common terminology, descriptions, and coding structure for course information at the secondary level of education. There had previously been no standard system for collecting, maintaining, reporting, and exchanging comparable information about student course taking patterns.…
Geo3DML: A standard-based exchange format for 3D geological models
NASA Astrophysics Data System (ADS)
Wang, Zhangang; Qu, Honggang; Wu, Zixing; Wang, Xianghong
2018-01-01
A geological model (geomodel) in three-dimensional (3D) space is a digital representation of the Earth's subsurface, recognized by geologists and stored in resultant geological data (geodata). The increasing demand for data management and interoperable applications of geomodels can be addressed by developing standard-based exchange formats for the representation of not only a single geological object, but also holistic geomodels. However, current standards such as GeoSciML cannot incorporate all the geomodel-related information. This paper presents Geo3DML for the exchange of 3D geomodels based on the existing Open Geospatial Consortium (OGC) standards. Geo3DML is based on a unified and formal representation of structural models, attribute models and hierarchical structures of interpreted resultant geodata in different dimensional views, including drills, cross-sections/geomaps and 3D models, which is compatible with the conceptual model of GeoSciML. Geo3DML aims to encode all geomodel-related information integrally in one framework, including the semantic and geometric information of geoobjects and their relationships, as well as visual information. At present, Geo3DML and some supporting tools have been released as a data-exchange standard by the China Geological Survey (CGS).
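To illustrate the flavor of such an exchange format, the sketch below emits a tiny Geo3DML-like document; the element names are simplified stand-ins, since the real schema is namespace-qualified and built on OGC GML:

```python
# Sketch of emitting a tiny Geo3DML-like document with ElementTree. Element
# names are simplified stand-ins for the real, GML-based, namespaced schema.
import xml.etree.ElementTree as ET

model = ET.Element("GeoModel", name="demo-model")
feature = ET.SubElement(model, "GeologicFeature", id="stratum-1")
ET.SubElement(feature, "Attribute", name="lithology").text = "sandstone"
geom = ET.SubElement(feature, "Geometry", srsName="EPSG:4979")
# A single triangle of a TIN surface: "x y z" triplets per vertex
ET.SubElement(geom, "Triangle").text = "0 0 -10  100 0 -12  0 100 -11"
# Visual information is carried alongside semantics and geometry
style = ET.SubElement(model, "VisualStyle", featureRef="stratum-1")
ET.SubElement(style, "FillColor").text = "#C8B560"

print(ET.tostring(model, encoding="unicode"))
```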
Technologies and standards in the information systems of the soil-geographic database of Russia
NASA Astrophysics Data System (ADS)
Golozubov, O. M.; Rozhkov, V. A.; Alyabina, I. O.; Ivanov, A. V.; Kolesnikova, V. M.; Shoba, S. A.
2015-01-01
The achievements, problems, and challenges of the modern stage of the development of the Soil-Geographic Database of Russia (SGDBR) and the history of this project are outlined. The structure of the information system of the SGDBR as an internet-based resource to collect data on soil profiles and to integrate the geographic and attribute databases on the same platform is described. The pilot project in Rostov oblast illustrates the inclusion of regional information in the SGDBR and its application for solving practical problems. For the first time in Russia, the GeoRSS standard based on the structured hypertext representation of the geographic and attribute information has been applied in the state system for the agromonitoring of agricultural lands in Rostov oblast and information exchange through the internet.
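For illustration, the sketch below reads one GeoRSS-tagged feed item of the kind such an exchange could carry; the feed content is invented, but the georss namespace and the <georss:point> element belong to the actual GeoRSS-Simple standard:

```python
# Minimal sketch of parsing a GeoRSS-Simple point from a feed item; the item
# content is invented, the georss namespace URI is the real one.
import xml.etree.ElementTree as ET

FEED_ITEM = """<item xmlns:georss="http://www.georss.org/georss">
  <title>Soil profile 42, Rostov oblast</title>
  <georss:point>47.23 39.72</georss:point>
</item>"""

ns = {"georss": "http://www.georss.org/georss"}
item = ET.fromstring(FEED_ITEM)
# GeoRSS-Simple encodes a point as "lat lon" separated by whitespace
lat, lon = map(float, item.find("georss:point", ns).text.split())
print(item.findtext("title"), "->", (lat, lon))
```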
Videotex: Chimera or Dream Machine.
ERIC Educational Resources Information Center
Ball, A. J. S.
1981-01-01
Describes three current two-way public information systems representing the major technologies which are being proposed as international standards: Prestel (United Kingdom), Teletel (France), and Telidon (Canada). Information retrieval structures are compared, and difficulties for both the information provider and the information user are…
A review of medical terminology standards and structured reporting.
Awaysheh, Abdullah; Wilcke, Jeffrey; Elvinger, François; Rees, Loren; Fan, Weiguo; Zimmerman, Kurt
2018-01-01
Much effort has been invested in standardizing medical terminology for representation of medical knowledge, storage in electronic medical records, retrieval, reuse for evidence-based decision making, and for efficient messaging between users. We only focus on those efforts related to the representation of clinical medical knowledge required for capturing diagnoses and findings from a wide range of general to specialty clinical perspectives (e.g., internists to pathologists). Standardized medical terminology and the usage of structured reporting have been shown to improve the usage of medical information in secondary activities, such as research, public health, and case studies. The impact of standardization and structured reporting is not limited to secondary activities; standardization has been shown to have a direct impact on patient healthcare.
Educational Standards for Chiropractic Colleges.
ERIC Educational Resources Information Center
Council on Chiropractic Education, Des Moines, IA.
Contents include: background information on the historical development, purpose, structure, and function of chiropractic accreditation; accreditation policy (eligibility, procedures, classifications, commission actions, and reports); standards for chiropractic colleges (organization, administration, scholastic regulations, curriculum, faculty,…
DSSTox and Chemical Information Technologies in Support of PredictiveToxicology
The EPA NCCT Distributed Structure-Searchable Toxicity (DSSTox) Database project initially focused on the curation and publication of high-quality, standardized, chemical structure-annotated toxicity databases for use in structure-activity relationship (SAR) modeling. In recent y...
ERIC Educational Resources Information Center
Rasmussen, Ashley B.
2017-01-01
This study utilized a semi-structured interview approach to identify how college methods professors in Nebraska are engaging pre-service K-12 teachers with the Next Generation Science Standards and to determine if this information is being carried over to Nebraska K-12 classrooms. The study attempted to address these items by answering the…
NASA Operational Environment Team (NOET) - NASA's key to environmental technology
NASA Technical Reports Server (NTRS)
Cook, Beth
1993-01-01
NOET is a NASA-wide team which supports the research and development community by sharing information both in person and via a computerized network, assisting in specification and standard revisions, developing cleaner propulsion systems, and exploring environmentally compliant alternatives to current processes. NOET's structure, dissemination of materials, electronic information, EPA compliance, specifications and standards, and environmental research and development are discussed.
A Multistate Review of Professional Teaching Standards. Summary. Issues & Answers. REL 2009-No. 075
ERIC Educational Resources Information Center
White, Melissa Eiler; Makkonen, Reino; Stewart, Kari Becker
2009-01-01
This document presents a summary of a larger report that reviews teaching standards in six states (California, Florida, Illinois, North Carolina, Ohio, and Texas) and focuses on the structure, target audience, and selected content of the standards to inform California's revision of its teaching standards. The report was developed at the request of key…
A Multistate Review of Professional Teaching Standards. Issues & Answers. REL 2009-No. 075
ERIC Educational Resources Information Center
White, Melissa Eiler; Makkonen, Reino; Stewart, Kari Becker
2009-01-01
This review of teaching standards in six states (California, Florida, Illinois, North Carolina, Ohio, and Texas) focuses on the structure, target audience, and selected content of the standards to inform California's revision of its teaching standards. The report was developed at the request of key education agencies in California. The review…
Students using visual thinking to learn science in a Web-based environment
NASA Astrophysics Data System (ADS)
Plough, Jean Margaret
United States students' science test scores are low, especially in problem solving, and traditional science instruction could be improved. Consequently, visual thinking, constructing science structures, and problem solving in a web-based environment may be valuable strategies for improving science learning. This ethnographic study examined the science learning of fifteen fourth grade students in an after school computer club involving diverse students at an inner city school. The investigation was done from the perspective of the students, and it described the processes of visual thinking, web page construction, and problem solving in a web-based environment. The study utilized informal group interviews, field notes, Visual Learning Logs, and student web pages, and incorporated a Standards-Based Rubric which evaluated students' performance on eight science and technology standards. The Visual Learning Logs were drawings done on the computer to represent science concepts related to the Food Chain. Students used the internet to search for information on a plant or animal of their choice. Next, students used this internet information, with the information from their Visual Learning Logs, to make web pages on their plant or animal. Later, students linked their web pages to form Science Structures. Finally, students linked their Science Structures with the structures of other students, and used these linked structures as models for solving problems. Further, during informal group interviews, students answered questions about visual thinking, problem solving, and science concepts. The results of this study showed clearly that (1) making visual representations helped students understand science knowledge, (2) making links between web pages helped students construct Science Knowledge Structures, and (3) students themselves said that visual thinking helped them learn science. In addition, this study found that when using Visual Learning Logs, the main overall ideas of the science concepts were usually represented accurately. Further, looking for information on the internet may cause new problems in learning. Likewise, being absent, starting late, and/or dropping out all may negatively influence students' proficiency on the standards. Finally, the way Science Structures are constructed and linked may provide insights into the way individual students think and process information.
Standards Based Reform. Abbott Implementation Resource Guide
ERIC Educational Resources Information Center
Passantino, Claire; Kenyon, Susan
2004-01-01
The goal of this guide is to provide information, support and practical tools that may help educators design, implement, and evaluate their school's standards-based education program. In order to work, a comprehensive, standards-based educational program must, by definition, be the organizing structure upon which the school program operates.…
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804
CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON ...
Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database. Standardizing chemical structure annotation of public toxicity databases
Integration of DICOM and openEHR standards
NASA Astrophysics Data System (ADS)
Wang, Ying; Yao, Zhihong; Liu, Lei
2011-03-01
The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, on the basis of the XML-based DICOM format we developed a method of creating a DICOM imaging archetype in openEHR to enable the integration of DICOM and openEHR. Each DICOM file contains abundant imaging information. However, because reading a DICOM file involves looking up the DICOM Data Dictionary, the readability of a DICOM file is limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR is the information sharing problem, especially in imaging information sharing; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the form of archetype defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without losing imaging information.
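A hedged sketch of the mapping idea follows: selected DICOM header attributes are copied into a tree whose paths mimic an openEHR archetype structure. The archetype id and paths are invented; a real mapping would target the nodes of a validated archetype. The DICOM keywords themselves (Modality, StudyDate, BodyPartExamined) are standard:

```python
# Sketch: map flat DICOM header attributes into an archetype-style tree.
# Paths and archetype id are invented for illustration only.
DICOM_TO_ARCHETYPE = {
    "Modality":         "imaging_examination/protocol/modality",
    "StudyDate":        "imaging_examination/context/start_time",
    "BodyPartExamined": "imaging_examination/data/anatomical_site",
}

def map_to_archetype(dicom_header):
    """Turn a flat DICOM header dict into a nested archetype-style tree."""
    tree = {"archetype_id": "openEHR-EHR-OBSERVATION.imaging_example.v0"}
    for keyword, path in DICOM_TO_ARCHETYPE.items():
        node = tree
        *parents, leaf = path.split("/")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = dicom_header.get(keyword)
    return tree

print(map_to_archetype({"Modality": "CT", "StudyDate": "20110301",
                        "BodyPartExamined": "CHEST"}))
```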
42 CFR 438.218 - Enrollee information.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 42, Public Health, Vol. 4 (revised as of 2014-10-01): Medical Assistance Programs (Continued), Managed Care, Quality Assessment and Performance Improvement: Structure and Operation Standards, § 438.218 Enrollee information. The requirements that States must meet under...
42 CFR 438.218 - Enrollee information.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 42, Public Health, Vol. 4 (revised as of 2012-10-01): Medical Assistance Programs (Continued), Managed Care, Quality Assessment and Performance Improvement: Structure and Operation Standards, § 438.218 Enrollee information. The requirements that States must meet under...
42 CFR 438.218 - Enrollee information.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 42, Public Health, Vol. 4 (revised as of 2011-10-01): Medical Assistance Programs (Continued), Managed Care, Quality Assessment and Performance Improvement: Structure and Operation Standards, § 438.218 Enrollee information. The requirements that States must meet under...
42 CFR 438.218 - Enrollee information.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 42, Public Health, Vol. 4 (revised as of 2013-10-01): Medical Assistance Programs (Continued), Managed Care, Quality Assessment and Performance Improvement: Structure and Operation Standards, § 438.218 Enrollee information. The requirements that States must meet under...
The Swedish strategy and method for development of a national healthcare information architecture.
Rosenälv, Jessica; Lundell, Karl-Henrik
2012-01-01
"We need a precise framework of regulations in order to maintain appropriate and structured health care documentation that ensures that the information maintains a sufficient level of quality to be used in treatment, in research and by the actual patient. The users shall be aided by clearly and uniformly defined terms and concepts, and there should be an information structure that clarifies what to document and how to make the information more useful. Most of all, we need to standardize the information, not just the technical systems." (eHälsa - nytta och näring, Riksdag report 2011/12:RFR5, p. 37). In 2010, the Swedish Government adopted the National e-Health - the national strategy for accessible and secure information in healthcare. The strategy is a revision and extension of the previous strategy from 2006, which was used as input for the most recent efforts to develop a national information structure utilizing business-oriented generic models. A national decision on healthcare informatics standards was made by the Swedish County Councils, which decided to follow and use EN/ISO 13606 as a standard for the development of a universally applicable information structure, including archetypes and templates. The overall aim of the Swedish strategy for development of National Healthcare Information Architecture is to achieve high level semantic interoperability for clinical content and clinical contexts. High level semantic interoperability requires consistently structured clinical data and other types of data with coherent traceability to be mapped to reference clinical models. Archetypes that are formal definitions of the clinical and demographic concepts and some administrative data were developed. Each archetype describes the information structure and content of overarching core clinical concepts. Information that is defined in archetypes should be used for different purposes. Generic clinical process model was made concrete and analyzed. For each decision-making step in the process where information is processed, the amount and type of information and its structure were defined in terms of reference templates. Reference templates manage clinical, administrative and demographic types of information in a specific clinical context. Based on a survey of clinical processes at the reference level, the identification of specific clinical processes such as diabetes and congestive heart failure in adults were made. Process-specific templates were defined by using reference templates and populated with information that was relevant to each health problem in a specific clinical context. Throughout this process, medical data for knowledge management were collected for each health problem. Parallel with the efforts to define archetypes and templates, terminology binding work is on-going. Different strategies are used depending on the terminology binding level.
Standardizing the information architecture for spacecraft operations
NASA Technical Reports Server (NTRS)
Easton, C. R.
1994-01-01
This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.
Comparing a Japanese and a German hospital information system.
Jahn, F; Issler, L; Winter, A; Takabayashi, K
2009-01-01
To examine the architectural differences and similarities of a Japanese and a German hospital information system (HIS) in a case study. This cross-cultural comparison, which focuses on structural quality characteristics, offers the chance to gain new insights into different HIS architectures that possibly cannot be obtained by within-country comparisons. A reference model for the domain layer of hospital information systems, containing the typical enterprise functions of a hospital, provides the basis of comparison for the two hospital information systems. 3LGM² models, which describe the two HISs and are based on that reference model, are used to assess several structural quality criteria. Four of these criteria are introduced in detail. The two examined HISs differ in terms of the four structural quality criteria examined. Whereas the centralized architecture of the hospital information system at Chiba University Hospital causes only few functional redundancies and leads to a low implementation of communication standards, the hospital information system at the University Hospital of Leipzig, having a decentralized architecture, exhibits more functional redundancies and a higher use of communication standards. Using a model-based comparison, it was possible to detect remarkable differences between the observed hospital information systems of completely different cultural areas. However, the usability of 3LGM² models for comparisons has to be improved in order to apply key figures and to assess or benchmark the structural quality of health information system architectures more thoroughly.
Automated workflows for data curation and standardization of chemical structures for QSAR modeling
Large collections of chemical structures and associated experimental data are publicly available, and can be used to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experime...
[The standardization of medical care and the training of medical personnel].
Korbut, V B; Tyts, V V; Boĭshenko, V A
1997-09-01
Medical specialist training at all levels (medical orderlies, doctors' assistants, general practitioners, doctors) should be based on medical care standards. Preliminary studies in the field of military medicine standards have demonstrated that the medical service of the Armed Forces of Russia needs medical resource standards, structure and organization standards, and technology standards. Military medical service resource standards should reflect the requirements for the qualifications of all medical specialists, equipment and materiel for medical set-ups, field medical systems, drugs, etc. Standards for structures and organization should include requirements for command and control systems in military formations' and task forces' medical services and their information support; health-care and evacuation functions; sanitary control and anti-epidemic measures; and personnel health protection. The development of technology standards could improve and regulate health care procedures in the process of evacuation. Standards development will help to solve the problem of a data base for the military medicine education system and medical research.
Applied and implied semantics in crystallographic publishing
2012-01-01
Background Crystallography is a data-rich, software-intensive scientific discipline with a community that has undertaken direct responsibility for publishing its own scientific journals. That community has worked actively to develop information exchange standards allowing readers of structure reports to access directly, and interact with, the scientific content of the articles. Results Structure reports submitted to some journals of the International Union of Crystallography (IUCr) can be automatically validated and published through an efficient and cost-effective workflow. Readers can view and interact with the structures in three-dimensional visualization applications, and can access the experimental data should they wish to perform their own independent structure solution and refinement. The journals also layer on top of this facility a number of automated annotations and interpretations to add further scientific value. Conclusions The benefits of semantically rich information exchange standards have revolutionised the scholarly publishing process for crystallography, and establish a model relevant to many other physical science disciplines. PMID:22932420
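The exchange standard underlying this workflow is CIF, whose simplest content is a list of "_tag value" data items. A toy sketch of extracting a few such items follows; real CIF files also contain loops, multi-line values, and save frames that this parser deliberately ignores:

```python
# Toy parser for simple one-line CIF data items ("_tag value"); the tags
# shown (_cell_length_a, ...) are real CIF core dictionary names, but the
# values are invented and the parser skips loops and multi-line values.
CIF_FRAGMENT = """data_example
_cell_length_a    10.234
_cell_length_b    11.871
_cell_angle_beta  101.25
"""

def parse_simple_items(cif_text):
    items = {}
    for line in cif_text.splitlines():
        line = line.strip()
        if line.startswith("_"):
            tag, _, value = line.partition(" ")
            items[tag] = value.strip()
    return items

items = parse_simple_items(CIF_FRAGMENT)
print(float(items["_cell_length_a"]))  # 10.234
```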
Visualization of conserved structures by fusing highly variable datasets.
Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred
2002-01-01
Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality (VR) environment. The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT. It was examined for where the transformed structure atlas was incorrectly overlaid (false positive) and where it was incorrectly not overlaid (false negative). According to this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100%. The CT dataset augmented with transformed dataset was viewed arbitrarily in user-centered perspective stereo taking advantage of features such as scaling, windowing and volumetric region of interest selection. This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful it would permit identification, visualization or deletion of structures in radiological data by semi-automatically applying canonical structure information to the radiological data (not just processing and visualization of the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.
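The fusion technique named above rests on maximizing mutual information between two datasets. Below is a compact numpy sketch of the MI estimate from a joint histogram of two aligned images, using synthetic data; it illustrates the objective function, not the MIAMI Fuse implementation itself:

```python
# Mutual information from a joint intensity histogram:
# I(A;B) = sum over bins of p(a,b) * log(p(a,b) / (p(a) * p(b))).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of A, shape (bins, 1)
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of B, shape (1, bins)
    nz = p_ab > 0                             # avoid log(0); zero cells add 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
base = rng.random((64, 64))
print(mutual_information(base, base))                   # high: identical images
print(mutual_information(base, rng.random((64, 64))))   # near zero: independent
```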
24 CFR 3285.903 - Permits, alterations, and on-site structures.
Code of Federal Regulations, 2010 CFR
2010-04-01
Housing and Urban Development, Model Manufactured Home Installation Standards, Optional Information for... from property lines and public roads are met. (b) Alterations. Prior to making any alteration to a home... (c) Installation of on-site structures. Each accessory building and structure is designed to support all of its own...
Library Statistical Data Base Formats and Definitions.
ERIC Educational Resources Information Center
Jones, Dennis; And Others
Presented here is the detailed set of data structures relevant to the categorization of information, together with the terminology and definitions employed in the design of the library statistical data base. The data base, or management information system, provides administrators with a framework of information and standardized data for library management, planning,…
A survey of nursing documentation, terminologies and standards in European countries
Thoroddsen, Asta; Ehrenberg, Anna; Sermeus, Walter; Saranto, Kaija
2012-01-01
A survey was carried out to describe the current state of the art in the use of nursing documentation, terminologies, standards and education. Key informants in European countries were targeted by the Association for Common European Nursing Diagnoses, Interventions and Outcomes (ACENDIO). Replies were received from key informants in 20 European countries. Results show that the nursing process was most often used to structure nursing documentation. Many standardized nursing terminologies were used in Europe, with NANDA, NIC, NOC and ICF most frequently used. In 70% of the countries, minimum requirements were available for electronic health records (EHRs), but nursing was not addressed specifically. Standards in use for nursing terminologies and information systems were lacking. The results should be a major concern to the nursing community in Europe. As a European platform, ACENDIO can play a role in enhancing standardization activities, and should develop its role accordingly. PMID:24199130
The Agent of extracting Internet Information with Lead Order
NASA Astrophysics Data System (ADS)
Mo, Zan; Huang, Chuliang; Liu, Aijun
In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information caused by the non-standard and inconsistent structure of Chinese websites. The agent includes three modules, each corresponding to one step of the extraction process. An HTTP-tree method and a Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to transform the extracted natural-language information into a structured form is also discussed.
Learn about the NESHAP regulation for brick and structural clay products by reading the rule summary, rule history, the Code of Federal Regulations, and additional resources such as fact sheets and background information documents.
Classification of Chemicals Based On Structured Toxicity Information
Thirty years and millions of dollars worth of pesticide registration toxicity studies, historically stored as hardcopy and scanned documents, have been digitized into highly standardized and structured toxicity data within the Toxicity Reference Database (ToxRefDB). Toxicity-bas...
NASA Astrophysics Data System (ADS)
Nishizawa, Atsushi; Namikawa, Toshiya; Taruya, Atsushi
2016-03-01
Gravitational waves (GWs) from compact binary stars at cosmological distances are promising and powerful cosmological probes, referred to as GW standard sirens. With future GW detectors, we will be able to precisely measure source luminosity distances out to a redshift z ~ 5. To extract cosmological information, previous studies using the GW standard sirens rely on source redshift information obtained through an extensive electromagnetic follow-up campaign. However, the redshift identification is typically time-consuming and rather challenging. Here we propose a novel method for cosmology with the GW standard sirens that is free from redshift measurements. Utilizing the anisotropies of the number density and luminosity distances of compact binaries originating from the large-scale structure, we show that (i) these anisotropies can be measured even at very high redshifts (z = 2), (ii) the expected constraints on primordial non-Gaussianity with the Einstein Telescope would be comparable to or even better than those from other large-scale structure probes at the same epoch, and (iii) the cross-correlation with other cosmological observations is found to have high statistical significance. A.N. was supported by JSPS Postdoctoral Fellowships for Research Abroad No. 25-180.
Assessment of and standardization for quantitative nondestructive test
NASA Technical Reports Server (NTRS)
Neuschaefer, R. W.; Beal, J. B.
1972-01-01
Present capabilities and limitations of nondestructive testing (NDT) as applied to aerospace structures during design, development, production, and operational phases are assessed. The assessment helps determine what useful quantitative and qualitative structural data may be provided, from raw materials to vehicle refurbishment. It considers metal alloy systems and bonded composites presently applied in active NASA programs or strong contenders for future use. Quantitative and qualitative data have been summarized from recent literature and in-house information, and are presented along with a description of the structures or standards from which the information was obtained. Examples, in tabular form, of NDT technique capabilities and limitations are provided. The NDT techniques discussed and assessed were radiography, ultrasonics, penetrants, thermal, acoustic, and electromagnetic. Quantitative data are sparse; therefore, obtaining statistically reliable flaw detection data must be strongly emphasized. The new requirements for reusable space vehicles have resulted in highly efficient design concepts operating in severe environments. This increases the need for quantitative NDT evaluation of selected structural components, the end-item structure, and refurbishment operations.
Using normalization 3D model for automatic clinical brain quantative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important to improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the manual, slice-by-slice drawing of ROIs from structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results but can also improve the processing rate for large volumes of medical images in clinical practice.
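A minimal sketch of the mutual-information similarity measure that drives such registration, computed from a joint histogram with NumPy (the bin count and the random test images are illustrative assumptions; a real registration loop would transform one image to maximize this score):

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Histogram-based mutual information between two equally shaped images (in nats)."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()            # joint probability p(a, b)
        px = pxy.sum(axis=1, keepdims=True)  # marginal p(a)
        py = pxy.sum(axis=0, keepdims=True)  # marginal p(b)
        nz = pxy > 0                         # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)
    fixed = rng.random((64, 64))
    print(mutual_information(fixed, fixed))                  # self-MI is maximal
    print(mutual_information(fixed, rng.random((64, 64))))   # near zero for unrelated images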
Karim, Sulafa; Fegeler, Christian; Boeckler, Dittmar; H Schwartz, Lawrence; Kauczor, Hans-Ulrich
2013-01-01
Background The majority of radiological reports lack a standard structure. Even within a specialized area of radiology, each report has its own structure with regard to detail and order, often containing non-relevant information that the referring physician is not interested in. For gathering relevant clinical key parameters efficiently, or to support long-term therapy monitoring, structured reporting might be advantageous. Objective Despite new technologies in medical information systems, medical reporting is still not dynamic. To improve the quality of communication in radiology reports, a new structured reporting system was developed for abdominal aortic aneurysms (AAA), intended to enhance professional communication by providing the pertinent clinical information in a predefined standard. Methods An actual-state analysis was performed within the departments of radiology and vascular surgery by developing a Technology Acceptance Model. The SWOT (strengths, weaknesses, opportunities, and threats) analysis focused on optimizing the radiology reporting of patients with AAA. Clinical parameters were defined by interviewing experienced clinicians in radiology and vascular surgery. For evaluation, a focus group (4 radiologists) reviewed the reports of 16 patients. The usability and reliability of the method was validated in a real-world test environment in the field of radiology. Results A Web-based application for radiological "structured reporting" (SR) was successfully standardized for AAA. Its organization comprises three main categories: characteristics of pathology and adjacent anatomy, measurements, and additional findings. Different graphical widgets (eg, drop-down menus) in each category facilitate predefined data entry. Measurement parameters shown in a diagram can be defined for clinical monitoring and drawn on for quick adjudication. Figures for optional use to guide and standardize the reporting are embedded. Analysis of variance shows that the average time required to obtain a radiological report decreased with SR compared to free-text reporting (P=.0001). Questionnaire responses confirm a high acceptance rate by users. Conclusions The new SR system may support efficient radiological reporting for initial diagnosis and follow-up of AAA. Perceived advantages of our SR platform include ease of use, which may lead to more accurate decision support. The new system is open to communicate not only with clinical partners but also with Radiology Information Systems and Hospital Information Systems. PMID:23956062
CMMI(Registered) for Services, Version 1.3
2010-11-01
[ISO 2008b] ISO/IEC 27001:2005 Information Technology – Security Techniques – Information Security Management Systems – Requirements [ISO/IEC 2005]... Commission. ISO/IEC 27001 Information Technology – Security Techniques – Information Security Management Systems – Requirements, 2005. http... CMM or International Organization for Standardization (ISO) 9001, you will immediately recognize many similarities in their structure and content
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2014-01-01
The Global Change Information System (GCIS) provides a framework for the formal representation of structured metadata about data and information about global change. The pilot deployment of the system supports the National Climate Assessment (NCA), a major report of the U.S. Global Change Research Program (USGCRP). A consumer of that report can use the system to browse and explore its supporting information. Additionally, by capturing that information in a structured data model and presenting it in standard formats through well-defined open interfaces, including query interfaces suitable for data mining and linking with other databases, the information becomes valuable for other analytic uses as well.
Kreimeyer, Kory; Foster, Matthew; Pandey, Abhishek; Arya, Nina; Halford, Gwendolyn; Jones, Sandra F; Forshee, Richard; Walderhaug, Mark; Botsis, Taxiarchis
2017-09-01
We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP.
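The review flags concept normalization as an open challenge; as a toy illustration of the basic lookup step, here is a hedged sketch that maps free-text mentions to standard codes by string normalization against a small table (the surface forms and codes are invented examples, not a real terminology service):

    import re

    # Invented mini-terminology: surface forms -> (standard term, illustrative code)
    TERMINOLOGY = {
        "myocardial infarction": ("Myocardial infarction", "C0027051"),
        "heart attack":          ("Myocardial infarction", "C0027051"),
        "hypertension":          ("Hypertensive disease",  "C0020538"),
        "high blood pressure":   ("Hypertensive disease",  "C0020538"),
    }

    def normalize(mention: str):
        """Lowercase, collapse whitespace, then look up the standard concept."""
        key = re.sub(r"\s+", " ", mention.strip().lower())
        return TERMINOLOGY.get(key)

    print(normalize("Heart   Attack"))       # -> ('Myocardial infarction', 'C0027051')
    print(normalize("shortness of breath"))  # -> None (unmapped mention)

Real systems add fuzzy matching, abbreviation expansion, and context handling on top of this lookup core.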
Structured Learning Teams: Reimagining Student Group Work
ERIC Educational Resources Information Center
Lendvay, Gregory C.
2014-01-01
Even in a standards-based curriculum, teachers can apply constructivist practices such as structured learning teams. In this environment, students become invested in the learning aims, triggering the desire in students to awaken, get information, interpret, remix, share, and design scenarios.
Developing an Information Resources Management Curriculum.
ERIC Educational Resources Information Center
Montie, Irene C.
1983-01-01
Discusses the development of an Information Resources Management (IRM) curriculum by the IRM Curriculum Advisory Committee established by the Graduate School, United States Department of Agriculture. Initial activities, models proposed for the program (standards, skills, users, operational), course selection, and structural proposals considered…
Using building information modeling to track and assess the structural condition of bridges.
DOT National Transportation Integrated Search
2016-08-01
National Bridge Inspection Standards do not require documenting damage locations during an inspection, but bridge evaluation provisions highlight the importance of it. When determining a safe load-carrying capacity of a bridge, damage location inform...
A global, open-source database of flood protection standards
NASA Astrophysics Data System (ADS)
Scussolini, Paolo; Aerts, Jeroen; Jongman, Brenden; Bouwer, Laurens; Winsemius, Hessel; de Moel, Hans; Ward, Philip
2016-04-01
Accurate flood risk estimation is pivotal because it enables risk-informed policies in disaster risk reduction, as emphasized in the recent Sendai Framework for Disaster Risk Reduction. To improve our understanding of flood risk, models are now capable of providing actionable risk information on the (sub)global scale. Still, the accuracy of their results is greatly limited by the lack of information on the flood protection standards actually in place, and researchers are thus forced to make broad assumptions about the extent of protection. With our work we propose a first global, open-source database of FLOod PROtection Standards, FLOPROS, covering a range of spatial scales. FLOPROS is structured in three layers of information, which are merged into one consistent database: 1) the Design layer contains empirical information about the standard of protection presently in place; 2) the Policy layer contains intended protection standards from normative documents; 3) the Model layer uses a validated numerical approach to calculate protection standards for areas not covered by the other layers. The FLOPROS database can be used for more accurate risk assessment exercises across scales. As the database should be continually updated to reflect new interventions, we invite researchers and practitioners to contribute information. Further, we are looking for partners within the risk community to pursue additional strategies for improving the amount and accuracy of information contained in this first version of FLOPROS.
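A minimal sketch of the layered-merge idea, assuming empirical (Design) information is preferred over intended (Policy), and Policy over modeled (Model), when several layers cover the same region; the precedence rule and the sample data are my assumptions, not values from the database:

    # Hedged sketch: merge three information layers into one protection
    # standard per region, preferring design over policy over model.
    LAYER_PRECEDENCE = ("design", "policy", "model")

    regions = {
        "region_a": {"design": 100, "policy": 250, "model": 180},  # return periods (years)
        "region_b": {"policy": 50, "model": 75},
        "region_c": {"model": 20},
    }

    def merged_standard(layers: dict):
        """Return the protection standard from the highest-precedence layer present."""
        for layer in LAYER_PRECEDENCE:
            if layer in layers:
                return layers[layer], layer
        return None, None

    for region, layers in regions.items():
        value, source = merged_standard(layers)
        print(f"{region}: {value}-year standard (from {source} layer)")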
Usability of HL7 and SNOMED CT standards in Java Persistence API environment.
Antal, Gábor; Végh, Ádám Zoltán; Bilicki, Vilmos
2014-01-01
Due to the need for an efficient way of communication between the different stakeholders of healthcare (e.g., doctors, pharmacists, hospitals, patients), the need to integrate different healthcare systems arises. However, during the integration process several problems of heterogeneity may come up, which can turn integration into a difficult task. These problems motivated the development of healthcare information standards. The main goal of the HL7 family of standards is the standardization of communication between clinical systems and the unification of clinical document formats on the structural level. The SNOMED CT standard aims at unifying healthcare terminology, i.e., providing a standard on the lexical level. The goal of this article is to introduce the usability of these two standards in a Java Persistence API (JPA) environment and to examine how standard-based system components can be efficiently generated. First, we briefly introduce the structure of the standards and their advantages and disadvantages. Then, we present an architecture design method that can help eliminate possible structural drawbacks of the standards and makes code-generation tools applicable for the automatic production of certain system components.
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
Analysis of good practice of public health Emergency Operations Centers.
Xu, Min; Li, Shi-Xue
2015-08-01
To study public health Emergency Operations Centers (EOCs) in the US, the European Union, the UK, and Australia, and to summarize good practice for the improvement of the National Health Emergency Response Command Center in the Chinese National Health and Family Planning Commission, a literature review was conducted of the EOCs of the selected countries. The study focused on EOC function, organizational structure, human resources, and information management. The selected EOCs had the basic EOC functions of coordination and command as well as public-health-related functions such as monitoring the situation, risk assessment, and epidemiological briefings. The organizational structures of the EOCs were standardized, scalable, and flexible; the Incident Command System was the most widely applied organizational structure. The EOCs were managed by an emergency management unit during routine periods, and surge staff were engaged during emergencies. The selected EOCs had clear information management frameworks covering information collection, assessment, and dissemination. The performance of the National Health Emergency Response Command Center can be improved by learning from the good practice of the selected EOCs: setting clear functions, standardizing the organizational structure, enhancing human resource capacity, and strengthening information management.
Technical Data Interoperability (TDI) Pathfinder Via Emerging Standards
NASA Technical Reports Server (NTRS)
Conroy, Mike; Gill, Paul; Hill, Bradley; Ibach, Brandon; Jones, Corey; Ungar, David; Barch, Jeffrey; Ingalls, John; Jacoby, Joseph; Manning, Josh;
2014-01-01
The Technical Data Interoperability (TDI) project investigates trending technical data standards for applicability to NASA vehicles, space stations, payloads, facilities, and equipment. TDI tested COTS software compatible with a suite of related industry standards, evaluating both individual capabilities and interoperability. These standards not only enable Information Technology (IT) efficiencies but also address efficient structures and standard content for business processes. We used source data from generic industry samples as well as NASA and European Space Agency (ESA) data from space systems.
Data Mining of Macromolecular Structures.
van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P
2016-01-01
The use of macromolecular structures is widespread across a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques to these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years into the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effects that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.
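A hedged sketch of the kind of informed selection the chapter advocates: filter candidate entries on crystallographic quality indicators before mining. The thresholds are common rules of thumb and the in-memory records are invented; they are not values prescribed by the chapter:

    # Illustrative quality filter for X-ray structure models.
    entries = [
        {"pdb_id": "1ABC", "resolution": 1.6, "r_free": 0.19},
        {"pdb_id": "2DEF", "resolution": 3.4, "r_free": 0.31},
        {"pdb_id": "3GHI", "resolution": 2.1, "r_free": 0.24},
    ]

    def suitable_for_mining(entry, max_res=2.5, max_rfree=0.26):
        """Keep only models with acceptable resolution and R-free (assumed cutoffs)."""
        return entry["resolution"] <= max_res and entry["r_free"] <= max_rfree

    selected = [e["pdb_id"] for e in entries if suitable_for_mining(e)]
    print(selected)  # ['1ABC', '3GHI']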
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure, designed to elucidate SAR patterns produced by groups of structurally related active compounds extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
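The matrix format can be pictured as a core-scaffold by substituent table of activities. A minimal pandas sketch with invented compounds follows; the real method's matrix generation and SAR-information sorting are considerably more involved:

    import pandas as pd

    # Invented compound series: shared core scaffolds with varying R-groups.
    data = pd.DataFrame({
        "scaffold": ["A", "A", "A", "B", "B"],
        "r_group":  ["-H", "-Cl", "-OMe", "-H", "-Cl"],
        "pIC50":    [6.1, 7.4, 6.8, 5.2, 6.9],
    })

    # Rows = scaffolds, columns = substituents, cells = activity: a mini SAR matrix.
    sar_matrix = data.pivot_table(index="scaffold", columns="r_group", values="pIC50")
    print(sar_matrix)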
The Acceptability and Representativeness of Standardized Parent-Child Interaction Tasks
ERIC Educational Resources Information Center
Rhule, Dana M.; McMahon, Robert J.; Vando, Jessica
2009-01-01
Analogue behavioral observation of structured parent-child interactions has often been used to obtain a standardized, unbiased measure of child noncompliance and parenting behavior. However, for assessment information to be clinically relevant, it is essential that the behavior observed be similar to that which the child normally experiences and…
75 FR 17590 - Federal Motor Vehicle Safety Standards; Roof Crush Resistance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... FURTHER INFORMATION CONTACT: For non-legal issues, you may call Christopher J. Wiacek, NHTSA Office of Crashworthiness Standards, telephone 202-366-4801. For legal issues, you may call J. Edward Glancy, NHTSA Office... assemblage consisting, at a minimum, of chassis (including the frame) structure, power train, steering system...
24 CFR Appendix II to Subpart C of... - Development of Standards; Calculation Methods
Code of Federal Regulations, 2012 CFR
2012-04-01
...; Calculation Methods I. Background Information Concerning the Standards (a) Thermal Radiation: (1) Introduction... and structures in the event of fire. The resulting fireball emits thermal radiation which is absorbed... radiation being emitted. The radiation can cause severe burns, injuries, and even death to exposed persons...
Implementing the HL7v3 standard in Croatian primary healthcare domain.
Koncar, Miroslav
2004-01-01
The mission of HL7 Inc. is to provide standards for the exchange, management, and integration of data that support clinical patient care and the management, delivery, and evaluation of healthcare services. The scope of this work includes the specification of flexible, cost-effective approaches, standards, guidelines, methodologies, and related services for interoperability between healthcare information systems. In the field of medical information technologies, HL7 provides the world's most advanced information standards. Versions 1 and 2 of the HL7 standard have on the one hand solved many issues, but on the other demonstrated the size and complexity of the health information sharing problem. As the solution, a completely new methodology has been adopted, encompassed in the version 3 recommendations. This approach standardizes the Reference Information Model (RIM), which is the source of all domain models and message structures. Message design is now defined in detail, enabling interoperability between loosely coupled systems that are designed by different vendors and deployed in various environments. At the start of the Primary Healthcare Information System project, we decided to go directly to HL7v3. Implementing the HL7v3 standard in healthcare applications represents a challenging task. By using standardized refinement and localization methods we were able to define information models for the Croatian primary healthcare domain. The scope of our work includes clinical, financial, and administrative data management, where in some cases we were compelled to introduce new HL7v3-compliant models. All of the HL7v3 transactions are digitally signed, using the W3C XML Digital Signature standard.
Weininger, Sandy; Jaffe, Michael B.; Goldman, Julian M
2016-01-01
Medical device and health information technology systems are increasingly interdependent, with users demanding increased interoperability. Related safety standards must be developed taking this systems perspective into account. In this article we describe the current development of medical device standards and the need for these standards to address medical device informatics. Medical device information should be gathered from a broad range of clinical scenarios to lay the foundation for safe medical device interoperability. Five clinical examples show how medical device informatics principles, if applied in the development of medical device standards, could help facilitate the development of safe interoperable medical device systems. These examples illustrate the clinical implications of the failure to capture important signals and device attributes. We provide recommendations relating to the coordination between historically separate standards development groups, some of which focus on safety and effectiveness and others on health informatics. We identify the need for a shared understanding among stakeholders and describe organizational structures to promote cooperation such that device-to-device interactions and related safety information are considered during standards development. PMID:27584685
Clinical data exchange standards and vocabularies for messages.
Huff, S. M.
1998-01-01
Motivation for the creation of electronic data interchange (message) standards is discussed. The ISO Open Systems Interconnection (OSI) model is described. Clinical information models, message syntax and structure, and the need for a standardized coded vocabulary are explained. The HIPAA legislation and subsequent HHS transaction recommendations are reviewed. The history and mission statements of six of the most popular message development organizations (MDOs) are summarized, and the data exchange standards developed by these organizations are listed. The organizations described include Health Level Seven (HL7), the American Society for Testing and Materials (ASTM) E31, Digital Imaging and Communications in Medicine (DICOM), the European Committee for Standardization (Comité Européen de Normalisation) Technical Committee for Health Informatics (CEN/TC 251), the National Council for Prescription Drug Programs (NCPDP), and the Accredited Standards Committee X12 Insurance Subcommittee (X12N). The locations of Internet web sites for the six organizations are provided as resources for further information. PMID:9929183
Urbonas, Gvidas; Kubilienė, Loreta; Kubilius, Raimondas; Urbonienė, Aušra
2015-03-01
As a member of a pharmacy organization, a pharmacist is not only bound to fulfill his/her professional obligations but is also affected by different personal and organizational factors that may influence his/her behavior and, consequently, the quality of the services he/she provides to patients. The main purpose of the research was to test a hypothesized model of the relationships among several organizational variables and to investigate whether any of these variables affects the provision of medication information at community pharmacies. During the survey, pharmacists working at community pharmacies in Lithuania were asked to express their opinions of the community pharmacies at which they worked and to reflect on their actions when providing information on medicines to their patients. The statistical data were analyzed by applying a structural equation modeling technique to test the hypothesized model of the relationships among the variables of Perceived Organizational Support, Organizational Commitment, Turnover Intention, and Provision of Medication Information. The final model revealed that Organizational Commitment had a positive direct effect on Provision of Medication Information (standardized estimate = 0.27) and a negative direct effect (standardized estimate = -0.66) on Turnover Intention. Organizational Commitment mediated the indirect effects of Perceived Organizational Support on Turnover Intention (standardized estimate = -0.48) and on Provision of Medication Information (standardized estimate = 0.20). Pharmacists' Turnover Intention had no significant effect on Provision of Medication Information. Community pharmacies appear, to some extent, to encourage the provision of medication information. Pharmacists who felt higher levels of support from their organizations also expressed, to a certain extent, higher commitment to their organizations by providing more consistent medication information to patients. However, the effect of organizational variables on the variable of Provision of Medication Information appeared to be limited.
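The reported indirect effects follow the usual product-of-paths rule for mediation. As a consistency check (the 0.73 value is inferred from the reported estimates, not stated in the abstract), an implied path of about 0.73 from Perceived Organizational Support to Organizational Commitment reproduces both indirect effects:

    \text{indirect}(X \to M \to Y) = a \times b: \qquad
    0.73 \times 0.27 \approx 0.20, \qquad
    0.73 \times (-0.66) \approx -0.48.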
Minimum Information about a Genotyping Experiment (MIGEN)
Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.
2011-01-01
Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions interrogated using PCR reactions to high-throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers from thousands of individuals, requiring various statistical, modeling, or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control, and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics, and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, facilitating the exchange of metadata. PMID:22180825
Human action recognition with group lasso regularized-support vector machine
NASA Astrophysics Data System (ADS)
Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei
2016-05-01
The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for both. We identify two kinds of group structure in the feature representations constructed by BOVW and the Fisher kernel, respectively; the structural information of a feature representation can serve as a prior for the classifier and improve its performance, as has been verified in several areas. However, the standard SVM employs L2-norm regularization in its learning procedure, which penalizes each variable individually and cannot express the structural information of the feature representation. We replace the L2-norm regularization in the standard SVM with group lasso regularization, yielding a group lasso regularized support vector machine (GLRSVM). We then embed the group structural information of the feature representation into the GLRSVM. Finally, we introduce an algorithm that solves the optimization problem of the GLRSVM by the alternating direction method of multipliers (ADMM). Experiments on the KTH, YouTube, and Hollywood2 datasets show that our method achieves promising results and improves on state-of-the-art methods on the KTH and YouTube datasets.
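In the usual formulation (standard definitions, not copied from the paper), the GLRSVM objective replaces the squared L2 penalty of the standard SVM with a group lasso penalty over predefined groups G:

    \min_{w,\,b}\;\; C\sum_{i=1}^{N}\max\bigl(0,\; 1 - y_i (w^{\top}x_i + b)\bigr)
    \;+\; \lambda \sum_{g \in G} \sqrt{d_g}\,\lVert w_g \rVert_2,

where w_g is the sub-vector of weights belonging to group g and d_g is its size. Because the group norm is not squared, it is non-smooth at w_g = 0 and can zero out whole groups; this is what encodes the group structure and why an ADMM-style solver is natural.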
CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON THE INTERNET: MOVING TOWARDS A FLAT WORLD
Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives th...
Architecture for WSN Nodes Integration in Context Aware Systems Using Semantic Messages
NASA Astrophysics Data System (ADS)
Larizgoitia, Iker; Muguira, Leire; Vazquez, Juan Ignacio
Wireless sensor networks (WSNs) are becoming extremely popular in the development of context-aware systems. Traditionally, WSNs have focused on capturing data, which was later analyzed and interpreted on a server with more computational power. In this kind of scenario, the problem of representing the sensor information needs to be addressed. Every node in the network might have different sensors attached; therefore their corresponding packet structures will differ. The server has to be aware of the meaning of every single structure and datum in order to interpret them. Multiple sensors, multiple nodes, and multiple packet structures that follow no standard format are neither scalable nor interoperable. Context-aware systems have solved this problem with the use of semantic technologies, which provide a common framework for achieving a standard definition of any domain. Nevertheless, these representations are computationally expensive, so a WSN cannot afford them. The work presented in this paper tries to bridge the gap between sensor information and its semantic representation by defining a simple architecture that enables this information to be defined natively in a semantic way, integrating the semantic information into the network packets. This has several benefits, the most important being the possibility of promoting every WSN node to a true semantic information source.
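A minimal sketch of the idea of packing semantic (subject, predicate, value) sensor statements into compact packets by dictionary-coding the vocabulary; the vocabulary codes, terms, and packet layout are all illustrative assumptions, not the paper's architecture:

    import struct

    # Illustrative shared vocabulary: full semantic terms shrink to 1-byte codes
    # so constrained WSN nodes can emit semantic packets natively.
    VOCAB = {"node7": 1, "hasTemperature": 2, "hasHumidity": 3}
    REVERSE = {v: k for k, v in VOCAB.items()}

    def encode(subject, predicate, value):
        """Pack one (subject, predicate, float value) statement into 6 bytes."""
        return struct.pack("!BBf", VOCAB[subject], VOCAB[predicate], value)

    def decode(packet):
        s, p, v = struct.unpack("!BBf", packet)
        return REVERSE[s], REVERSE[p], v

    pkt = encode("node7", "hasTemperature", 21.5)
    print(len(pkt), decode(pkt))  # 6 ('node7', 'hasTemperature', 21.5)

The server can expand these codes back into full semantic statements, so interpretation no longer depends on per-node packet formats.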
Semantic Technologies for Re-Use of Clinical Routine Data.
Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan
2017-01-01
Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.
Problem of unity of measurements in ensuring safety of hydraulic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kheifits, V.Z.; Markov, A.I.; Braitsev, V.V.
1994-07-01
Ensuring the safety of hydraulic structures (HSs) is not only an industry but also a national and global concern, since failure of large water-impounding structures can entail large losses of life and enormous material losses related to destruction downstream. The main information on the degree of safety of a structure is obtained by comparing information about its actual state, obtained from measurements in key zones of the structure, with the state predicted by the design model used when designing the structure for the given conditions of external actions. Numerous string-type transducers, from hundreds to thousands, are placed in large HSs. This system of transducers monitors the stress-strain, seepage, and thermal regimes. These measurements are supported by the State Standards Committee, which certifies the accuracy of the checking methods. To improve the instrumental monitoring of HSs, the author recommends: calibration of methods and means of reliable diagnosis for each measuring channel in the HS, improvements to reduce measurement error, support for the system software programs, and development of appropriate standards for the design and examination of HSs.
Chou, Ann F; Yano, Elizabeth M; McCoy, Kimberly D; Willis, Deanna R; Doebbeling, Bradley N
2008-01-01
To address increases in the incidence of infection with antimicrobial-resistant pathogens, the National Foundation for Infectious Diseases and the Centers for Disease Control and Prevention proposed two sets of strategies to (a) optimize antibiotic use and (b) prevent the spread of antimicrobial resistance and control transmission. However, little is known about the implementation of these strategies. Our objective is to explore the organizational structural and process factors that facilitate the implementation of National Foundation for Infectious Diseases/Centers for Disease Control and Prevention strategies in U.S. hospitals. We surveyed 448 infection control professionals from a national sample of hospitals. Clinically anchored in the Donabedian model, which defines quality in terms of structural and process factors, with the structural domain further informed by a contingency approach, we modeled the degree to which National Foundation for Infectious Diseases and Centers for Disease Control and Prevention strategies were implemented as a function of formalization and standardization of protocols, centralization of the decision-making hierarchy, information technology capabilities, culture, communication mechanisms, and interdepartmental coordination, controlling for hospital characteristics. Formalization, standardization, centralization, institutional culture, provider-management communication, and information technology use were associated with optimal antibiotic use and enhanced implementation of strategies that prevent and control the spread of antimicrobial resistance (all p < .001). However, interdepartmental coordination for patient care was inversely related to antibiotic use, in contrast to antimicrobial resistance spread prevention and control (p < .0001). Formalization and standardization may eliminate staff role conflict, whereas centralized authority may minimize ambiguity. Culture and communication likely promote internal trust, whereas information technology use helps integrate and support these organizational processes. These findings suggest concrete strategies for evaluating current capabilities to implement effective practices and for fostering and sustaining a culture of patient safety.
Jacques, David A; Guss, Jules Mitchell; Trewhella, Jill
2012-05-17
Small-angle scattering is becoming an increasingly popular tool for the study of biomolecular structures in solution. The large number of publications presenting 3D structural models generated from small-angle solution scattering data has led to a growing consensus on the need for a standard reporting framework for their publication. The International Union of Crystallography recently established a set of guidelines for the information required for the publication of such structural models. Here we describe the rationale for these guidelines and the importance of standardising the way in which small-angle scattering data from biomolecules and the associated structural interpretations are reported.
A Converter from the Systems Biology Markup Language to the Synthetic Biology Open Language.
Nguyen, Tramy; Roehner, Nicholas; Zundel, Zach; Myers, Chris J
2016-06-17
Standards are important to synthetic biology because they enable exchange and reproducibility of genetic designs. This paper describes a procedure for converting between two standards: the Systems Biology Markup Language (SBML) and the Synthetic Biology Open Language (SBOL). SBML is a standard for behavioral models of biological systems at the molecular level. SBOL describes structural and basic qualitative behavioral aspects of a biological design. Converting SBML to SBOL enables a consistent connection between behavioral and structural information for a biological design. The conversion process described in this paper leverages Systems Biology Ontology (SBO) annotations to enable inference of a design's qualitative function.
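A hedged sketch of the SBO-annotation step with python-libsbml; the SBO-to-role mapping shown is invented for illustration, and the converter's real inference rules are richer:

    import libsbml  # pip install python-libsbml

    # Invented illustrative mapping from SBO terms to SBOL-like role labels.
    SBO_TO_ROLE = {
        "SBO:0000252": "protein",       # polypeptide chain
        "SBO:0000251": "nucleic-acid",
    }

    doc = libsbml.readSBML("model.xml")  # file path is a placeholder
    model = doc.getModel()
    if model is not None:
        for i in range(model.getNumSpecies()):
            species = model.getSpecies(i)
            sbo = species.getSBOTermID()  # e.g. "SBO:0000252" if annotated
            role = SBO_TO_ROLE.get(sbo, "unknown")
            print(species.getId(), sbo, "->", role)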
NASA Astrophysics Data System (ADS)
Hills, S. J.; Richard, S. M.; Doniger, A.; Danko, D. M.; Derenthal, L.; Energistics Metadata Work Group
2011-12-01
A diverse group of organizations representative of the international community involved in disciplines relevant to the upstream petroleum industry (energy companies; suppliers and publishers of information to the energy industry; vendors of software applications used by the industry; and partner government and academic organizations) has engaged in the Energy Industry Metadata Standards Initiative. This Initiative envisions the use of standard metadata within the community to enable significant improvements in the efficiency with which users discover, evaluate, and access distributed information resources. The metadata standard needed to realize this vision is the initiative's primary deliverable. In addition to developing the metadata standard, the initiative is promoting its adoption to accelerate realization of the vision and publishing metadata exemplars conformant with the standard. Implementation of the standard by community members, in the form of published metadata documenting the information resources each organization manages, will allow the use of tools requiring consistent metadata for efficient discovery and evaluation of, and access to, information resources. While metadata are expected to be widely accessible, access to the associated information resources may be more constrained. The initiative is being conducted by Energistics' Metadata Work Group, in collaboration with the USGIN Project. Energistics is a global standards group in the oil and natural gas industry. The Work Group determined early in the initiative, based on input solicited from 40+ organizations and on an assessment of existing metadata standards, to develop the target metadata standard as a profile of a revised version of ISO 19115, formally the "Energy Industry Profile of ISO/DIS 19115-1 v1.0" (EIP). The Work Group is participating on the ISO/TC 211 project team responsible for the revision of ISO 19115, now ready for "Draft International Standard" (DIS) status. With ISO 19115 an established, capability-rich, open standard for geographic metadata, EIP v1 is expected to be widely acceptable within the community and readily sustainable over the long term. The EIP design, also per community requirements, will enable discovery, evaluation, and access for the types of information resources considered important to the community, including structured and unstructured digital resources and physical assets such as hardcopy documents and material samples. This presentation will briefly review the development of this initiative as well as current and planned Work Group activities. More time will be spent providing an overview of EIP v1, including the requirements it prescribes, the design efforts made to enable automated metadata capture and processing, and the structure and content of its documentation, which was written to minimize ambiguity and facilitate implementation. The Work Group considers EIP v1 a solid initial design for interoperable metadata and a first step toward the vision of the Initiative.
The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science
ERIC Educational Resources Information Center
Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.
2017-01-01
A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…
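For context, a structure coefficient is simply the correlation between a predictor and the model-predicted scores, which reduces to a textbook identity (not taken from the article):

    r_s = r_{X_j \hat{Y}} = \frac{r_{X_j Y}}{R},

where R is the multiple correlation of the full model. Unlike a β weight, r_s does not shrink or flip sign merely because predictors are collinear, which is the property that makes it useful when multicollinearity is present.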
International Energy: Subject Thesaurus. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The International Energy Agency Subject Thesaurus contains the standard vocabulary of indexing terms (descriptors) developed and structured to build and maintain energy information databases. Involved in this cooperative task are (1) the technical staff of the USDOE Office of Scientific and Technical Information (OSTI), in cooperation with the member countries of the International Energy Agency's Energy Technology Data Exchange (ETDE), and (2) the International Atomic Energy Agency's International Nuclear Information System (INIS) staff, representing the more than 100 countries and organizations that record and index information for the international nuclear information community. ETDE member countries are also members of INIS. Nuclear information prepared for INIS by ETDE member countries is included in the ETDE Energy Database, which contains the online equivalent of the printed INIS Atomindex. Indexing terminology is therefore cooperatively standardized for use in both information systems. This structured vocabulary reflects the scope of international energy research, development, and technological programs. The terminology of this thesaurus aids subject searching on commercial systems, such as "Energy Science & Technology" by DIALOG Information Services, "Energy" by STN International, and the "ETDE Energy Database" by SilverPlatter. It is also the thesaurus for the Integrated Technical Information System (ITIS) online databases of the US Department of Energy.
Standard methods for sampling North American freshwater fishes
Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.
2009-01-01
This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing coldwater and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. The book provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent the transfer of invasive species while sampling.
Hoelzer, Simon; Schweiger, Ralf K.; Dudeck, Joachim
2003-01-01
With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813
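A minimal sketch of such a document-oriented representation using Python's standard xml.etree; the element names and the two sample codes are illustrative, not the authors' schema:

    import xml.etree.ElementTree as ET

    # Illustrative hierarchical fragment in the spirit of an ICD-10 XML encoding.
    xml_doc = """
    <chapter code="IX" title="Diseases of the circulatory system">
      <block code="I10-I15" title="Hypertensive diseases">
        <category code="I10" title="Essential (primary) hypertension"/>
        <category code="I11" title="Hypertensive heart disease"/>
      </block>
    </chapter>
    """

    root = ET.fromstring(xml_doc)
    # The hierarchy stays queryable: list each code together with its ancestry.
    for block in root.iter("block"):
        for cat in block.iter("category"):
            print(f'{cat.get("code")}: {cat.get("title")} '
                  f'(block {block.get("code")}, chapter {root.get("code")})')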
Thirty years and over a billion of today’s dollars worth of pesticide registration toxicity studies, historically stored as hardcopy and scanned documents, have been digitized into highly standardized and structured toxicity data, within the U.S. Environmental Protection Agency’s...
The structure of airplane fabrics
NASA Technical Reports Server (NTRS)
Walen, E Dean
1920-01-01
This report prepared by the Bureau of Standards for the National Advisory Committee for Aeronautics supplies the necessary information regarding the apparatus and methods of testing and inspecting airplane fabrics.
Lafortune, Claire; Elliott, Jacobi; Egan, Mary Y; Stolee, Paul
2017-04-01
While standardized health assessments capture valuable information on patients' demographic and diagnostic characteristics, health conditions, and physical and mental functioning, they may not capture information of most relevance to individual patients and their families. Given that patients and their informal caregivers are the experts on that patient's unique context, it is important to ensure they are able to convey all relevant personal information to formal healthcare providers so that high-quality, patient-centered care may be delivered. This study aims to identify information that older patients and families consider important but that might not be included in standardized assessments. Transcripts were analyzed from 29 interviews relating to eight patients with hip fractures from three sites (large urban, smaller urban, rural) in two provinces in Canada. These interviews were conducted as part of a larger ethnographic study. Each transcript was analyzed by two researchers using content analysis. Results were reviewed in two focus group interviews with older adults and family caregivers. Identified themes were compared with items from two standardized assessments used in healthcare settings. Three broad themes emerged from the qualitative analysis that were not covered in the standardized assessments: informal caregiver and family considerations, insider healthcare knowledge, and patients' healthcare attitudes and experiences. The importance of these themes was confirmed through focus group interviews. Focus group participants also emphasized the importance of conducting assessments in a patient-centered way and the importance of open-ended questions. A less structured interview approach may yield information that would otherwise be missed in standardized assessments. Combining both sources could yield better-informed healthcare planning and quality-improvement efforts.
Ehrlich, Matthias; Schüffny, René
2013-01-01
One of the major outcomes of neuroscientific research is models of Neural Network Structures (NNSs). Descriptions of these models usually consist of a non-standardized mixture of text, figures, and other means of visual information communication in print media. However, as neuroscience is an interdisciplinary domain by nature, a standardized way of consistently representing models of NNSs is required. While generic descriptions of such models in textual form have recently been developed, a formalized way of schematically expressing them does not exist to date. Hence, in this paper we present Neural Schematics, a concept inspired by similar approaches in other disciplines, for a generic two-dimensional representation of said structures. After introducing NNSs in general, a set of current visualizations of models of NNSs is reviewed and analyzed for what information they convey and how their elements are rendered. This analysis then allows for the definition of general items and symbols to consistently represent these models as Neural Schematics on a two-dimensional plane. We illustrate the possibilities an agreed-upon standard can yield with sample diagrams transformed into Neural Schematics and an example application for the design and modeling of large-scale NNSs.
Botsivaly, M.; Spyropoulos, B.; Koutsourakis, K.; Mertika, K.
2006-01-01
The purpose of this study is the presentation of a system appropriate for use upon the transition of a patient from hospital to homecare. The developed system is structured according to the ASTM E2369-05 Standard Specification for the Continuity of Care Record, and its function is based upon the creation of a structured subset of data containing the patient's most relevant clinical information, simultaneously enabling the planning and optimal documentation of the provided homecare. PMID:17238479
Pitman, Martha B; Black-Schaffer, W Stephen
2017-06-01
Communication between cytopathologists and patients and their care teams is a critical component of accurate and timely patient management. The single most important means of communication for the cytopathologist is the cytopathology report. Implementation of standardized terminology schemes and structured, templated reporting facilitates the cytopathologist's ability to provide a comprehensive and integrated report. Cytopathology has been among the pathology subspecialties leading the way in developing standardized reporting, beginning with the 1954 Papanicolaou classification scheme for cervical-vaginal cytology and continuing through the Bethesda systems for gynecological cytology and several nongynecological cytology systems. The effective reporting of cytopathology necessarily becomes more complex as it addresses increasingly sophisticated management options, requiring the integration of information from a broader range of sources. In addition to the complexity of information inputs, a wider spectrum of consumers of these reports is emerging, from patients themselves to primary care providers to subspecialized disease management experts. Both factors require that the reporting cytopathologist provide the integration and interpretation necessary to translate diverse forms of information into meaningful and actionable reports that inform the care team while enabling patients to meaningfully participate in their own care. Achieving such broad yet focused communication will require first the development of standardized and integrated reports, and ultimately the involvement of cytopathologists in the development of the clinical informatics needed to treat all these items of information as structured data elements with flexible reporting operators addressing the full range of patient and patient-care needs. Cancer Cytopathol 2017;125(6 suppl):486-93. © 2017 American Cancer Society.
Standards to support information systems integration in anatomic pathology.
Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A
2009-11-01
Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7). The objective of this work was to define standards-based informatics transactions to integrate anatomic pathology information into the healthcare enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process, including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standards-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow, and to integrate systems used for patient care with those used for research activities (such as tissue bank databases or tissue microarrayers).
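For a sense of what such a standards-based transaction carries, the hedged sketch below assembles an HL7 v2-style results message for a pathology report; the segment contents and codes are invented for illustration, and the actual IHE profile defines the precise message structure and vocabulary.

```python
# Illustrative HL7 v2-style message fragment of the kind exchanged in a
# pathology reporting transaction (all field contents are invented).
hl7_message = "\r".join([
    "MSH|^~\\&|LIS|PATHLAB|EHR|HOSPITAL|200911011200||ORU^R01|MSG0001|P|2.5",
    "PID|1||123456^^^HOSP^MR||Doe^Jane",
    "OBR|1||AP-2009-0042|88305^Surgical pathology^C4",
    "OBX|1|TX|22637-3^Path report^LN||Specimen: skin biopsy, left forearm",
])
# HL7 v2 uses carriage returns between segments; print with newlines to read.
print(hl7_message.replace("\r", "\n"))
```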
Industrial - Institutional - Structural and Health Related Pest Control Category Manual.
ERIC Educational Resources Information Center
Bowman, James S.; Turmel, Jon P.
This manual provides information needed to meet the standards for pesticide applicator certification. The emphasis of this document is on the identification of wood-destroying pests and the damage caused by them to the structural components of buildings. The pests discussed include termites, carpenter ants, beetles, bees, and wasps and numerous…
Commercial Pesticides Applicator Manual: Industrial, Institutional, Structural and Health Related.
ERIC Educational Resources Information Center
Fitzwater, William D.; Renes, Robert
This training manual provides information needed to meet the minimum EPA standards for certification as a commercial applicator of pesticides in the industrial, institutional, structural and health related pest control category. The text discusses the use and safety of applying pesticides to control invertebrate and vertebrate pests such as ants,…
Opposite Effects of Context on Immediate Structural and Lexical Processing.
ERIC Educational Resources Information Center
Harris, John W.
The testing of a number of hypotheses about the effect of hearing a prior context sentence on immediate processing of a subsequent target sentence is described. According to the standard deep structure model, higher level processing (e.g. semantic interpretation, integration of context-target information) does not occur immediately as speech is…
A Structured Approach to Homepage Design.
ERIC Educational Resources Information Center
Gregory, Gwen; Brown, M. Marlo
With no standards governing their creation, a variety of formats are being used for World Wide Web homepages. Some are well organized, present their information clearly, and work with multiple browsers. Others, however, are slow to load, function poorly with some Web browsing software, and are so badly structured that they are very difficult to…
Rhetorical Structures in Academic Research Writing by Non-Native Writers
ERIC Educational Resources Information Center
Suryani, Ina; Kamaruddin, H.; Hashima, Noor; Yaacob, Aizan; Rashid, Salleh Abd; Desa, Hazry
2014-01-01
Writers of research articles are expected to present research information in a structured manner by following certain rhetorical patterns determined by the discourse community. Failure to keep to the writing standard and rhetorical patterns is likely to lower the acceptance rate. While producing a research article is understandably a complex…
A Study of Information Systems Programs Accredited by ABET in Relation to IS 2010
ERIC Educational Resources Information Center
Feinstein, David; Longenecker, Herbert E., Jr.; Shrestha, Dina
2014-01-01
This article examines the relationship between ABET CAC standards for undergraduate programs of information systems and IS 2010 curriculum specifications. We reviewed current institution-described coursework that identifies course structures from accredited IS programs. The accredited programs all matched the expectations expressed in ABET…
Pomery, Amanda; Schofield, Penelope; Xhilaga, Miranda; Gough, Karla
2017-06-30
Across the globe, peer support groups have emerged as a community-led approach to accessing support and connecting with others with cancer experiences. Little is known about qualities required to lead a peer support group or how to determine suitability for the role. Organisations providing assistance to cancer support groups and their leaders are currently operating independently, without a standard national framework or published guidelines. This protocol describes the methods that will be used to generate pragmatic consensus-based minimum standards and an accessible structured interview with user manual to guide the selection and development of cancer support group leaders. We will: (A) identify and collate peer-reviewed literature that describes qualities of support group leaders through a systematic review; (B) content analyse eligible documents for information relevant to requisite knowledge, skills and attributes of group leaders generally and specifically to cancer support groups; (C) use an online reactive Delphi method with an interdisciplinary panel of experts to produce a clear, suitable, relevant and appropriate structured interview comprising a set of agreed questions with behaviourally anchored rating scales; (D) produce a user manual to facilitate standard delivery of the structured interview; (E) pilot the structured interview to improve clinical utility; and (F) field test the structured interview to develop a rational scoring model and provide a summary of existing group leader qualities. The study is approved by the Department Human Ethics Advisory Group of The University of Melbourne. The study is based on voluntary participation and informed written consent, with participants able to withdraw at any time. The results will be disseminated at research conferences and peer review journals. Presentations and free access to the developed structured interview and user manual will be available to cancer agencies.
Lockheed Martin Idaho Technologies Company information management technology architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, M.J.; Lau, P.K.S.
1996-05-01
The Information Management Technology Architecture (TA) is being driven by the business objectives of reducing costs and improving effectiveness. The strategy is to reduce the cost of computing through standardization. The Lockheed Martin Idaho Technologies Company (LMITCO) TA is a set of standards and products for use at the Idaho National Engineering Laboratory (INEL). The TA will provide direction for information management resource acquisitions, development of information systems, formulation of plans, and resolution of issues involving LMITCO computing resources. Exceptions to the preferred products may be granted by the Information Management Executive Council (IMEC). Certain implementation and deployment strategies are inherent in the design and structure of the LMITCO TA. These include: migration from centralized toward distributed computing; deployment of the networks, servers, and other information technology infrastructure components necessary for a more integrated information technology support environment; increased emphasis on standards to make it easier to link systems and to share information; and improved use of the company's investment in desktop computing resources. The intent is for the LMITCO TA to be a living document constantly being reviewed to take advantage of industry directions to reduce costs while balancing technological diversity with business flexibility.
How Community Has Shaped the Protein Data Bank
Berman, Helen M.; Kleywegt, Gerard J.; Nakamura, Haruki; Markley, John L.
2015-01-01
Following several years of community discussion, the Protein Data Bank (PDB) was established in 1971 as a public repository for the coordinates of three-dimensional models of biological macromolecules. Since then, the number, size, and complexity of structural models have continued to grow, reflecting the productivity of structural biology. Managed by the Worldwide PDB organization, the PDB has been able to meet increasing demands for the quantity of structural information and of quality. In addition to providing unrestricted access to structural information, the PDB also works to promote data standards and to raise the profile of structural biology with broader audiences. In this perspective, we describe the history of PDB and the many ways in which the community continues to shape the archive. PMID:24010707
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. Conclusions End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
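Once data are normalized as described above, the quality-measure logic reduces to a simple denominator/numerator computation. A minimal sketch in Python, assuming patient records have already been normalized into structured elements (the field names are illustrative, not actual CEM attributes):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the LDL quality-measure logic described above,
# applied to already-normalized patient records. Not the actual platform.

@dataclass
class Patient:
    age: int
    has_diabetes: bool
    last_ldl_mg_dl: Optional[float]  # most recent LDL result; None if missing

def in_denominator(p: Patient) -> bool:
    # Patients between 18 and 75 years with diabetes.
    return 18 <= p.age <= 75 and p.has_diabetes

def in_numerator(p: Patient) -> bool:
    # Most recent LDL result below 100 mg/dL.
    return in_denominator(p) and p.last_ldl_mg_dl is not None \
        and p.last_ldl_mg_dl < 100

cohort = [Patient(64, True, 92.0), Patient(70, True, 130.0),
          Patient(40, False, 85.0)]
den = [p for p in cohort if in_denominator(p)]
num = [p for p in den if in_numerator(p)]
print(f"denominator={len(den)}, numerator={len(num)}")  # 2, 1
```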
SAMICS support study. Volume 1: Cost account catalog
NASA Technical Reports Server (NTRS)
1977-01-01
The Jet Propulsion Laboratory (JPL) is examining the feasibility of a new industry to produce photovoltaic solar energy collectors similar to those used on spacecraft. To do this, a standardized costing procedure was developed. The Solar Array Manufacturing Industry Costing Standards (SAMICS) support study supplies the following information: (1) SAMICS critique; (2) Standard data base--cost account structure, expense item costs, inflation rates, indirect requirements relationships, and standard financial parameter values; (3) Facilities capital cost estimating relationships; (4) Conceptual plant designs; (5) Construction lead times; (6) Production start-up times; (7) Manufacturing price estimates.
Ambiguity of non-systematic chemical identifiers within and between small-molecule databases.
Akhondi, Saber A; Muresan, Sorel; Williams, Antony J; Kors, Jan A
2015-01-01
A wide range of chemical compound databases are currently available for pharmaceutical research. To retrieve compound information, including structures, researchers can query these chemical databases using non-systematic identifiers. These are source-dependent identifiers (e.g., brand names, generic names), which are usually assigned to the compound at the point of registration. The correctness of non-systematic identifiers (i.e., whether an identifier matches the associated structure) can only be assessed manually, which is cumbersome, but it is possible to automatically check their ambiguity (i.e., whether an identifier matches more than one structure). In this study we have quantified the ambiguity of non-systematic identifiers within and between eight widely used chemical databases. We also studied the effect of chemical structure standardization on reducing the ambiguity of non-systematic identifiers. The ambiguity of non-systematic identifiers within databases varied from 0.1 to 15.2 % (median 2.5 %). Standardization reduced the ambiguity only to a small extent for most databases. A wide range of ambiguity existed for non-systematic identifiers that are shared between databases (17.7-60.2 %, median of 40.3 %). Removing stereochemistry information provided the largest reduction in ambiguity across databases (median reduction 13.7 percentage points). Ambiguity of non-systematic identifiers within chemical databases is generally low, but ambiguity of non-systematic identifiers that are shared between databases, is high. Chemical structure standardization reduces the ambiguity to a limited extent. Our findings can help to improve database integration, curation, and maintenance.
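The ambiguity measure used in the study can be sketched directly: an identifier is ambiguous when it maps to more than one distinct structure. A toy Python version, with placeholder structure keys standing in for standardized structure representations:

```python
from collections import defaultdict

# Sketch of the ambiguity measure described above: an identifier is
# ambiguous if it maps to more than one distinct (standardized) structure.
# The InChI-like strings below are placeholders, not real structures.
records = [
    ("aspirin", "InChI=1S/C9H8O4/..."),
    ("aspirin", "InChI=1S/C9H8O4/..."),  # same name, same structure
    ("drugX",   "InChI=1S/A/..."),
    ("drugX",   "InChI=1S/B/..."),       # same name, second structure
]

name_to_structures = defaultdict(set)
for name, structure in records:
    name_to_structures[name].add(structure)

ambiguous = [n for n, s in name_to_structures.items() if len(s) > 1]
pct = 100.0 * len(ambiguous) / len(name_to_structures)
print(f"ambiguity: {pct:.1f}%")  # 50.0% in this toy example
```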
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
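To make the kernel choice concrete, the toy sketch below evaluates a Gaussian kernel against a common conditionally positive definite kernel (the negative multiquadric) over a range of distances; the paper's exact kernel forms may differ, so these are standard textbook stand-ins.

```python
import numpy as np

# Toy comparison of the two kernel families mentioned above, evaluated on
# pairwise distances r between shape-prior parameter vectors.
def gaussian_kernel(r, sigma=1.0):
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def cpd_kernel(r):
    # Negative multiquadric: a standard conditionally positive definite
    # kernel (of order 1), given here as an assumed illustrative choice.
    return -np.sqrt(1.0 + r ** 2)

r = np.linspace(0.0, 5.0, 6)
print(gaussian_kernel(r))  # decays rapidly to zero
print(cpd_kernel(r))       # grows (in magnitude) with distance
```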
Informal Assessment of Competences in the Context of Science Standards in Austria
ERIC Educational Resources Information Center
Schiffl, Iris
2016-01-01
Science standards have been a topic in educational research in Austria for about ten years now. Starting in 2005, competency structure models have been developed for junior and senior classes of different school types. After evaluating these models, prototypic tasks were created to point out the meaning of the models to teachers. At the moment,…
49 CFR 172.704 - Training requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PROVISIONS, HAZARDOUS MATERIALS COMMUNICATIONS, EMERGENCY RESPONSE INFORMATION, TRAINING REQUIREMENTS, AND... communication standards of this subchapter. (2) Function-specific training. (i) Each hazmat employee must be... must include company security objectives, organizational security structure, specific security...
49 CFR 238.201 - Scope/alternative compliance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... structural standards of this subpart (§ 238.203—static end strength; § 238.205—anti-climbing mechanism; § 238... by § 238.21(c); (ii) Information, including detailed drawings and materials specifications...
49 CFR 238.201 - Scope/alternative compliance.
Code of Federal Regulations, 2013 CFR
2013-10-01
... structural standards of this subpart (§ 238.203—static end strength; § 238.205—anti-climbing mechanism; § 238... by § 238.21(c); (ii) Information, including detailed drawings and materials specifications...
49 CFR 238.201 - Scope/alternative compliance.
Code of Federal Regulations, 2014 CFR
2014-10-01
... structural standards of this subpart (§ 238.203—static end strength; § 238.205—anti-climbing mechanism; § 238... by § 238.21(c); (ii) Information, including detailed drawings and materials specifications...
49 CFR 238.201 - Scope/alternative compliance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... structural standards of this subpart (§ 238.203—static end strength; § 238.205—anti-climbing mechanism; § 238... by § 238.21(c); (ii) Information, including detailed drawings and materials specifications...
Health level 7 development framework for medication administration.
Kim, Hwa Sun; Cho, Hune
2009-01-01
We propose the creation of a standard data model for medication administration activities through the development of a clinical document architecture using the Health Level 7 Development Framework process based on an object-oriented analysis and the development method of Health Level 7 Version 3. Medication administration is the most common activity performed by clinical professionals in healthcare settings. A standardized information model and structured hospital information system are necessary to achieve evidence-based clinical activities. A virtual scenario is used to demonstrate the proposed method of administering medication. We used the Health Level 7 Development Framework and other tools to create the clinical document architecture, which allowed us to illustrate each step of the Health Level 7 Development Framework in the administration of medication. We generated an information model of the medication administration process as one clinical activity. It should become a fundamental conceptual model for understanding international-standard methodology by healthcare professionals and nursing practitioners with the objective of modeling healthcare information systems.
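A hedged sketch of the kind of clinical document fragment such a model might produce is shown below; the element names are simplified stand-ins for the much richer HL7 Version 3 / CDA structures the paper works with.

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative CDA-like fragment for one medication administration
# event; real HL7 v3 / CDA documents carry far richer header and coding
# information, and most element names below are simplified assumptions.
doc = ET.Element("ClinicalDocument")
admin = ET.SubElement(doc, "substanceAdministration")
ET.SubElement(admin, "effectiveTime", value="200901010900")
consumable = ET.SubElement(admin, "consumable")
ET.SubElement(consumable, "name").text = "Cefazolin 1 g IV"
ET.SubElement(admin, "performer").text = "RN J. Kim"
print(ET.tostring(doc, encoding="unicode"))
```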
Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael
2006-01-01
The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.
Nurse practitioners: leadership behaviors and organizational climate.
Jones, L C; Guberski, T D; Soeken, K L
1990-01-01
The purpose of this article is to examine the relationships of individual nurse practitioners' perceptions of the leadership climate in their organizations and self-reported formal and informal leadership behaviors. The nine climate dimensions (Structure, Responsibility, Reward, Perceived Support of Risk Taking, Warmth, Support, Standard Setting, Conflict, and Identity) identified by Litwin and Stringer in 1968 were used to predict five leadership dimensions (Meeting Organizational Needs, Managing Resources, Leadership Competence, Task Accomplishment, and Communications). Demographic variables of age, educational level, and percent of time spent performing administrative functions were forced as a first step in each multiple regression analysis and used to explain a significant amount of variance in all but one analysis. All leadership dimensions were predicted by at least one organizational climate dimension: (1) Meeting Organizational Needs by Risk and Reward; (2) Managing Resources by Risk and Structure; (3) Leadership Competence by Risk and Standards; (4) Task Accomplishment by Structure, Risk, and Standards; and (5) Communication by Rewards.
PACS-Based Computer-Aided Detection and Diagnosis
NASA Astrophysics Data System (ADS)
Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge
The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.
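As an illustration of DICOM-SR's tree-of-content-items design, the sketch below builds a single text content item with pydicom; a complete SR object would also need patient/study modules and a valid SOP class, and the finding text is invented for illustration.

```python
from pydicom.dataset import Dataset

# Hedged sketch of one DICOM-SR content item such as a CAD finding.
# DCM 121071 is the "Finding" concept code; the finding text is invented.
item = Dataset()
item.ValueType = "TEXT"
concept = Dataset()
concept.CodeValue = "121071"
concept.CodingSchemeDesignator = "DCM"
concept.CodeMeaning = "Finding"
item.ConceptNameCodeSequence = [concept]
item.TextValue = "Suspicious mass, right upper lobe, CAD score 0.87"

# SR documents hang content items off a root CONTAINER.
root = Dataset()
root.ValueType = "CONTAINER"
root.ContinuityOfContent = "SEPARATE"
root.ContentSequence = [item]
print(root)
```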
Data dictionaries in information systems - Standards, usage, and application
NASA Technical Reports Server (NTRS)
Johnson, Margaret
1990-01-01
An overview of data dictionary systems and the role of standardization in the interchange of data dictionaries is presented. The development of the data dictionary for the Planetary Data System is cited as an example. The data element dictionary (DED), which is the repository of the definitions of the vocabulary utilized in an information system, is an important part of this service. A DED provides the definitions of the fields of the data set as well as the data elements of the catalog system. Finally, international efforts such as the Consultative Committee on Space Data Systems and other committees set up to provide standard recommendations on the usage and structure of data dictionaries in the international space science community are discussed.
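A DED entry can be pictured as a small structured record keyed by element name. A minimal Python sketch, with illustrative field names rather than the actual Planetary Data System schema:

```python
from dataclasses import dataclass

# Sketch of a data element dictionary (DED) as described above: one
# definition per vocabulary term used in the information system.
@dataclass(frozen=True)
class DataElement:
    name: str
    definition: str
    data_type: str
    units: str = ""

ded = {
    "TARGET_NAME": DataElement(
        "TARGET_NAME", "Name of the observed body", "string"),
    "EXPOSURE_DURATION": DataElement(
        "EXPOSURE_DURATION", "Length of the exposure", "float", "s"),
}
print(ded["EXPOSURE_DURATION"].definition)
```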
Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Ryu, Ehri; West, Stephen G.
2009-01-01
In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…
Description of sampling designs using a comprehensive data structure
John C. Byrne; Albert R. Stage
1988-01-01
Maintaining permanent plot data with different sampling designs over long periods within an organization, as well as sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is based on the definition of subpopulations of trees sampled, the rules...
Structural Consequences of Retention Policies: The Use of Computer Models To Inform Policy.
ERIC Educational Resources Information Center
Morris, Don R.
This paper reports on a longitudinal study of the structural effects of grade retention on dropout rate and percent of graduates qualified. The study drew on computer simulation to explore the effects of retention and how this practice affects dropout rate, the percent of graduates who meet required standards, and enrollment itself. The computer…
Review of Literature on Probability of Detection for Magnetic Particle Nondestructive Testing
2013-01-01
MPT is used in heavy engineering to inspect welds for surface-breaking defects. This review covers probability of detection for magnetic particle testing in offshore welded structures and aerospace applications, concluding with a summary of reliability information embedded in aerospace standards.
Computer Program Re-layers Engineering Drawings
NASA Technical Reports Server (NTRS)
Crosby, Dewey C., III
1990-01-01
RULCHK computer program aids in structuring layers of information pertaining to part or assembly designed with software described in article "Software for Drawing Design Details Concurrently" (MFS-28444). Checks and optionally updates structure of layers for part. Enables designer to construct model and annotate its documentation without burden of manually layering part to conform to standards at design time.
Automated extraction of chemical structure information from digital raster images
Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro
2009-01-01
Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed. But their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews for these systems and also report our recent development of ChemReader – a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy on extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles. PMID:19196483
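The final step of such a pipeline, converting a recognized structure into a standard, searchable chemical file format, can be illustrated with RDKit (used here as a stand-in; ChemReader's own output stage may differ):

```python
from rdkit import Chem

# Illustrative normalization step: once a diagram has been recognized,
# the recovered structure is canonicalized for database search.
mol = Chem.MolFromSmiles("C1=CC=CC=C1O")   # phenol, as recognized output
print(Chem.MolToSmiles(mol))               # canonical SMILES: Oc1ccccc1
print(Chem.MolToMolBlock(mol))             # MDL molfile for database import
```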
NASA Astrophysics Data System (ADS)
Yamazaki, Towako
To stabilize and improve the quality of its information retrieval service, the information retrieval team of Daicel Corporation has worked on standard operating procedures, an interview sheet for information retrieval requests, a structured format for search reports, and standard search expressions for some of Daicel's technological fields. These activities and efforts will also lead to skill sharing and skill transfer between searchers. In addition, skill improvement is needed not only for each searcher individually, but also for the information retrieval team as a whole as searchers take on new roles.
Constellation's Command, Control, Communications and Information (C3I) Architecture
NASA Technical Reports Server (NTRS)
Breidenthal, Julian C.
2007-01-01
Operations concepts are highly effective for: 1) Developing consensus; 2) Discovering stakeholder needs, goals, objectives; 3) Defining behavior of system components (especially emergent behaviors). An interoperability standard can provide an excellent lever to define the capabilities needed for system evolution. Two categories of architectures are needed in a program of this size are: 1) Generic - Needed for planning, design and construction standards; 2) Specific - Needed for detailed requirement allocations, interface specs. A wide variety of architectural views are needed to address stakeholder concerns, including: 1) Physical; 2) Information (structure, flow, evolution); 3) Processes (design, manufacturing, operations); 4) Performance; 5) Risk.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... Facilities Cooling Water Intake Structures: Instrument, Pre-test, and Implementation (New). ICR numbers: EPA... test for and ameliorate survey non-response bias. EPA will follow standard practice in stated...
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework of formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology to use formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model to describe formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.
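The coding idea, matching features extracted from a disease name against the semantic links attached to each structured category, can be sketched as follows; the two link types and category entries shown are toy stand-ins for S-ICD10's 74 link types:

```python
# Toy sketch of structured-ICD10 lookup: each category carries semantic
# links that a coding module matches against features extracted from a
# disease name. Link names here are illustrative, not the actual model.
S_ICD10 = {
    "E11": {"pathology": "diabetes mellitus", "qualifier": "type 2"},
    "E10": {"pathology": "diabetes mellitus", "qualifier": "type 1"},
}

def code(features: dict) -> list:
    return [c for c, links in S_ICD10.items()
            if all(links.get(k) == v for k, v in features.items())]

print(code({"pathology": "diabetes mellitus", "qualifier": "type 2"}))
# -> ['E11']
```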
Development of an IHE MRRT-compliant open-source web-based reporting platform.
Pinto Dos Santos, Daniel; Klos, G; Kloeckner, R; Oberle, R; Dueber, C; Mildenberger, P
2017-01-01
To develop a platform that uses structured reporting templates according to the IHE Management of Radiology Report Templates (MRRT) profile, and to implement this platform into clinical routine. The reporting platform uses standard web technologies (HTML / JavaScript and PHP / MySQL) only. Several freely available external libraries were used to simplify the programming. The platform runs on a standard web server, connects with the radiology information system (RIS) and PACS, and is easily accessible via a standard web browser. A prototype platform that allows structured reporting to be easily incorporated into the clinical routine was developed and successfully tested. To date, 797 reports were generated using IHE MRRT-compliant templates (many of them downloaded from the RSNA's radreport.org website). Reports are stored in a MySQL database and are easily accessible for further analyses. Development of an IHE MRRT-compliant platform for structured reporting is feasible using only standard web technologies. All source code will be made available upon request under a free license, and the participation of other institutions in further development is welcome. • A platform for structured reporting using IHE MRRT-compliant templates is presented. • Incorporating structured reporting into clinical routine is feasible. • Full source code will be provided upon request under a free license.
Jensen, Lotte Groth; Bossen, Claus
2016-03-01
It remains a continual challenge to present information in user interfaces in large IT systems to support overview in the best possible way. We here examine how an electronic health record (EHR) supports the creation of overview among hospital physicians, with a particular focus on the use of an interface designed to provide clinicians with a patient information overview. The overview interface integrates information flexibly from diverse places in the EHR and presents this information in one screen display. Our study revealed widespread non-use of the overview interface. We explore the reasons for its use and non-use. We conducted exploratory ethnographic fieldwork among physicians in two hospitals and gathered statistical data on their use of the overview interface. From the quantitative data, we identified where the interface was used most and conducted 18 semi-structured, open-ended interviews framed by the theoretical framework and the findings of the initial ethnographic fieldwork. We interviewed both physicians and employees from the IT units in different hospitals. We then analysed notes from the ethnographic fieldwork and the interviews and ordered these into themes forming the basis for the presentation of findings. The overview interface was most used in departments or situations where the problem at hand and the need for information could be standardized, in particular in anesthesiological departments and outpatient clinics. However, departments with complex and long patient histories did not make much use of the overview interface. Design and layout were not mentioned as decisive factors affecting its use or non-use. Many physicians questioned the completeness of data in the overview interface, either because they were skeptical about the hospital's or the department's documentation practices, or because they could not recognize the structure of the interface. This uncertainty discouraged physicians from using the overview interface. Dedicating a specific function or interface to supporting overview works best where information needs can be standardized. The narrative and contextual nature of creating clinical overview is unlikely to be optimally supported by using the overview interface alone. The use of these kinds of interfaces requires trust in data completeness and other clinicians' and administrative staff's documentation practices, as well as an understanding of the underlying structure of the EHR and how information is filtered when data are aggregated for the interface. Copyright © 2015. Published by Elsevier Ireland Ltd.
WebCSD: the online portal to the Cambridge Structural Database
Thomas, Ian R.; Bruno, Ian J.; Cole, Jason C.; Macrae, Clare F.; Pidcock, Elna; Wood, Peter A.
2010-01-01
WebCSD, a new web-based application developed by the Cambridge Crystallographic Data Centre, offers fast searching of the Cambridge Structural Database using only a standard internet browser. Search facilities include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching. Text, chemical diagrams and three-dimensional structural information can all be studied in the results browser using the efficient entry summaries and embedded three-dimensional viewer. PMID:22477776
An Ontology Based Approach to Information Security
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Santos, Henrique
The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the disciplines of philosophy and metaphysics to be used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper intends to present a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard
[Current situation and development trend of Chinese medicine information research].
Dong, Yan; Cui, Meng
2013-04-01
Literature resource services have been the main service offered by Chinese medicine (CM) information providers, but in recent years users have started to request health information knowledge services. CM information research and application services mainly include: (1) the need to strengthen studies on theory, technology application, information retrieval, and information standard development; (2) information studies to support clinical decision making and new drug research; (3) quick response based on network monitoring and support for emergency countermeasures. CM information research has the following trends: (1) developing the theoretical system structure of CM information; (2) studying the methodology system of CM information; (3) knowledge discovery and knowledge innovation.
Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki
2005-08-01
With the evolving and diverse electronic medical record (EMR) systems, there appears to be an ever greater need to link EMR systems and patient accounting systems with a standardized data exchange format. To this end, the CLinical Accounting InforMation (CLAIM) data exchange standard was developed. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data among different medical institutions. CLAIM uses eXtensible Markup Language (XML) as a meta-language. The current version, 2.1, inherited the basic structure of MML 2.x and contains two modules including information related to registration, appointment, procedure and charging. CLAIM 2.1 was implemented successfully in Japan in 2001. Consequently, it was confirmed that CLAIM could be used as an effective data exchange format between EMR systems and patient accounting systems.
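A hedged sketch of a CLAIM-style charge message is shown below; the real CLAIM 2.1 modules and element names are defined by the MML standard and differ from these simplified stand-ins.

```python
import xml.etree.ElementTree as ET

# Illustrative CLAIM-style XML charge message; element and attribute
# names are invented stand-ins for the actual CLAIM 2.1 / MML schema.
msg = ET.Element("ClaimModule", version="2.1")
visit = ET.SubElement(msg, "Visit", patientId="0001", date="2001-04-01")
item = ET.SubElement(visit, "ChargeItem")
ET.SubElement(item, "Procedure").text = "Basic examination"
ET.SubElement(item, "Points").text = "85"
print(ET.tostring(msg, encoding="unicode"))
```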
DICOM static and dynamic representation through unified modeling language
NASA Astrophysics Data System (ADS)
Martinez-Martinez, Alfonso; Jimenez-Alaniz, Juan R.; Gonzalez-Marquez, A.; Chavez-Avelar, N.
2004-04-01
The DICOM standard, as all standards, specifies in a generic way the management of digital medical images and their related information in network and storage media environments. However, understanding the specifications for a particular implementation is not trivial. Thus, this work is about understanding and modelling parts of the DICOM standard using object-oriented methodologies as part of software development processes. This has offered different static and dynamic views in accordance with the standard specifications, and the resultant models have been represented through the Unified Modelling Language (UML). The modelled parts are related to the network conformance claim: Network Communication Support for Message Exchange, Message Exchange, Information Object Definitions, Service Class Specifications, Data Structures and Encoding, and Data Dictionary. The resultant models have given a better understanding of the DICOM parts and have opened the possibility of creating a software library to develop DICOM-conformant PACS applications.
Synthetic Biology Open Language (SBOL) Version 2.0.0.
Bartley, Bryan; Beal, Jacob; Clancy, Kevin; Misirli, Goksel; Roehner, Nicholas; Oberortner, Ernst; Pocock, Matthew; Bissell, Michael; Madsen, Curtis; Nguyen, Tramy; Zhang, Zhen; Gennari, John H; Myers, Chris; Wipat, Anil; Sauro, Herbert
2015-09-04
Synthetic biology builds upon the techniques and successes of genetics, molecular biology, and metabolic engineering by applying engineering principles to the design of biological systems. The field still faces substantial challenges, including long development times, high rates of failure, and poor reproducibility. One method to ameliorate these problems would be to improve the exchange of information about designed systems between laboratories. The Synthetic Biology Open Language (SBOL) has been developed as a standard to support the specification and exchange of biological design information in synthetic biology, filling a need not satisfied by other pre-existing standards. This document details version 2.0 of SBOL, introducing a standardized format for the electronic exchange of information on the structural and functional aspects of biological designs. The standard has been designed to support the explicit and unambiguous description of biological designs by means of a well defined data model. The standard also includes rules and best practices on how to use this data model and populate it with relevant design details. The publication of this specification is intended to make these capabilities more widely accessible to potential developers and users in the synthetic biology community and beyond.
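The structural side of an SBOL design can be pictured as components carrying annotated sequence regions. The sketch below uses simplified dataclasses as stand-ins for the actual SBOL data model (ComponentDefinition, Sequence, SequenceAnnotation, and so on):

```python
from dataclasses import dataclass, field
from typing import List

# Hedged sketch of the kind of structural design record SBOL 2.0 captures;
# class and field names are simplified stand-ins for the real data model.
@dataclass
class SequenceAnnotation:
    name: str
    start: int
    end: int
    role: str  # e.g., a Sequence Ontology term

@dataclass
class ComponentDefinition:
    display_id: str
    sequence: str
    annotations: List[SequenceAnnotation] = field(default_factory=list)

promoter = SequenceAnnotation("pLac", 1, 60, "SO:0000167")  # promoter role
device = ComponentDefinition("lac_inverter", "atgc" * 30, [promoter])
print(device.display_id, len(device.sequence), device.annotations[0].role)
```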
Definition of a 5MW/61.5m wind turbine blade reference model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resor, Brian Ray
2013-04-01
A basic structural concept of the blade design that is associated with the frequently utilized "NREL offshore 5-MW baseline wind turbine" is needed for studies involving blade structural design and blade structural design tools. The blade structural design documented in this report represents a concept that meets basic design criteria set forth by IEC standards for the onshore turbine. The design documented in this report is not a fully vetted blade design which is ready for manufacture. The intent of the structural concept described by this report is to provide a good starting point for more detailed and targeted investigations such as blade design optimization, blade design tool verification, blade materials and structures investigations, and blade design standards evaluation. This report documents the information used to create the current model as well as the analyses used to verify that the blade structural performance meets reasonable blade design criteria.
ERIC Educational Resources Information Center
Wamsley, Mary Ann, Ed.; Vermeire, Donna M., Ed.
This guide contains basic information to meet the specific standards for pesticide applicators. The thrust of this document is the recognition and control of common pests. Included are those which directly affect man such as bees, roaches, mites, and mosquitoes; and those which destroy food products and wooden structures. Both mechanical and…
A data structure for describing sampling designs to aid in compilation of stand attributes
John C. Byrne; Albert R. Stage
1988-01-01
Maintaining permanent plot data with different sampling designs over long periods within an organization, and sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is composed of just those variables and their relationships needed to compile...
Directory interchange format manual, version 4.0
NASA Technical Reports Server (NTRS)
1991-01-01
The Directory Interchange Format (DIF) is a data structure used to exchange directory-level information about data sets among information systems. In general the format consists of a number of fields that describe the attributes of a directory entry and text blocks that contain a descriptive summary of and references for the directory entry. All fields and the summary are preceded by labels identifying their contents. All values are ASCII character strings. The structure is intended to be flexible, allowing for future changes in the contents of directory entries. The manual is structured as follows: section 1 is a general description of what constitutes a directory entry; section 2 describes the content of the individual fields within the data structure, together with some examples. Also included in the six appendices are a description of the syntax used within the examples; samples of the directory interchange format applied to different data sets; the allowable discipline keywords; a current list of valid location keywords; a list of allowable parameter keywords; a list of acronyms and a glossary of terms used; and a description of the Standard Formatted Data Unit header, which may be added to the front of a DIF file to identify the file as a registered standard format.
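Since DIF fields are labeled ASCII strings, a minimal parser is straightforward; the sketch below handles only simple label/value lines, not the multi-line text blocks and group fields a real DIF file also contains, and the sample field values are invented.

```python
# Minimal parser for label: value records of the kind described above.
SAMPLE = """\
Entry_ID: EXAMPLE_DATASET_001
Entry_Title: Example Directory Entry
Discipline: EARTH SCIENCE
"""

def parse_dif(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            label, value = line.split(":", 1)
            fields[label.strip()] = value.strip()
    return fields

print(parse_dif(SAMPLE)["Entry_Title"])  # Example Directory Entry
```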
DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...
The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s
Keynote Address: ACR-NEMA standards and their implications for teleradiology
NASA Astrophysics Data System (ADS)
Horii, Steven C.
1990-06-01
The ACR-NEMA Standard was developed initially as an interface standard for the interconnection of two pieces of imaging equipment. Essentially, the Standard defines a point-to-point hardware connection with the necessary protocol and data structure so that two differing devices which meet the specification will be able to communicate with each other. The Standard does not define a particular PACS architecture, nor does it specify a database structure. In part, these are the reasons why implementers have had difficulty in using the Standard in a full PACS. Recent activity of the Working Groups formed by the Committee overseeing work on the ACR-NEMA Standard has changed some of the "flavor" of the Standard. It was realized that connection of PACS with hospital and radiology information systems (HIS and RIS) is necessary if a PACS is ever to be successful. The idea of interconnecting heterogeneous computer systems has pushed Standards development beyond the scope of the original work. Teleradiology, which inherently involves wide-area networking, may be a direct beneficiary of the new directions taken by the Standards Working Groups. This paper will give a brief history of the ACR-NEMA effort, describe the "parent" Standard and its "offspring," and describe the activity of the current Working Groups with particular emphasis on the potential impacts on teleradiology.
Relevance of eHealth standards for big data interoperability in radiology and beyond.
Marcheschi, Paolo
2017-06-01
The aim of this paper is to report on the implementation of radiology and related information technology standards to feed big data repositories, so as to create a solid substrate on which analysis software can operate. Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) are the major standards for radiology and medical information technology. They define formats and protocols to transmit medical images, signals, and patient data inside and outside hospital facilities. These standards can be implemented, but big data expectations are stimulating a new approach that simplifies data collection and interoperability, seeking to reduce the time to full implementation inside health organizations. Virtual Medical Record, DICOM Structured Reporting, and HL7 Fast Healthcare Interoperability Resources (FHIR) are changing the way medical data are shared among organizations, and they will be the keys to big data interoperability. Until we find simple and comprehensive methods to store and disseminate detailed information on the patient's health, we will not be able to get optimum results from the analysis of those data.
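To show what FHIR's resource-oriented format looks like as big data input, here is a minimal Observation resource assembled in Python; the field names follow the public FHIR specification, while the identifiers and values are invented for illustration.

```python
import json

# Sketch of an HL7 FHIR Observation resource of the kind that could feed
# a big-data repository. LOINC 2089-1 denotes an LDL cholesterol result.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "2089-1",
                         "display": "LDL Cholesterol"}]},
    "subject": {"reference": "Patient/example"},  # invented reference
    "valueQuantity": {"value": 92, "unit": "mg/dL"},
}
print(json.dumps(observation, indent=2))
```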
NASA Astrophysics Data System (ADS)
Hsieh, Ya-Hui; Tsai, Chin-Chung
2014-06-01
The purpose of this study is to examine the moderating role of cognitive load experience between students' scientific epistemic beliefs and information commitments, which refer to online evaluative standards and online searching strategies. A total of 344 science-related major students participated in this study. Three questionnaires were used to ascertain the students' scientific epistemic beliefs, information commitments, and cognitive load experience. Structural equation modeling was then used to analyze the moderating effect of cognitive load, with the results revealing its significant moderating effect. The relationships between sophisticated scientific epistemic beliefs and the advanced evaluative standards used by the students were significantly stronger for low than for high cognitive load students. Moreover, considering the searching strategies that the students used, the relationships between sophisticated scientific epistemic beliefs and advanced searching strategies were also stronger for low than for high cognitive load students. However, for the high cognitive load students, only one of the sophisticated scientific epistemic belief dimensions was found to positively associate with advanced evaluative standard dimensions.
Williams, Robert C; Elston, Robert C; Kumar, Pankaj; Knowler, William C; Abboud, Hanna E; Adler, Sharon; Bowden, Donald W; Divers, Jasmin; Freedman, Barry I; Igo, Robert P; Ipp, Eli; Iyengar, Sudha K; Kimmel, Paul L; Klag, Michael J; Kohn, Orly; Langefeld, Carl D; Leehey, David J; Nelson, Robert G; Nicholas, Susanne B; Pahl, Madeleine V; Parekh, Rulan S; Rotter, Jerome I; Schelling, Jeffrey R; Sedor, John R; Shah, Vallabh O; Smith, Michael W; Taylor, Kent D; Thameem, Farook; Thornley-Brown, Denyse; Winkler, Cheryl A; Guo, Xiuqing; Zager, Phillip; Hanson, Robert L
2016-05-04
The presence of population structure in a sample may confound the search for important genetic loci associated with disease. Our four samples in the Family Investigation of Nephropathy and Diabetes (FIND), European Americans, Mexican Americans, African Americans, and American Indians, are part of a genome-wide association study in which population structure might be particularly important. We therefore decided to study in detail one component of this, individual genetic ancestry (IGA). From SNPs present on the Affymetrix 6.0 Human SNP array, we identified 3 sets of ancestry informative markers (AIMs), each maximized for the information in one of the three contrasts among ancestral populations: Europeans (HAPMAP, CEU), Africans (HAPMAP, YRI and LWK), and Native Americans (full heritage Pima Indians). We estimate IGA and present an algorithm for their standard errors, compare IGA to principal components, emphasize the importance of balancing information in the ancestry informative markers (AIMs), and test the association of IGA with diabetic nephropathy in the combined sample. A fixed parental allele maximum likelihood algorithm was applied to the FIND to estimate IGA in four samples: 869 American Indians; 1385 African Americans; 1451 Mexican Americans; and 826 European Americans. When the information in the AIMs is unbalanced, the estimates are incorrect with large error. Individual genetic admixture is highly correlated with principal components for capturing population structure. It takes ~700 SNPs to reduce the average standard error of individual admixture below 0.01. When the samples are combined, the resulting population structure creates associations between IGA and diabetic nephropathy. The identified set of AIMs, which include American Indian parental allele frequencies, may be particularly useful for estimating genetic admixture in populations from the Americas. Failure to balance information in maximum likelihood, poly-ancestry models creates biased estimates of individual admixture with large error. This also occurs when estimating IGA using the Bayesian clustering method as implemented in the program STRUCTURE. Odds ratios for the associations of IGA with disease are consistent with what is known about the incidence and prevalence of diabetic nephropathy in these populations.
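The fixed-parental-allele maximum likelihood idea can be illustrated for a single individual and two ancestral populations; the sketch below, with invented allele frequencies, estimates the admixture proportion by minimizing a binomial negative log-likelihood (the constant combinatorial term is omitted since it does not affect the maximizer).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-ancestry illustration of fixed-parental-allele-frequency maximum
# likelihood estimation of one individual's admixture proportion m from
# genotypes at independent AIMs. All frequencies below are invented.
p1 = np.array([0.9, 0.1, 0.8, 0.2, 0.7])  # allele freqs, ancestry 1
p2 = np.array([0.2, 0.8, 0.1, 0.9, 0.3])  # allele freqs, ancestry 2
g = np.array([2, 0, 2, 1, 1])             # genotypes (0/1/2 allele copies)

def neg_log_lik(m):
    p = m * p1 + (1 - m) * p2             # individual's expected allele freq
    # Binomial likelihood for 2 draws per locus (constant term omitted).
    return -np.sum(g * np.log(p) + (2 - g) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"estimated admixture from ancestry 1: {res.x:.3f}")
```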
Profiling structured product labeling with NDF-RT and RxNorm.
Zhu, Qian; Jiang, Guoqian; Chute, Christopher G
2012-12-20
Structured Product Labeling (SPL) is a document markup standard approved by Health Level Seven (HL7) and adopted by the United States Food and Drug Administration (FDA) as a mechanism for exchanging drug product information. The SPL drug labels contain rich information about FDA approved clinical drugs. However, the lack of linkage to standard drug ontologies hinders their meaningful use. NDF-RT (National Drug File Reference Terminology) and NLM RxNorm were used as standard drug ontologies to standardize and profile the product labels. In this paper, we present a framework that maps SPL drug labels to existing drug ontologies: NDF-RT and RxNorm. We also applied existing categorical annotations from the drug ontologies to classify SPL drug labels into corresponding classes. We established the classification and relevant linkage for SPL drug labels using the following three approaches. First, we retrieved NDF-RT categorical information from the External Pharmacologic Class (EPC) indexing SPLs. Second, we used the RxNorm and NDF-RT mappings to classify and link SPLs with NDF-RT categories. Third, we profiled SPLs using RxNorm term type information. In the implementation process, we employed a Semantic Web technology framework, in which we stored the data sets from NDF-RT and SPLs in an RDF triple store and executed SPARQL queries to retrieve data from customized SPARQL endpoints. Meanwhile, we imported the RxNorm data into a MySQL relational database. In total, 96.0% of SPL drug labels were mapped to NDF-RT categories, whereas 97.0% of SPL drug labels were linked to RxNorm codes. We found that the majority of SPL drug labels are mapped to chemical ingredient concepts in both drug ontologies, whereas a relatively small portion are mapped to clinical drug concepts. The profiling outcomes produced by this study provide useful insights into the meaningful use of FDA SPL drug labels in clinical applications through standard drug ontologies such as NDF-RT and RxNorm.
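The triple-store step can be sketched with rdflib; the graph file name, namespace, and predicate below are hypothetical stand-ins for the actual SPL/NDF-RT vocabulary used in the paper.

```python
from rdflib import Graph

# load SPL and NDF-RT triples (file name is a placeholder)
g = Graph()
g.parse("spl_ndfrt_subset.ttl", format="turtle")

# hypothetical predicate linking an SPL label to an EPC class
query = """
PREFIX ex: <http://example.org/spl#>
SELECT ?label ?epcClass
WHERE {
    ?label ex:hasEstablishedPharmacologicClass ?epcClass .
}
"""
for label, epc in g.query(query):
    print(label, "->", epc)
```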
Populating the Semantic Web by Macro-reading Internet Text
NASA Astrophysics Data System (ADS)
Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard
A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.
Information retrieval for a document writing assistance program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corral, M.L.; Simon, A.; Julien, C.
This paper presents an Information Retrieval mechanism to facilitate the writing of technical documents in the space domain. To address the need for document exchange between partners in a given project, documents are standardized. The writing of a new document requires the re-use of existing documents or parts thereof. These parts can be identified by "tagging" the logical structure of documents and restored by means of a purpose-built Information Retrieval System (I.R.S.). The I.R.S. implemented in our writing assistance tool uses natural language queries and is based on a statistical linguistic approach which is enhanced by the use of a document structure module.
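The statistical side of such an I.R.S. can be approximated with a vector-space model. A minimal sketch follows, assuming each tagged logical section of the standardized documents is indexed as its own retrieval unit (the section texts are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# each tagged logical section of a standardized document is one unit
sections = [
    "scope of the onboard software requirements",
    "verification and validation plan for the ground segment",
    "interface control between payload and platform",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(sections)

# natural language query from the document author
query = vectorizer.transform(["requirements for onboard software"])
scores = cosine_similarity(query, index).ravel()
best = scores.argmax()
print(f"best section: {sections[best]!r} (score {scores[best]:.2f})")
```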
Krohne, Kariann; Torres, Sandra; Slettebø, Åshild; Bergland, Astrid
2014-02-17
Health professionals are required to collect data from standardized tests when assessing older patients' functional ability. Such data provide quantifiable documentation on health outcomes. Little is known, however, about how physiotherapists and occupational therapists who administer standardized tests use test information in their daily clinical work. This article aims to investigate how test administrators in a geriatric setting justify the everyday use of standardized test information. Qualitative study of physiotherapists and occupational therapists on two geriatric hospital wards in Norway that routinely tested their patients with standardized tests. Data draw on seven months of fieldwork, semi-structured interviews with eight physiotherapists and six occupational therapists (12 female, two male), as well as observations of 26 test situations. Data were analyzed using Systematic Text Condensation. We identified two test information components in everyday use among physiotherapist and occupational therapist test administrators. While the primary component drew on the test administrators' subjective observations during testing, the secondary component encompassed the communication of objective test results and test performance. The results of this study illustrate the overlap between objective and subjective data in everyday practice. In clinical practice, by way of the clinicians' gaze on how the patient functions, the subjective and objective components of test information are merged, allowing individual characteristics to be noticed and made relevant as test performance justifications and as rationales in the overall communication of patient needs.
[A web-based integrated clinical database for laryngeal cancer].
E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu
2014-08-01
To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer. The database was also designed to meet the needs of clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, the specialist characteristics of laryngeal cancer, and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system utilizes clinical data standards and exchanges information with the existing electronic medical record system to avoid information silos. Furthermore, the database forms integrate the specialist characteristics of laryngeal cancer and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability, and high technical feasibility, and it conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and quickly over the Internet.
Fine-grained information extraction from German transthoracic echocardiography reports.
Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank
2015-11-12
Information extraction techniques that derive structured representations from unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that can recognize almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and on documents with uncommon layouts. The developed terminology and the proposed information extraction system allow fine-grained information to be extracted from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research.
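Terminology-driven attribute-value extraction of this kind can be caricatured with a small matcher. The terminology entries, regular expression, and English report sentence below are illustrative assumptions; the actual system processed German reports against a far richer, ontology-driven terminology.

```python
import re

# tiny stand-in terminology: attribute name -> trigger phrases
terminology = {
    "lvef": ["ejection fraction", "lvef"],
    "la_diameter": ["left atrial diameter", "la diameter"],
}

VALUE = re.compile(r"(\d+(?:\.\d+)?)\s*(%|mm|cm)?")

def extract(report: str):
    """Return attribute-value pairs found near terminology triggers."""
    pairs = []
    lowered = report.lower()
    for attribute, triggers in terminology.items():
        for trigger in triggers:
            pos = lowered.find(trigger)
            if pos == -1:
                continue
            m = VALUE.search(lowered, pos + len(trigger))
            if m:
                pairs.append((attribute, m.group(1), m.group(2) or ""))
                break   # first matching trigger wins
    return pairs

print(extract("LVEF 55 %. Left atrial diameter 41 mm."))
```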
Standards for Clinical Grade Genomic Databases.
Yohe, Sophia L; Carter, Alexis B; Pfeifer, John D; Crawford, James M; Cushman-Vokoun, Allison; Caughron, Samuel; Leonard, Debra G B
2015-11-01
Next-generation sequencing performed in a clinical environment must meet clinical standards, which requires reproducibility of all aspects of the testing. Clinical-grade genomic databases (CGGDs) are required to classify a variant and to assist in the professional interpretation of clinical next-generation sequencing. Applying quality laboratory standards to the reference databases used for sequence-variant interpretation presents a new challenge for validation and curation. To define CGGD and the categories of information contained in CGGDs and to frame recommendations for the structure and use of these databases in clinical patient care. Members of the College of American Pathologists Personalized Health Care Committee reviewed the literature and existing state of genomic databases and developed a framework for guiding CGGD development in the future. Clinical-grade genomic databases may provide different types of information. This work group defined 3 layers of information in CGGDs: clinical genomic variant repositories, genomic medical data repositories, and genomic medicine evidence databases. The layers are differentiated by the types of genomic and medical information contained and the utility in assisting with clinical interpretation of genomic variants. Clinical-grade genomic databases must meet specific standards regarding submission, curation, and retrieval of data, as well as the maintenance of privacy and security. These organizing principles for CGGDs should serve as a foundation for future development of specific standards that support the use of such databases for patient care.
Morrison, Zoe; Fernando, Bernard; Kalra, Dipak; Cresswell, Kathrin; Sheikh, Aziz
2014-01-01
Objective We aimed to explore stakeholder views, attitudes, needs, and expectations regarding likely benefits and risks resulting from increased structuring and coding of clinical information within electronic health records (EHRs). Materials and methods Qualitative investigation in primary and secondary care and research settings throughout the UK. Data were derived from interviews, expert discussion groups, observations, and relevant documents. Participants (n=70) included patients, healthcare professionals, health service commissioners, policy makers, managers, administrators, systems developers, researchers, and academics. Results Four main themes arose from our data: variations in documentation practice; patient care benefits; secondary uses of information; and informing and involving patients. We observed a lack of guidelines, co-ordination, and dissemination of best practice relating to the design and use of information structures. While we identified immediate benefits for direct care and secondary analysis, many healthcare professionals did not see the relevance of structured and/or coded data to clinical practice. The potential for structured information to increase patient understanding of their diagnosis and treatment contrasted with concerns regarding the appropriateness of coded information for patients. Conclusions The design and development of EHRs requires the capture of narrative information to reflect patient/clinician communication and computable data for administration and research purposes. Increased structuring and/or coding of EHRs therefore offers both benefits and risks. Documentation standards within clinical guidelines are likely to encourage comprehensive, accurate processing of data. As data structures may impact upon clinician/patient interactions, new models of documentation may be necessary if EHRs are to be read and authored by patients. PMID:24186957
[HL7 standard--features, principles, and methodology].
Koncar, Miroslav
2005-01-01
The mission of HL7 Inc., a non-profit organization, is to provide standards for the exchange, management and integration of data that support clinical patient care, and the management, delivery and evaluation of healthcare services. As the standards developed by HL7 Inc. represent the world's most influential standardization efforts in the field of medical informatics, the HL7 family of standards has been recognized by the technical and scientific community as the foundation for the next generation of healthcare information systems. Versions 1 and 2 of the HL7 standard solved many issues, but also demonstrated the size and complexity of the health information sharing problem. As the solution, a completely new methodology was adopted, encompassed in the HL7 Version 3 recommendations. This approach standardizes the Reference Information Model (RIM), which is the source of all derived domain models and message structures. Message design is now defined in detail, enabling interoperability between loosely coupled systems that are designed by different vendors and deployed in various environments. At the start of the Primary Healthcare Information System project in the Republic of Croatia in 2002, the decision was to go directly to Version 3. The target scope of work includes clinical, financial and administrative data management in the domain of healthcare processes. By using the standardized HL7v3 methodology we were able to completely map the Croatian primary healthcare domain to HL7v3 artefacts. Further refinement processes planned for the future will provide semantic interoperability and a detailed description of all elements in HL7 messages. Our HL7 Business Component is continuously studying different legacy applications, building a solid foundation for their integration into an HL7-enabled communication environment.
The use of geospatial web services for exchanging utilities data
NASA Astrophysics Data System (ADS)
Kuczyńska, Joanna
2013-04-01
Geographic information technologies and related geo-information systems currently play an important role in the management of public administration in Poland. One of these tasks is to maintain and update the Geodetic Evidence of Public Utilities (GESUT), part of the National Geodetic and Cartographic Resource, which contains technical infrastructure information that is important for many institutions. It requires an active exchange of data between the Geodesy and Cartography Documentation Centers and the institutions that administer transmission lines. The administrator of public utilities is legally obliged to provide information about utilities to GESUT. The aim of the research work was to develop a universal data exchange methodology that can be implemented on a variety of hardware and software platforms. This methodology uses the Unified Modeling Language (UML), eXtensible Markup Language (XML), and Geography Markup Language (GML). The proposed methodology is based on two different strategies: Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). The solutions used are consistent with the INSPIRE Directive and the ISO 19100 series of standards for geographic information. On the basis of an analysis of the input data structures, conceptual models were built for both databases. The models were written in the universal modeling language UML. A combined model that defines a common data structure was also built. This model was transformed into the GML standard, which was developed for the exchange of geographic information. The structure of the document describing the data that may be exchanged is defined in an .xsd file. Network services were selected and implemented in the system designed for data exchange, based on open source tools. The methodology was implemented and tested. Data in the agreed data structure, together with metadata, were set up on the server. Data access was provided by geospatial network services: data searching by Catalogue Service for the Web (CSW) and data collection by Web Feature Service (WFS). WFS also provides operations for modifying data, for example so that a utility administrator can update them. The proposed solution significantly increases the efficiency of data exchange and facilitates maintenance of the National Geodetic and Cartographic Resource.
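The WFS collection step reduces to a standard OGC key-value request. A sketch follows, assuming a hypothetical endpoint URL and feature type name for the utility layer:

```python
import requests

# hypothetical WFS endpoint and feature type for the utility network
WFS_URL = "https://example.org/geoserver/wfs"
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "gesut:utility_lines",   # placeholder layer name
    "outputFormat": "application/gml+xml; version=3.2",
    "count": 100,
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])   # GML feature collection (truncated)
```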
Meizoso García, María; Iglesias Allones, José Luis; Martínez Hernández, Diego; Taboada Iglesias, María Jesús
2012-08-01
One of the main challenges of eHealth is the semantic interoperability of health systems. This will only be possible if the capture, representation and access of patient data are standardized. Clinical data models, such as OpenEHR archetypes, define data structures that are agreed by experts to ensure the accuracy of health information. In addition, they provide an option to normalize clinical data by binding the terms used in the model definition to standard medical vocabularies. Nevertheless, the effort needed to establish the association between archetype terms and standard terminology concepts is considerable. Therefore, the purpose of this study is to provide an automated approach to bind OpenEHR archetype terms to the external terminology SNOMED CT, with the capability to do so at a semantic level. This research uses lexical techniques and external terminological tools in combination with context-based techniques, which use information about structural and semantic proximity to identify similarities between terms and thus find alignments between them. The proposed approach exploits both the structural context of archetypes and the terminology context, in which concepts are logically defined through their relationships (hierarchical and definitional) to other concepts. A set of 25 OBSERVATION archetypes with 477 bound terms was used to test the method. Of these, 342 terms (74.6%) were linked with 96.1% precision, 71.7% recall and 1.23 SNOMED CT concepts on average for each mapping. We found that about one third of the archetype clinical information is grouped logically. Context-based techniques take advantage of this to increase the recall and to validate 30.4% of the bindings produced by lexical techniques. This research shows that it is possible to automatically map archetype terms to a standard terminology with high precision and recall, with the help of appropriate contextual and semantic information from both models. Moreover, the semantic-based methods provide a means of validating and disambiguating the resulting bindings. Therefore, this work is a step toward reducing human participation in the mapping process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
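The lexical layer of such a mapping can be sketched as normalized token overlap between an archetype term and candidate terminology descriptions. The candidates below are invented (not real SNOMED CT codes), and the real system layers context-based validation on top of this:

```python
def tokens(term: str) -> set:
    return set(term.lower().replace("-", " ").split())

def best_match(archetype_term: str, candidates: dict) -> tuple:
    """Rank candidate concept descriptions by Jaccard token overlap."""
    a = tokens(archetype_term)
    scored = [
        (len(a & tokens(desc)) / len(a | tokens(desc)), code, desc)
        for code, desc in candidates.items()
    ]
    return max(scored)

# invented candidate concepts (not real SNOMED CT codes)
candidates = {
    "C001": "systolic blood pressure",
    "C002": "diastolic blood pressure",
    "C003": "body temperature",
}
print(best_match("Systolic pressure", candidates))
```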
Zhou, Li; Hongsermeier, Tonya; Boxwala, Aziz; Lewis, Janet; Kawamoto, Kensaku; Maviglia, Saverio; Gentile, Douglas; Teich, Jonathan M; Rocha, Roberto; Bell, Douglas; Middleton, Blackford
2013-01-01
At present, there are no widely accepted, standard approaches for representing computer-based clinical decision support (CDS) intervention types and their structural components. This study aimed to identify key requirements for the representation of five widely utilized CDS intervention types: alerts and reminders, order sets, infobuttons, documentation templates/forms, and relevant data presentation. An XML schema was proposed for representing these interventions and their core structural elements (e.g., general metadata, applicable clinical scenarios, CDS inputs, CDS outputs, and CDS logic) in a shareable manner. The schema was validated by building CDS artifacts for 22 different interventions, targeted toward guidelines and clinical conditions called for in the 2011 Meaningful Use criteria. Custom style sheets were developed to render the XML files in human-readable form. The CDS knowledge artifacts were shared via a public web portal. Our experience also identifies gaps in existing standards and informs future development of standards for CDS knowledge representation and sharing.
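A schematic, hypothetical instance of such an artifact can be built with Python's ElementTree; the element names below (cdsArtifact, clinicalScenario, cdsInputs, cdsLogic, cdsOutputs) mirror the core structural elements named above but are our invention, not the schema published by the authors.

```python
import xml.etree.ElementTree as ET

# hypothetical element names illustrating the core structural parts
artifact = ET.Element("cdsArtifact", type="alert")
meta = ET.SubElement(artifact, "metadata")
ET.SubElement(meta, "title").text = "Diabetes HbA1c overdue reminder"
scenario = ET.SubElement(artifact, "clinicalScenario")
scenario.text = "Adult patient with diabetes, no HbA1c in 6 months"
inputs = ET.SubElement(artifact, "cdsInputs")
ET.SubElement(inputs, "dataElement").text = "last HbA1c result date"
logic = ET.SubElement(artifact, "cdsLogic")
logic.text = "if today - lastHbA1cDate > 180 days then fire"
outputs = ET.SubElement(artifact, "cdsOutputs")
ET.SubElement(outputs, "message").text = "Order HbA1c test"

print(ET.tostring(artifact, encoding="unicode"))
```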
The future of bibliographic standards in a networked information environment
NASA Technical Reports Server (NTRS)
1997-01-01
The main mission of the CENDI Cataloging Working Group is to provide guidelines for cataloging practices that support the sharing of database records among the CENDI agencies, and that incorporate principles based on cost effectiveness and efficiency. Recent efforts include the extension of COSATI Guidelines for the Cataloging of Technical Reports to include non-print materials, and the mapping of each agency's export file structure to USMARC. Of primary importance is the impact of electronic documents and the distributed nature of the networked information environment. Topics discussed during the workshop include the following: Trade-offs in Cataloging and Indexing Internet Information; The Impact on Current and Future Standards; A Look at WWW Metadata Initiatives; Standards for Electronic Journals; The Present and Future Search Engines; The Roles for Text Analysis Software; Advanced Search Engine Meets Metathesaurus; Locator Schemes for Internet Resources; Identifying and Cataloging Web Document Types; In Search of a New Bibliographic Record. The videos in this set include viewgraphs of charts and related materials of the workshop.
Machine learning for autonomous crystal structure identification.
Reinhart, Wesley F; Long, Andrew W; Howard, Michael P; Ferguson, Andrew L; Panagiotopoulos, Athanassios Z
2017-07-21
We present a machine learning technique to discover and distinguish relevant ordered structures from molecular simulation snapshots or particle tracking data. Unlike other popular methods for structural identification, our technique requires no a priori description of the target structures. Instead, we use nonlinear manifold learning to infer structural relationships between particles according to the topology of their local environment. This graph-based approach yields unbiased structural information which allows us to quantify the crystalline character of particles near defects, grain boundaries, and interfaces. We demonstrate the method by classifying particles in a simulation of colloidal crystallization, and show that our method identifies structural features that are missed by standard techniques.
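A heavily simplified illustration of the pipeline's shape: featurize each particle's local environment, embed the features with a nonlinear manifold method, and cluster in the embedded space. The sorted-neighbor-distance descriptor below is a crude stand-in for the paper's graph-topology comparison, and the toy configuration is invented.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

def local_descriptor(points, i, k=6):
    """Sorted distances to the k nearest neighbors of particle i
    (a crude stand-in for the paper's graph-topology comparison)."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.sort(d)[1:k + 1]   # skip the zero self-distance

rng = np.random.default_rng(1)
# toy configuration: a noisy cubic lattice plus a disordered blob
lattice = np.stack(np.meshgrid(*[np.arange(4.0)] * 3), -1).reshape(-1, 3)
lattice += rng.normal(scale=0.05, size=lattice.shape)
fluid = rng.uniform(0, 4, size=(64, 3))
points = np.vstack([lattice, fluid])

X = np.array([local_descriptor(points, i) for i in range(len(points))])
embedding = SpectralEmbedding(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embedding)
print(np.bincount(labels))   # rough ordered/disordered split
```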
Huebner-Bloder, Gudrun; Duftschmid, Georg; Kohler, Michael; Rinner, Christoph; Saboor, Samrend; Ammenwerth, Elske
2012-01-01
Cross-institutional longitudinal Electronic Health Records (EHR), as introduced in Austria at the moment, increase the challenge of information overload of healthcare professionals. We developed an innovative cross-institutional EHR query prototype that offers extended query options, including searching for specific information items or sets of information items. The available query options were derived from a systematic analysis of information needs of diabetes specialists during patient encounters. The prototype operates in an IHE-XDS-based environment where ISO/EN 13606-structured documents are available. We conducted a controlled study with seven diabetes specialists to assess the feasibility and impact of this EHR query prototype on efficient retrieving of patient information to answer typical clinical questions. The controlled study showed that the specialists were quicker and more successful (measured in percentage of expected information items found) in finding patient information compared to the standard full-document search options. The participants also appreciated the extended query options. PMID:23304308
Standard Information Models for Representing Adverse Sensitivity Information in Clinical Documents.
Topaz, M; Seger, D L; Goss, F; Lai, K; Slight, S P; Lau, J J; Nandigam, H; Zhou, L
2016-01-01
Adverse sensitivity (e.g., allergy and intolerance) information is a critical component of any electronic health record system. While several standards exist for structured entry of adverse sensitivity information, many clinicians record this data as free text. This study aimed to 1) identify and compare the existing common adverse sensitivity information models, and 2) evaluate the coverage of the adverse sensitivity information models for representing allergy information on a subset of inpatient and outpatient adverse sensitivity clinical notes. We compared four common adverse sensitivity information models: the Health Level 7 Allergy and Intolerance Domain Analysis Model, HL7-DAM; the Fast Healthcare Interoperability Resources, FHIR; the Consolidated Continuity of Care Document, C-CDA; and OpenEHR, and evaluated their coverage on a corpus of inpatient and outpatient notes (n = 120). We found that allergy specialists' notes had the highest frequency of adverse sensitivity attributes per note, whereas emergency department notes had the fewest attributes. Overall, the models had many similarities in the central attributes, which covered between 75% and 95% of the adverse sensitivity information contained within the notes. However, representations of some attributes (especially the value sets) were not well aligned between the models, which is likely to present an obstacle for achieving data interoperability. Also, adverse sensitivity exceptions were not well represented among the information models. Although we found that common adverse sensitivity models cover a significant portion of the relevant information in clinical notes, our results highlight areas that need to be reconciled between the standards for data interoperability.
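For orientation, a structured entry in the FHIR style looks roughly like the sketch below. The field names follow our reading of the published FHIR AllergyIntolerance resource, but the instance is invented and has not been validated against the specification.

```python
import json

# invented, unvalidated example of a FHIR-style AllergyIntolerance
allergy = {
    "resourceType": "AllergyIntolerance",
    "clinicalStatus": {"coding": [{"code": "active"}]},
    "category": ["medication"],
    "criticality": "high",
    "code": {"text": "Penicillin"},
    "patient": {"reference": "Patient/example"},
    "reaction": [{
        "manifestation": [{"text": "Hives"}],
        "severity": "moderate",
    }],
}
print(json.dumps(allergy, indent=2))
```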
A standardized SOA for clinical data interchange in a cardiac telemonitoring environment.
Gazzarata, Roberta; Vergari, Fabio; Cinotti, Tullio Salmon; Giacomini, Mauro
2014-11-01
Care of chronic cardiac patients requires information interchange between patients' homes, clinical environments, and the electronic health record. Standards are emerging to support clinical information collection, exchange and management and to overcome information fragmentation and the delocalization of actors. Heterogeneity of information sources at patients' homes calls for open solutions to collect and accommodate multidomain information, including environmental data. Based on the experience gained in a European Research Program, this paper presents an integrated and open approach for clinical data interchange in cardiac telemonitoring applications. This interchange is supported by the use of standards following the indications provided by the national authorities of the countries involved. Taking into account the requirements provided by the medical staff involved in the project, the authors designed and implemented a prototypal middleware, based on a service-oriented architecture approach, to give congestive heart failure patients a structured and robust tool for their personalized telemonitoring. The middleware is represented by a health record management service whose interface is compliant with the Healthcare Services Specification Project Retrieve, Locate and Update Service standard (Level 0), which allows communication between the agents involved through the exchange of Clinical Document Architecture Release 2 documents. Three performance tests were carried out and showed that the prototype completely fulfilled all requirements indicated by the medical staff; however, certain aspects, such as authentication, security and scalability, should be analyzed in depth in a future engineering phase.
MIMS - MEDICAL INFORMATION MANAGEMENT SYSTEM
NASA Technical Reports Server (NTRS)
Frankowski, J. W.
1994-01-01
MIMS, the Medical Information Management System, is an interactive, general purpose information storage and retrieval system. It was first designed to be used in medical data management, and can be used to handle all aspects of data related to patient care. Other areas of application for MIMS include: managing occupational safety data in the public and private sectors; handling judicial information where speed and accuracy are high priorities; systemizing purchasing and procurement systems; and analyzing organizational cost structures. Because of its free-format design, MIMS can offer immediate assistance where manipulation of large data bases is required. File structures, data categories, field lengths and formats, including alphabetic and/or numeric, are all user-defined. The user can quickly and efficiently extract, display, and analyze the data. Three means of extracting data are provided: certain short items of information, such as social security numbers, can be used to uniquely identify each record for quick access; records can be selected which match conditions defined by the user; and specific categories of data can be selected. Data may be displayed and analyzed in several ways which include: generating tabular information assembled from comparison of all the records on the system; generating statistical information on numeric data such as means, standard deviations and standard errors; and displaying formatted listings of output data. The MIMS program is written in Microsoft FORTRAN-77. It was designed to operate on IBM Personal Computers and compatibles running under PC or MS DOS 2.00 or higher. MIMS was developed in 1987.
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in a SVG document and efficiently searched. Any feature extracted from the raw image including, color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high level descriptions or classifications. And our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a world wide web consortium (W3C) standard, SVG is able to be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.
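The abstraction layer reduces to writing image-derived features as structured SVG content. A minimal sketch, with invented feature values and attribute names (the encoding in the paper is richer and hierarchical):

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
# one segmented region encoded as a polygon carrying feature metadata
region = ET.SubElement(svg, f"{{{SVG_NS}}}polygon",
                       points="100,100 180,90 200,160 120,170")
region.set("id", "region-1")
# invented attribute names carrying extracted features
region.set("data-mean-color", "#a03c2f")
region.set("data-texture-entropy", "4.21")
region.set("data-classification", "lesion-candidate")

ET.ElementTree(svg).write("abstraction_layer.svg")
```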
Functional Assessment of Synoptic Pathology Reporting for Ovarian Cancer.
Słodkowska, Janina; Cierniak, Szczepan; Patera, Janusz; Kopik, Jarosław; Baranowski, Włodzimierz; Markiewicz, Tomasz; Murawski, Piotr; Buda, Irmina; Kozłowski, Wojciech
2016-01-01
Ovarian cancer has one of the highest death/incidence rates and is commonly diagnosed at an advanced stage. In the recent WHO classification, new histotypes were classified that respond differently to chemotherapy. The e-standardized synoptic cancer pathology reports offer the clinicians essential and reliable information. The aim of our project was to develop an e-template for the standardized synoptic pathology reporting of ovarian carcinoma [based on the checklist of the College of American Pathologists (CAP) and the recent WHO/FIGO classification] to introduce a uniform and improved quality of cancer pathology reports. A functional and qualitative evaluation of the synoptic reporting was performed. An indispensable module for e-synoptic reporting was developed and integrated into the Hospital Information System (HIS). The electronic pathology system used a standardized structure with drop-down lists of defined elements to ensure completeness and consistency of reporting practices with the required guidelines. All ovarian cancer pathology reports (partial and final) with the corresponding glass slides selected from a 1-year current workflow were revised for the standard structured reports, and 42 tumors [13 borderline tumors and 29 carcinomas (mainly serous)] were included in the study. Analysis of the reports for completeness against the CAP checklist standard showed a lack of pTNM staging in 80% of the partial or final unstructured reports; ICD-O coding was missing in 83%. Much less frequently missed or unstated data were: ovarian capsule infiltration, angioinvasion and implant evaluation. The e-records of ovarian tumors were supplemented with digital macro- and micro-images and whole-slide images. The e-module developed for synoptic ovarian cancer pathology reporting was easily incorporated into the HIS (CGM CliniNet) and facilitated comprehensive reporting; it also provided open access to the database for concerned recipients. The e-synoptic pathology reports appeared more accurate, clear and conclusive than traditional narrative reports. Standardizing structured reporting and electronic tools allows open access and downstream utilization of pathology data for clinicians and tumor registries. © 2016 S. Karger AG, Basel.
Medical Image Fusion Based on Feature Extraction and Sparse Representation
Wei, Gao; Zongxi, Song
2017-01-01
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure or time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a combined structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of Gaussian (LOG), and the EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
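The two decision maps have simple closed forms. A sketch follows, assuming grayscale source images and a pixelwise choose-max fusion rule in place of the paper's full sparse-representation pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def decision_maps(img, sigma=2.0, win=9):
    """Structure map via Laplacian of Gaussian, energy map via the
    local mean square deviation, in the spirit of SM and EM."""
    sm = np.abs(gaussian_laplace(img.astype(float), sigma))
    mean = uniform_filter(img.astype(float), win)
    em = uniform_filter(img.astype(float) ** 2, win) - mean ** 2
    return sm, em

def fuse(img_a, img_b, alpha=0.5):
    sm_a, em_a = decision_maps(img_a)
    sm_b, em_b = decision_maps(img_b)
    score_a = alpha * sm_a + (1 - alpha) * em_a   # combined SEM-style score
    score_b = alpha * sm_b + (1 - alpha) * em_b
    return np.where(score_a >= score_b, img_a, img_b)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(fuse(a, b).shape)
```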
Novel Approach to Analyzing MFE of Noncoding RNA Sequences
George, Tina P.; Thomas, Tessamma
2016-01-01
Genomic studies have become noncoding RNA (ncRNA) centric after the study of different genomes provided enormous information on ncRNA over the past decades. The function of ncRNA is decided by its secondary structure, and across organisms, the secondary structure is more conserved than the sequence itself. In this study, the optimal secondary structure or the minimum free energy (MFE) structure of ncRNA was found based on the thermodynamic nearest neighbor model. MFE of over 2600 ncRNA sequences was analyzed in view of its signal properties. Mathematical models linking MFE to the signal properties were found for each of the four classes of ncRNA analyzed. MFE values computed with the proposed models were in concordance with those obtained with the standard web servers. A total of 95% of the sequences analyzed had deviation of MFE values within ±15% relative to those obtained from standard web servers. PMID:27695341
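For reference, standard MFE values of this kind are what nearest-neighbor folding software computes. A sketch, assuming the ViennaRNA package and its Python bindings are installed (the sequence is a toy example):

```python
# requires the ViennaRNA package with Python bindings installed
import RNA

seq = "GGGAAAUCCCGAAAGGGAUUUCCC"   # toy ncRNA-like sequence
structure, mfe = RNA.fold(seq)     # thermodynamic nearest-neighbor model
print(structure)                   # dot-bracket secondary structure
print(f"MFE = {mfe:.2f} kcal/mol")
```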
[Design and implementation of field questionnaire survey system of taeniasis/cysticercosis].
Huan-Zhang, Li; Jing-Bo, Xue; Men-Bao, Qian; Xin-Zhong, Zang; Shang, Xia; Qiang, Wang; Ying-Dan, Chen; Shi-Zhu, Li
2018-04-17
A taeniasis/cysticercosis information management system was designed to achieve dynamic monitoring of the taeniasis/cysticercosis epidemic situation and to improve the level of intelligence in disease information management. The system comprises three layers (an application layer, a technical core layer, and a data storage layer) and implements data transmission and remote communication in a Browser/Server architecture. The system is expected to promote disease data collection. Additionally, the system may provide standardized data for the convenience of data analysis.
Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.
2009-01-01
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830
Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.
2017-06-26
Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.
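To illustrate the real-space-constraint idea at the heart of such phasing, the classic error-reduction loop below alternates between enforcing measured Fourier magnitudes and a known support. This is a one-particle, one-state, known-orientation simplification, not the paper's multitiered algorithm.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Classic Fienup error-reduction phasing with a support constraint."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, magnitudes.shape)
    density = np.fft.ifft2(magnitudes * np.exp(1j * phase)).real
    for _ in range(n_iter):
        # real-space constraints: support and non-negativity
        density = np.where(support, np.clip(density, 0, None), 0.0)
        # Fourier constraint: impose measured magnitudes, keep phases
        F = np.fft.fft2(density)
        F = magnitudes * np.exp(1j * np.angle(F))
        density = np.fft.ifft2(F).real
    return density

# toy object: a small rectangle in a 64x64 field
obj = np.zeros((64, 64)); obj[28:36, 26:38] = 1.0
mags = np.abs(np.fft.fft2(obj))
support = np.zeros_like(obj, dtype=bool); support[20:44, 20:44] = True
rec = error_reduction(mags, support)
print(f"reconstruction peak: {rec.max():.2f}")
```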
40 CFR 725.235 - Conditions of exemption for activities conducted inside a structure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... experiments subject to Institutional Biosafety Committee review, or notification simultaneous with initiation of the experiment, the information submitted for review or notification, along with standard... experiments exempt from Institutional Biosafety Committee review or notification simultaneous with initiation...
The Interview as a Technique for Assessing Oral Ability: Some Guidelines for Its Use.
ERIC Educational Resources Information Center
Nambiar, Mohana
1990-01-01
Some guidelines are offered that detail the complexities involved in interviewing for language testing purposes. They cover strategies for structuring interviews (informal conversational, interview guide, standardized open-ended), questions, interviewing skills, and physical setting. (five references) (LB)
Peer Evaluation Can Reliably Measure Local Knowledge
ERIC Educational Resources Information Center
Reyes-García, Victoria; Díaz-Reviriego, Isabel; Duda, Romain; Fernández-Llamazares, Álvaro; Gallois, Sandrine; Guèze, Maximilien; Napitupulu, Lucentezza; Pyhälä, Aili
2016-01-01
We assess the consistency of measures of individual local ecological knowledge obtained through peer evaluation against three standard measures: identification tasks, structured questionnaires, and self-reported skills questionnaires. We collected ethnographic information among the Baka (Congo), the Punan (Borneo), and the Tsimane' (Amazon) to…
40 CFR 725.235 - Conditions of exemption for activities conducted inside a structure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... experiments subject to Institutional Biosafety Committee review, or notification simultaneous with initiation of the experiment, the information submitted for review or notification, along with standard... experiments exempt from Institutional Biosafety Committee review or notification simultaneous with initiation...
40 CFR 725.235 - Conditions of exemption for activities conducted inside a structure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... experiments subject to Institutional Biosafety Committee review, or notification simultaneous with initiation of the experiment, the information submitted for review or notification, along with standard... experiments exempt from Institutional Biosafety Committee review or notification simultaneous with initiation...
40 CFR 725.235 - Conditions of exemption for activities conducted inside a structure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... experiments subject to Institutional Biosafety Committee review, or notification simultaneous with initiation of the experiment, the information submitted for review or notification, along with standard... experiments exempt from Institutional Biosafety Committee review or notification simultaneous with initiation...
Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J
2015-01-01
There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of the software users, a myriad of issues can exist with chemical structure representations online. In order to help facilitate the validation and standardization of chemical structure datasets from various sources, we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The chemical validation and standardization platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially require manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form of submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of data sets prepared for deposition to our ChemSpider database and in the preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 data set. The CVSP web site is located at http://cvsp.chemspider.com/. A platform for the validation and standardization of chemical structure representations in various formats has been developed and made available to the community to assist and encourage the processing of chemical structure files to produce more homogeneous compound representations for exchange and interchange between online databases. While the CVSP platform is designed so that the rules used for processing the data are flexible, we have produced a recommended rule set based on our own experiences with large data sets such as DrugBank, ChEMBL, and data sets from ChemSpider.
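The flavor of such rule-based checks can be sketched with RDKit, which reports valence problems on sanitization. The severity mapping below is our own invention for illustration, not the CVSP rule set.

```python
from rdkit import Chem

def validate(smiles: str) -> str:
    """Crude validation: parse and sanitize, mapping failures to a
    severity level (our invention, not the CVSP rules)."""
    mol = Chem.MolFromSmiles(smiles, sanitize=False)
    if mol is None:
        return "Error: unparsable structure"
    try:
        Chem.SanitizeMol(mol)   # valence, aromaticity, etc.
    except Exception as e:
        return f"Error: {e}"
    if any(a.GetSymbol() == "*" for a in mol.GetAtoms()):
        return "Warning: query/wildcard atom present"
    return "Information: passed basic checks"

for smi in ["CC(=O)Oc1ccccc1C(=O)O", "C(C)(C)(C)(C)C", "*CC"]:
    print(smi, "->", validate(smi))
```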
Jagannathan, V; Mullett, Charles J; Arbogast, James G; Halbritter, Kevin A; Yellapragada, Deepthi; Regulapati, Sushmitha; Bandaru, Pavani
2009-04-01
We assessed the current state of commercial natural language processing (NLP) engines for their ability to extract medication information from textual clinical documents. Two thousand de-identified discharge summaries and family practice notes were submitted to four commercial NLP engines with the request to extract all medication information. The four sets of returned results were combined to create a comparison standard which was validated against a manual, physician-derived gold standard created from a subset of 100 reports. Once validated, the individual vendor results for medication names, strengths, route, and frequency were compared against this automated standard with precision, recall, and F measures calculated. Compared with the manual, physician-derived gold standard, the automated standard was successful at accurately capturing medication names (F measure=93.2%), but performed less well with strength (85.3%) and route (80.3%), and relatively poorly with dosing frequency (48.3%). Moderate variability was seen in the strengths of the four vendors. The vendors performed better with the structured discharge summaries than with the clinic notes in an analysis comparing the two document types. Although automated extraction may serve as the foundation for a manual review process, it is not ready to automate medication lists without human intervention.
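The reported figures follow the usual set-based definitions. A small sketch of the evaluation arithmetic, with invented extraction output and gold annotations:

```python
def prf(extracted: set, gold: set) -> tuple:
    """Precision, recall, F measure over sets of extracted items."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# invented example: medication names from one note
gold = {"lisinopril", "metformin", "aspirin"}
extracted = {"lisinopril", "metformin", "ibuprofen"}
p, r, f = prf(extracted, gold)
print(f"P={p:.2f} R={r:.2f} F={f:.2f}")   # P=0.67 R=0.67 F=0.67
```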
WaterML, an Information Standard for the Exchange of in-situ hydrological observations
NASA Astrophysics Data System (ADS)
Valentine, D.; Taylor, P.; Zaslavsky, I.
2012-04-01
The WaterML 2.0 Standards Working Group (SWG), working within the Open Geospatial Consortium (OGC) and in cooperation with the joint OGC-World Meteorological Organization (WMO) Hydrology Domain Working Group (HDWG), has developed an open standard for the exchange of water observation data: WaterML 2.0. The focus of the standard is time-series data, commonly generated from in-situ monitoring. These are high-value data for hydrological applications such as flood forecasting, environmental reporting and supporting hydrological infrastructure (e.g. dams, supply systems); they are commonly exchanged, but a lack of standards inhibits efficient reuse and automation. Developing WaterML required a harmonization analysis of existing standards to identify overlapping concepts and come to agreement on harmonized definitions. Generally the formats captured similar requirements, all with subtle differences, such as how time-series point metadata was handled. The in-progress standard WaterML 2.0 incorporates the semantics of the hydrologic information: location, procedure, and observations. It is implemented as an application schema of the Geography Markup Language version 3.2.1, making use of the OGC Observations & Measurements standards. WaterML 2.0 is designed as an extensible schema to allow encoding of data for a variety of exchange scenarios. Example areas of usage are: exchange of data for operational hydrological monitoring programs; supporting operation of infrastructure (e.g. dams, supply systems); cross-border exchange of observational data; release of data for public dissemination; enhancing disaster management through data exchange; and exchange in support of national reporting. The first phase of WaterML 2.0 focused on structural definitions allowing for the transfer of time series, with less work on harmonization of vocabulary items such as quality codes. Vocabularies from various organizations tend to be specific and take time to come to agreement on. This will be continued in future work by the HDWG, along with extending the information model to cover additional types of hydrologic information: rating and gauging information, and water quality. Rating curves, gaugings and river cross sections are commonly exchanged in addition to standard time-series data to convey information relating to conversions such as river level to discharge. Members of the HDWG plan to initiate this work in early 2012. Water quality data are varied in the way they are processed and in the number of phenomena they measure. They will require specific extensions to the WaterML 2.0 model, most likely making use of the specimen types within O&M and extensive use of controlled vocabularies. Other future work involves different target encodings for the WaterML 2.0 conceptual model; encodings such as JSON, netCDF and CSV are optimized for particular needs, such as efficiency in the size of the encoding and parsing of structure, but may not be capable of representing the full extent of the WaterML 2.0 information model. Certain encodings are best matched to particular needs, and the community has begun investigating when and how best to implement them.
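A rough sketch of consuming a WaterML 2.0-style time series follows; the namespace URI and element names reflect our reading of the standard and should be checked against the published schemas before being relied upon.

```python
import xml.etree.ElementTree as ET

# namespace and element names per our reading of WaterML 2.0;
# verify against the published schemas before relying on them
NS = {"wml2": "http://www.opengis.net/waterml/2.0"}

doc = """
<wml2:MeasurementTimeseries xmlns:wml2="http://www.opengis.net/waterml/2.0">
  <wml2:point><wml2:MeasurementTVP>
    <wml2:time>2012-01-01T00:00:00Z</wml2:time>
    <wml2:value>1.37</wml2:value>
  </wml2:MeasurementTVP></wml2:point>
</wml2:MeasurementTimeseries>
"""

root = ET.fromstring(doc)
for tvp in root.iterfind(".//wml2:MeasurementTVP", NS):
    t = tvp.findtext("wml2:time", namespaces=NS)
    v = tvp.findtext("wml2:value", namespaces=NS)
    print(t, float(v))
```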
Robust membrane detection based on tensor voting for electron tomography.
Martinez-Sanchez, Antonio; Garcia, Inmaculada; Asano, Shoh; Lucic, Vladan; Fernandez, Jose-Jesus
2014-04-01
Electron tomography enables three-dimensional (3D) visualization and analysis of subcellular architecture at a resolution of a few nanometers. Segmentation of the structural components present in 3D images (tomograms) is often necessary for their interpretation, but it is severely hampered by a number of factors inherent to electron tomography (e.g. noise, low contrast, distortion). Thus, there is a need for new and improved computational methods to facilitate this challenging task. In this work, we present a new method for membrane segmentation based on anisotropic propagation of local structural information using the tensor voting algorithm. The local structure at each voxel is then refined according to the information received from other voxels. Because voxels belonging to the same membrane have coherent structural information, the underlying global structure is strengthened. In this way, local information is integrated at a global scale to yield segmented structures. The method performs well under the low signal-to-noise ratios typically found in tomograms of vitrified samples under cryo-tomography conditions and can bridge gaps in membranes. Its performance is demonstrated by applications to tomograms of different biological samples and by quantitative comparison with a standard template-matching procedure. Copyright © 2014 Elsevier Inc. All rights reserved.
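The following sketch illustrates the core tensor-voting idea in two dimensions with numpy: orientation tensors are accumulated with a distance-decayed weight and eigen-decomposed, and the eigenvalue gap (stick saliency) is high where votes agree. It is a toy analogue under simplifying assumptions, not the authors' 3D implementation.

```python
# Minimal 2-D sketch of the tensor-voting idea: each edge point casts its
# orientation tensor to nearby points with a Gaussian decay; accumulated
# tensors are eigen-decomposed, and the eigenvalue gap (stick saliency)
# highlights coherent curve/membrane structure. Real 3-D membrane
# segmentation uses more elaborate stick/plate/ball vote fields.
import numpy as np

def tensor_voting_2d(points, normals, sigma=5.0):
    """points: (N,2) coordinates; normals: (N,2) unit normals."""
    n = len(points)
    tensors = np.zeros((n, 2, 2))
    for i in range(n):
        for j in range(n):
            d = np.linalg.norm(points[i] - points[j])
            w = np.exp(-(d * d) / (sigma * sigma))   # vote strength decay
            nj = normals[j]
            tensors[i] += w * np.outer(nj, nj)       # stick vote from j to i
    # Stick saliency = lambda1 - lambda2: high where votes agree in direction.
    eigvals = np.linalg.eigvalsh(tensors)            # ascending order
    return eigvals[:, 1] - eigvals[:, 0]

# Points on a line share a normal and reinforce each other; an outlier does not.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [10.0, 7.0]])
nrm = np.array([[0.0, 1.0]] * 3 + [[1.0, 0.0]])
print(tensor_voting_2d(pts, nrm))  # saliency is highest for the collinear points
```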
Function Model for Community Health Service Information
NASA Astrophysics Data System (ADS)
Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong
In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify CHS information from the top down. The contents of every level of the model were described and coded. A function model for CHS information was then established, comprising 4 super-classes, 15 classes, and 28 sub-classes of business function, 43 business processes, and 168 business activities. This model can facilitate information management system development and workflow refinement.
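A coded, top-down function decomposition of this kind is straightforward to represent in software; the sketch below uses plain Python dataclasses, with invented placeholder codes and function names rather than the paper's actual model.

```python
# Illustrative sketch only: representing a coded, top-down IDEF0-style
# function decomposition as a nested Python structure. The codes and
# function names here are invented placeholders, not the paper's model.
from dataclasses import dataclass, field

@dataclass
class Function:
    code: str                     # hierarchical code, e.g. "A1.2"
    name: str
    children: list = field(default_factory=list)

model = Function("A0", "Community health service", [
    Function("A1", "Health records management", [
        Function("A1.1", "Register resident"),
        Function("A1.2", "Update health record"),
    ]),
    Function("A2", "Chronic disease management"),
])

def walk(f, depth=0):              # print the decomposition level by level
    print("  " * depth + f"{f.code} {f.name}")
    for c in f.children:
        walk(c, depth + 1)

walk(model)
```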
Turchin, Alexander; Shubina, Maria; Breydo, Eugene; Pendergrass, Merri L; Einbinder, Jonathan S
2009-01-01
OBJECTIVE To compare information obtained from narrative and structured electronic sources, using anti-hypertensive medication intensification as an example clinical issue of interest. DESIGN A retrospective cohort study of 5,634 hypertensive patients with diabetes from 2000 to 2005. MEASUREMENTS The authors determined the fraction of medication intensification events documented in both narrative and structured data in the electronic medical record, and analyzed the relationship between provider characteristics and concordance between intensifications in narrative and structured data. As there is no gold standard data source for medication information, the authors clinically validated medication intensification information by assessing the relationship between documented medication intensification and the patients' blood pressure in univariate and multivariate models. RESULTS Overall, 5,627 (30.9%) of 18,185 medication intensification events were documented in both sources. For a medication intensification event documented in narrative notes, the probability of a concordant entry in structured records increased by 11% for each study year (p < 0.0001) and decreased by 19% for each decade of provider age (p = 0.035). In a multivariate model that adjusted for patient demographics and intraphysician correlations, an increase of one medication intensification per month documented in either narrative or structured data was associated with a 5-8 mm Hg monthly decrease in systolic and a 1.5-4 mm Hg decrease in diastolic blood pressure (p < 0.0001 for all). CONCLUSION Narrative and structured electronic data sources provide complementary information on anti-hypertensive medication intensification. The clinical validity of information in both sources was demonstrated by correlation with changes in blood pressure.
PHASE I MATERIALS PROPERTY DATABASE DEVELOPMENT FOR ASME CODES AND STANDARDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Weiju; Lin, Lianshan
2013-01-01
To support the ASME Boiler and Pressure Vessel Code and Standards (BPVC) in the modern information era, development of a web-based materials property database has been initiated under the supervision of the ASME Committee on Materials. To achieve efficiency, the project draws heavily upon experience from development of the Gen IV Materials Handbook and the Nuclear System Materials Handbook. The effort is divided into two phases. Phase I is planned to deliver a materials data file warehouse that offers a repository for files containing raw data and background information, and Phase II will provide a relational digital database with advanced features facilitating digital data processing and management. Population of the database will start with materials property data for nuclear applications and expand to cover the entire ASME Code and Standards, including the piping codes, as the database structure is continuously optimized. The ultimate goal of the effort is to establish a sound cyber infrastructure that supports ASME Codes and Standards development and maintenance.
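As a rough illustration of what Phase I warehouse metadata might look like, the following sketch uses Python's built-in sqlite3 module; the table and column names are assumptions for illustration, not the ASME database design.

```python
# A minimal sketch of Phase I-style warehouse metadata using the Python
# standard library's sqlite3. Table and column names are assumptions for
# illustration; the actual ASME database design is not published here.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE material (
    id INTEGER PRIMARY KEY,
    designation TEXT NOT NULL,      -- e.g. an alloy designation
    application TEXT                -- e.g. 'nuclear', 'piping'
);
CREATE TABLE data_file (
    id INTEGER PRIMARY KEY,
    material_id INTEGER REFERENCES material(id),
    property TEXT,                  -- e.g. 'creep rupture'
    path TEXT,                      -- location of the raw data file
    background TEXT                 -- provenance / background information
);
""")
con.execute("INSERT INTO material (designation, application) VALUES (?, ?)",
            ("Alloy 617", "nuclear"))
con.execute("INSERT INTO data_file (material_id, property, path) VALUES (1, ?, ?)",
            ("creep rupture", "files/alloy617_creep.xlsx"))
for row in con.execute("SELECT designation, property, path FROM material "
                       "JOIN data_file ON material.id = data_file.material_id"):
    print(row)
```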
Exploring the Use of Enterprise Content Management Systems in Unification Types of Organizations
NASA Astrophysics Data System (ADS)
Izza Arshad, Noreen; Mehat, Mazlina; Ariff, Mohamed Imran Mohamed
2014-03-01
The aim of this paper is to better understand how highly standardized and integrated businesses, known as unification types of organizations, use Enterprise Content Management Systems (ECMS) to support their business processes. A multiple case study approach was used to study the ways two unification organizations use their ECMS in daily work practices. Arising from these case studies are insights into the differing ways in which ECMS are used to support businesses. Based on a comparison of the two cases, this study proposes that unification organizations may use ECMS in four ways: (1) for collaboration; (2) for information sharing that supports a standardized process structure; (3) for building custom workflows that support integrated and standardized processes; and (4) for providing links and access to information systems. These findings may guide organizations that operate in a highly standardized and integrated fashion to achieve their intended ECMS use, to understand reasons for ECMS failure and underutilization, and to exploit technology investments.
2012-02-13
Acronyms: DCMO, Deputy Chief Management Officer; DDRS, Defense Departmental Reporting System; DFAS, Defense Finance and Accounting Service; ERP, Enterprise Resource Planning. ...for your review and comment. The Navy approved deployment of the Navy Enterprise Resource Planning (ERP) System without ensuring it complied with the... Comments from the Deputy Assistant Secretary of the Navy (Financial Management and Comptroller, Office of Financial Operations) and the Navy ERP Program...
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.
1987-01-01
To promote the evaluation of existing and emerging unsteady aerodynamic codes and methods for applying them to aeroelastic problems, especially for the transonic range, a limited number of aerodynamic configurations and experimental dynamic response data sets are to be designated by the AGARD Structures and Materials Panel as standards for comparison. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. This report presents the information needed to perform flutter calculations for the first candidate standard configuration for dynamic response along with the related experimental flutter data.
XML and its impact on content and structure in electronic health care documents.
Sokolowski, R.; Dudeck, J.
1999-01-01
Worldwide information networks require that electronic documents be easily accessible, portable, flexible, and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics, and the Web itself is about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products, and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop implementation recommendations for XML documents in health care. These recommendations are being developed to standardize the many different structures of documents and take the form of a set of standard DTDs, or document type definitions, that match the electronic document requirements of the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338
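To illustrate what a DTD-validated health care document looks like, here is a toy example using the third-party lxml library; the element names are invented for illustration and are not the ASTM E31.25 DTDs themselves.

```python
# Illustrative sketch only: a toy DTD in the spirit of the standard document
# type definitions discussed above, validated with lxml (a third-party
# library). The element names are invented for illustration and are not the
# ASTM E31.25 DTDs themselves.
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""
<!ELEMENT dischargeSummary (patient, diagnosis+, medication*)>
<!ELEMENT patient (#PCDATA)>
<!ELEMENT diagnosis (#PCDATA)>
<!ELEMENT medication (#PCDATA)>
"""))

doc = etree.fromstring(
    "<dischargeSummary>"
    "<patient>Jane Doe</patient>"
    "<diagnosis>Hypertension</diagnosis>"
    "<medication>Lisinopril 10 mg</medication>"
    "</dischargeSummary>"
)
print(dtd.validate(doc))   # True: the document conforms to the toy DTD
```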
Botts, Nathan; Bouhaddou, Omar; Bennett, Jamie; Pan, Eric; Byrne, Colene; Mercincavage, Lauren; Olinger, Lois; Hunolt, Elaine; Cullen, Theresa
2014-01-01
The authors studied the United States (U.S.) Department of Veterans Affairs' (VA) Virtual Lifetime Electronic Record (VLER) Health pilot phase relative to two attributes of data quality: the adoption of eHealth Exchange data standards, and the clinical content exchanged. The VLER Health pilot was an early effort in testing implementation of eHealth Exchange standards and technology. Testing included evaluation of exchange data from the VLER Health pilot site partners: VA, the U.S. Department of Defense (DoD), and private sector health care organizations. The assessment addressed data quality and interoperability as they relate to: 1) conformance with data standards governing the underlying structure of C32 Summary Documents (C32) produced by eHealth Exchange partners; and 2) the types of C32 clinical content exchanged. This analysis identified several standards non-conformance issues in sample C32 files and informed further discourse on the methods needed to effectively monitor Health Information Exchange (HIE) data content and standards conformance.
Using Ecosystem Services to Inform Decisions on U.S. Air Quality Standards
The ecosystem services (ES) framework provides a link between changes in a natural system’s structure and function and public welfare. This systematic integration of ecology and economics allows for more consistency and transparency in environmental decision making by enab...
Barros Castro, Jesús; Lamelo Alfonsín, Alejandro; Prieto Cebreiro, Javier; Rimada Mora, Dolores; Carrajo García, Lino; Vázquez González, Guillermo
2015-01-01
In their daily procedures, companies and organizations produce large quantities of data. Medical information doubles approximately every five years, and most of it is unstructured and cannot be exploited. Information obtained during Primary Health Care (PC) consultations is expected to be standardized and organized following the archetype-based ISO 13606 standard of the International Organization for Standardization (ISO), in order to guarantee continuity of care as well as the potential use of these data for secondary purposes, such as research or statistics. This study was designed to investigate the feasibility of representing the information collected in Primary Care consultations in a structured and normalized way. A key difference from other approaches is that the intended solution is, to the best of our knowledge, the first to register all the information collected in this area. The participation of the Primary Health Care service (PC) of the Complejo Hospitalario Universitario de A Coruña (CHUAC) has been of vital importance in this project, as it has provided the necessary clinical knowledge and has allowed us to verify the effectiveness obtained in real environments. The archetypes developed can be reused in a wide range of projects. As an example of use, we have employed these archetypes to create an intelligent system that generates organized reports based on the information dictated during a medical consultation, which can afterwards be exploited analytically.
NASA Astrophysics Data System (ADS)
Perez, Alejandro
2015-04-01
In an approach to quantum gravity where space-time arises from coarse graining of fundamentally discrete structures, black hole formation and subsequent evaporation can be described by a unitary evolution without the problems encountered by the standard remnant scenario or the schemes where information is assumed to come out with the radiation during evaporation (firewalls and complementarity). The final state is purified by correlations with the fundamental pre-geometric structures (in the sense of Wheeler), which are available in such approaches, and, like defects in the underlying space-time weave, can carry zero energy.
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the algorithm impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
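A minimal sketch of the LMI machinery involved, using the cvxpy convex-optimization package: it checks feasibility of the classic Lyapunov stability LMI (find P > 0 with A'P + PA < 0) for an assumed toy system. The paper's full-information synthesis LMI is larger, but it is solved with the same kind of machinery.

```python
# A minimal sketch of the LMI flavour of problem described above, using the
# cvxpy convex-optimization package: find P > 0 with A'P + PA < 0 (a Lyapunov
# stability LMI). The full-information synthesis LMI is larger, but it is
# solved with the same machinery; this toy system matrix is an assumption.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # stable toy system matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # decay condition
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # 'optimal' means a feasible P exists: the LMI is solvable
print(P.value)
```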
Atalağ, Koray; Bilgen, Semih; Gür, Gürden; Boyacioğlu, Sedat
2007-09-01
There are very few evaluation studies of the Minimal Standard Terminology for Digestive Endoscopy. This study aims to evaluate the usage of the Turkish translation of the Minimal Standard Terminology through the development of an endoscopic information system. After elicitation of requirements, database modeling and software development were performed. Minimal Standard Terminology driven forms were designed for rapid data entry. The endoscopic report was created rapidly by applying basic Turkish syntax and grammar rules; entering free text and editing the final report were also possible. After three years of live usage, data analysis was performed and the results were evaluated. The system has been used for reporting of all endoscopic examinations. A total of 15,638 valid records were analyzed, including 11,381 esophagogastroduodenoscopies, 2,616 colonoscopies, 1,079 rectoscopies, and 562 endoscopic retrograde cholangiopancreatographies. In accordance with previous validation studies, the overall usage of Minimal Standard Terminology terms was very high: 85% for examination characteristics, 94% for endoscopic findings, and 94% for endoscopic diagnoses. Some new terms, attributes, and allowed values were added for better clinical coverage. The Minimal Standard Terminology has been shown to cover a high proportion of routine endoscopy reports, and good user acceptance shows that both its terms and structure are consistent with usual clinical thinking. However, future work on the Minimal Standard Terminology is needed for better coverage of endoscopic retrograde cholangiopancreatography examinations. Technically, new software development methodologies should be sought to lower the cost of development and maintenance; they should also address integration and interoperability of disparate information systems.
Sharing Vital Signs between mobile phone applications.
Karlen, Walter; Dumont, Guy A; Scheffer, Cornie
2014-01-01
We propose a communication library, ShareVitalSigns, for the standardized exchange of vital sign information between health applications running on mobile platforms. The library allows an application to request one or multiple vital signs from independent measurement applications on the Android OS. Compatible measurement applications are automatically detected and can be launched from within the requesting application, simplifying the workflow for the user and reducing typing errors. Data are shared between applications using intents, passive data structures available on the Android OS. The library is accompanied by a test application which serves as a demonstrator. The secure exchange of vital sign information using a standardized library like ShareVitalSigns will facilitate the integration of measurement applications into diagnostic and other high-level health monitoring applications and reduce errors due to manual entry of information.
Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.
Richard, Ann M; Williams, ClarLynda R
2002-01-29
The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria for and uses of toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or the building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (the structure data file, SDF) for public toxicity databases that includes chemical structure, text, and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
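The SDF format at the heart of element (1) pairs a chemical structure with named data fields in a single record; the sketch below shows the idea using the RDKit toolkit, with illustrative property names ('CAS_RN', 'TOXICITY_CLASS') that are assumptions rather than a DSSTox field specification.

```python
# A short sketch of the SDF idea described above, using the RDKit
# cheminformatics toolkit: chemical structure plus named data fields in one
# record. The property names ('TOXICITY_CLASS', 'CAS_RN') are illustrative
# assumptions, not a DSSTox field specification.
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1O")          # phenol, as an example structure
mol.SetProp("_Name", "phenol")
mol.SetProp("CAS_RN", "108-95-2")
mol.SetProp("TOXICITY_CLASS", "example-class")  # toxicity annotation field

writer = Chem.SDWriter("toxdata.sdf")           # write a one-record SDF
writer.write(mol)
writer.close()

# Reading back: each record carries its structure and its data fields,
# ready for import into a chemical relational database.
for m in Chem.SDMolSupplier("toxdata.sdf"):
    print(m.GetProp("_Name"), m.GetProp("TOXICITY_CLASS"))
```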
Shuttle-Data-Tape XML Translator
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
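The configuration-driven parsing approach can be illustrated in a few lines of standard-library Python; the field names and record layout below are invented for illustration, and JSDTImport's own XML configuration format is not reproduced here.

```python
# Illustrative sketch only: the XML-configured parsing approach described
# above, reduced to a few lines of standard-library Python. The field names
# and record layout are invented; JSDTImport itself defines records in its
# own XML configuration format, which is not reproduced here.
import xml.etree.ElementTree as ET

config = ET.fromstring("""
<record>
  <field name="measurement_id" start="0" length="8"/>
  <field name="units" start="8" length="4"/>
  <field name="description" start="12" length="16"/>
</record>
""")

def parse_record(line, cfg):
    """Slice one fixed-width ASCII record according to the XML layout."""
    out = {}
    for f in cfg.findall("field"):
        start = int(f.get("start"))
        out[f.get("name")] = line[start:start + int(f.get("length"))].strip()
    return out

print(parse_record("V74X1234PSI Cabin pressure  ", config))
```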
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders, and two novel model structures. We test standard versions as well as ADPH length-modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data, and additional scripts are freely available for download at http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
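For readers unfamiliar with the machinery being compared, the sketch below scores a DNA sequence under a toy two-state (coding/non-coding) HMM using the forward algorithm in numpy; the probabilities are invented, and the paper's actual models are PRISM probabilistic logic programs, not Python.

```python
# A compact numpy sketch of the machinery being compared above: the forward
# algorithm scoring a sequence under a two-state (coding/non-coding) HMM.
# The transition/emission numbers are invented toy values; the paper's
# models are expressed as PRISM probabilistic logic programs, not Python.
import numpy as np

symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

pi = np.array([0.9, 0.1])                    # initial state distribution
A = np.array([[0.95, 0.05],                  # state transition probabilities
              [0.10, 0.90]])
E = np.array([[0.25, 0.25, 0.25, 0.25],      # uniform emissions: noncoding
              [0.20, 0.30, 0.30, 0.20]])     # GC-rich emissions: coding

def log_likelihood(seq):
    """Forward algorithm: total probability of seq under the HMM."""
    obs = [symbols[c] for c in seq]
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * E[:, o]
    return np.log(alpha.sum())

# Model selection in the paper weighs such likelihoods against model
# complexity via statistical information criteria (e.g. BIC).
print(log_likelihood("ATGGCGCGTAA"))
```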
Toxico-Cheminformatics: New and Expanding Public ...
High-throughput screening (HTS) technologies, along with efforts to improve public access to chemical toxicity information resources and to systematize older toxicity studies, have the potential to significantly improve information gathering for chemical assessments and predictive capabilities in toxicology. Important developments include: 1) large and growing public resources that link chemical structures to biological activity and toxicity data in searchable format, and that offer more nuanced and varied representations of activity; 2) standardized relational data models that capture relevant details of chemical treatment and effects of published in vivo experiments; and 3) the generation of large amounts of new data from public efforts that are employing HTS technologies to probe a wide range of bioactivity and cellular processes across large swaths of chemical space. By annotating toxicity data with associated chemical structure information, these efforts link data across diverse study domains (e.g., 'omics', HTS, traditional toxicity studies), toxicity domains (carcinogenicity, developmental toxicity, neurotoxicity, immunotoxicity, etc.), and database sources (EPA, FDA, NCI, DSSTox, PubChem, GEO, ArrayExpress, etc.). Public initiatives are developing systematized data models of toxicity study areas and introducing standardized templates, controlled vocabularies, hierarchical organization, and powerful relational searching capability across capt…
Park, Jin Seo; Jung, Yong Wook; Choi, Hyung-Do; Lee, Ae-Kyoung
2018-05-01
The anatomical structures in most phantoms are classified according to tissue properties rather than according to their detailed structures, because the tissue properties, not the detailed structures, are what is considered important. However, if a phantom does not have detailed structures, the phantom will be unreliable, because different tissues can be regarded as the same. Thus, we produced the Visible Korean (VK) phantoms with detailed structures (male, 583 structures; female, 459 structures) based on segmented images of the whole male body (interval, 1.0 mm; pixel size, 1.0 mm²) and the whole female body (interval, 1.0 mm; pixel size, 1.0 mm²), using in-house software to analyze the text string and voxel information for each of the structures. The density of each structure in the VK-phantom was calculated based on the Virtual Population and a publication of the International Commission on Radiological Protection. In the future, we will standardize the size of each structure in the VK-phantoms. If the VK-phantoms are standardized and the mass density of each structure is precisely known, researchers will be able to measure the exact absorption rate of electromagnetic radiation in specific organs and tissues of the whole body.
Shen, Yang; Bax, Ad
2015-01-01
Chemical shifts are obtained at the first stage of any protein structural study by NMR spectroscopy. They are known to be impacted by a wide range of structural factors, and the artificial-neural-network-based TALOS-N program has been trained to extract backbone and sidechain torsion angles from 1H, 15N, and 13C shifts. The program is quite robust and typically yields backbone torsion angles for more than 90% of the residues, and sidechain χ1 rotamer information for about half of these, in addition to reliably predicting secondary structure. The use of TALOS-N is illustrated for the protein DinI, and torsion angles obtained by TALOS-N analysis from the measured chemical shifts of its backbone and 13Cβ nuclei are compared to those seen in a prior, experimentally determined structure. The program is also particularly useful for generating torsion angle restraints, which can then be used during standard NMR protein structure calculations. PMID:25502373
Quantum Monte Carlo Studies of Bulk and Few- or Single-Layer Black Phosphorus
NASA Astrophysics Data System (ADS)
Shulenburger, Luke; Baczewski, Andrew; Zhu, Zhen; Guan, Jie; Tomanek, David
2015-03-01
The electronic and optical properties of phosphorus depend strongly on the structural properties of the material. Given the limited experimental information on the structure of phosphorene, it is natural to turn to electronic structure calculations to provide this information. Unfortunately, given phosphorus' propensity to form layered structures bound by van der Waals interactions, standard density functional theory methods provide results of uncertain accuracy. Recently, it has been demonstrated that Quantum Monte Carlo (QMC) methods achieve high accuracy when applied to solids in which van der Waals forces play a significant role. In this talk, we will present QMC results from our recent calculations on black phosphorus, focusing on the structural and energetic properties of monolayers, bilayers and bulk structures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Feedback data sources that inform physician self-assessment.
Lockyer, Jocelyn; Armson, Heather; Chesluk, Benjamin; Dornan, Timothy; Holmboe, Eric; Loney, Elaine; Mann, Karen; Sargeant, Joan
2011-01-01
Self-assessment is a process of interpreting data about one's performance and comparing it to explicit or implicit standards. This study examined the external data sources physicians used to monitor themselves. Focus groups were conducted with physicians who participated in three practice improvement activities: a multisource feedback program; a program providing patient and chart audit data; and practice-based learning groups. We used grounded theory strategies to understand the external sources that stimulated self-assessment and how they worked. Data from seven focus groups (49 physicians) were analyzed. Physicians used information from structured programs, other educational activities, professional colleagues, and patients. Data were of varying quality, often from non-formal sources with implicit (not explicit) standards. Mandatory programs elicited variable responses, whereas data and activities the physicians selected themselves were more likely to be accepted. Physicians used the information to create a reference point against which they could weigh their performance, using it variably depending on their personal interpretation of its accuracy, application, and utility. Physicians use and interpret data and standards of varying quality to inform self-assessment; they may benefit from regular and routine feedback and from guidance on how to seek out data for self-assessment.
Dependency-based Siamese long short-term memory network for learning sentence representations
Zhu, Wenhao; Ni, Jianyue; Wei, Baogang; Lu, Zhiguo
2018-01-01
Textual representations play an important role in the field of natural language processing (NLP). The efficiency of NLP tasks, such as text comprehension and information extraction, can be significantly improved with proper textual representations. As neural networks are increasingly applied to learn the representations of words and phrases, fairly efficient models for learning short-text representations have been developed, such as the continuous bag of words (CBOW) and skip-gram models, and they have been extensively employed in a variety of NLP tasks. Because longer texts, such as sentences, have more complex structure, algorithms appropriate for learning short textual representations are not directly applicable to learning long textual representations. One method for learning long textual representations is the Long Short-Term Memory (LSTM) network, which is suitable for processing sequences. However, the standard LSTM does not adequately address the primary sentence structure (subject, predicate, and object), which is an important factor in producing appropriate sentence representations. To resolve this issue, this paper proposes the dependency-based LSTM model (D-LSTM). The D-LSTM divides a sentence representation into two parts: a basic component and a supporting component. The D-LSTM uses a pre-trained dependency parser to obtain the primary sentence information and generate the supporting component, and it uses a standard LSTM model to generate the basic component. A weight factor that can adjust the ratio of the basic and supporting components in a sentence is introduced to generate the sentence representation. Compared with the representation learned by the standard LSTM, the sentence representation learned by the D-LSTM contains a greater amount of useful information. The experimental results show that the D-LSTM is superior to the standard LSTM on the Sentences Involving Compositional Knowledge (SICK) dataset. PMID:29513748
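A hedged PyTorch sketch of the two-component idea: one LSTM encodes the full token sequence (basic component), a second encodes only dependency-selected head words (supporting component), and a weight factor mixes them. Layer sizes and the head-word selection are assumptions, not the authors' released implementation.

```python
# A hedged PyTorch sketch of the two-component idea described above: one LSTM
# encodes the full word sequence (basic component), another encodes only the
# dependency-selected head words (supporting component), and a weight factor
# w mixes them. Layer sizes and the selection of head words are assumptions;
# this is not the authors' released implementation.
import torch
import torch.nn as nn

class DLSTMSketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, w=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.basic_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.support_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.w = w  # ratio of supporting to basic component

    def forward(self, tokens, head_tokens):
        # tokens: (batch, seq_len); head_tokens: (batch, k) indices of the
        # subject/predicate/object words found by a dependency parser.
        _, (h_basic, _) = self.basic_lstm(self.embed(tokens))
        _, (h_supp, _) = self.support_lstm(self.embed(head_tokens))
        return h_basic[-1] + self.w * h_supp[-1]   # sentence representation

model = DLSTMSketch()
sentence = torch.randint(0, 1000, (2, 12))   # two toy token sequences
heads = torch.randint(0, 1000, (2, 3))       # their (assumed) S-P-O tokens
print(model(sentence, heads).shape)          # torch.Size([2, 128])
```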
Structured-illumination reflectance imaging as a new modality for food quality detection
USDA-ARS?s Scientific Manuscript database
Uniform or diffuse illumination is the standard in implementing many different imaging modalities. This form of illumination, however, has some major limitations in acquisition of useful information from food products because reflectance from the food products is non-uniform due to irregular, curved...
ERIC Educational Resources Information Center
Arizona Univ., Tucson. Cooperative Extension Service.
This manual supplies information helpful to individuals wishing to become certified in public health pest control. It is designed as a technical reference for vector control workers and as preparatory material for structural applicators of restricted use pesticides to meet the General Standards of Competency required of commercial applicators. The…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-30
... of districts, sites, buildings, structures, and objects significant in American history, architecture..., architecture, engineering, or archeology based on national standards. The designation places no obligations on... individuals; state, tribal, and local governments; businesses; educational institutions; and nonprofit...
Topic Maps e-Learning Portal Development
ERIC Educational Resources Information Center
Olsevicova, Kamila
2006-01-01
Topic Maps, the ISO/IEC 13250 standard, are designed to facilitate the organization and navigation of large collections of information objects by creating meta-level perspectives of their underlying concepts and relationships. The underlying structure of concepts and relations is expressed by domain ontologies. The Topic Maps technology can become…
Radiology reporting-from Hemingway to HAL?
Brady, Adrian P
2018-04-01
The job of the diagnostic radiologist is two-fold: identifying and interpreting the information available from diagnostic imaging studies, and communicating that interpretation meaningfully to the referring clinician. However skilled our interpretive abilities, our patients are not well served if we fail to convey our conclusions effectively. Despite the central importance of communication skills to the work of radiologists, trainees rarely receive significant formal training in reporting skills, and much of the training given simply reflects the trainer's personal preferences. Studies have shown a preference among referrers for reports in a structured form, with findings given in a standard manner, followed by a conclusion. The technical competence to incorporate structured report templates into PACS/RIS systems is growing, and radiology societies (including the European Society of Radiology (ESR)) are active in producing and validating templates for a wide range of modalities and clinical circumstances. While some radiologists may prefer prose-format reports, and much literature has been produced addressing "dos and don'ts" for such prose reports, it seems likely that structured reporting will become the norm in the near future. Benefits will include homogenisation and standardisation of reports, certainty that significant information has not been omitted, and the capacity for data-mining of structured reports for research and teaching purposes. • The radiologist's job includes interpretation of imaging studies AND communication. • Traditionally, communication has taken the form of a prose report. • Referrers have been shown to prefer reports in a structured format. • Structured reports have many advantages over traditional prose reports. • It is likely that structured reports represent the future standard.
Metadata Design in the New PDS4 Standards - Something for Everybody
NASA Astrophysics Data System (ADS)
Raugh, Anne C.; Hughes, John S.
2015-11-01
The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata: how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation in label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost, putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually) and, more importantly, to delegate stewardship of a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace, even the core PDS namespace, existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users, and data preparers.
The Forensic Potential of Flash Memory
2009-09-01
...limit range of 10 to 100 years before data is lost [12]. 5. Flash Memory Logical Structure. The logical structure of flash memory from least to... area is not standardized and is manufacturer specific. This information will be used by the wear leveling algorithms and as such will be proprietary... memory cells, the manufacturers of the flash implement a wear leveling algorithm. In contrast, a magnetic disk in an overwrite operation will reuse the...
Emergency healthcare process automation using mobile computing and cloud services.
Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G
2012-10-01
Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time raises new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems, and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile; uses the Organization for the Advancement of Structured Information Standards (OASIS) Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) standard for exchanging operational data with hospitals; and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case.
Proceedings of a Conference on Telecommunication Technologies, Networkings and Libraries
NASA Astrophysics Data System (ADS)
Knight, N. K.
1981-12-01
Current and developing technologies for digital transmission of image data likely to have an impact on the operations of libraries and information centers, or to provide support for information networking, are reviewed. Topics reviewed include slow scan television, teleconferencing, and videodisc technology; standards development for computer network interconnection through hardware and software, particularly packet-switched networks; computer network protocols for library and information service applications; the structure of a national bibliographic telecommunications network; and the major policy issues involved in the regulation or deregulation of the common communications carriers industry.
Standards opportunities around data-bearing Web pages.
Karger, David
2013-03-28
The evolving Web has seen ever-growing use of structured data, thanks to the way it enhances information authoring, querying, visualization and sharing. To date, however, most structured data authoring and management tools have been oriented towards programmers and Web developers. End users have been left behind, unable to leverage structured data for information management and communication as well as professionals. In this paper, I will argue that many of the benefits of structured data management can be provided to end users as well. I will describe an approach and tools that allow end users to define their own schemas (without knowing what a schema is), manage data and author (not program) interactive Web visualizations of that data using the Web tools with which they are already familiar, such as plain Web pages, blogs, wikis and WYSIWYG document editors. I will describe our experience deploying these tools and some lessons relevant to their future evolution.
Data File Standard for Flow Cytometry, version FCS 3.1.
Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R
2010-01-01
The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next-generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for a preferred display scale, a standardized way of capturing the sample volume, information about the originality of the data file, and support for plate and well identification in high-throughput, plate-based experiments. Please see the normative version of the FCS 3.1 specification in the Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
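The fixed-layout HEADER segment that opens every FCS file can be read with a few lines of Python; the offset positions below follow the published FCS layout, while error handling and the segment parsing that would follow are omitted.

```python
# A minimal sketch of reading the fixed-layout FCS HEADER segment the
# standard describes: a 6-byte version string (e.g. 'FCS3.1') followed by
# ASCII byte offsets to the TEXT, DATA, and ANALYSIS segments. Offset
# positions follow the published FCS layout; error handling is omitted.
def read_fcs_header(path):
    with open(path, "rb") as fh:
        header = fh.read(58)
    version = header[0:6].decode("ascii")          # e.g. 'FCS3.1'

    def offset(start, end):
        field = header[start:end].decode("ascii").strip()
        return int(field) if field else None       # ANALYSIS may be blank

    return {
        "version": version,
        "text":     (offset(10, 18), offset(18, 26)),
        "data":     (offset(26, 34), offset(34, 42)),
        "analysis": (offset(42, 50), offset(50, 58)),
    }

# Usage (assuming an FCS file is available):
# info = read_fcs_header("sample.fcs")
# print(info["version"], info["text"])
```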
Enhancement of CLAIM (clinical accounting information) for a localized Chinese version.
Guo, Jinqiu; Takada, Akira; Niu, Tie; He, Miao; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki
2005-10-01
CLinical Accounting InforMation (CLAIM) is a standard for the exchange of data between patient accounting systems and electronic medical record (EMR) systems. It uses eXtensible Markup Language (XML) as a meta-language and was developed in Japan. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data between different medical institutions. It has inherited the basic structure of MML 2.x, and the current version, 2.1, contains two modules and nine data definition tables. In China, no data exchange standard yet exists that links EMR systems to accounting systems. Taking advantage of CLAIM's flexibility, we created a localized Chinese version based on CLAIM 2.1. Because Chinese receipt systems differ from Japanese ones, some information, such as prescription formats, also differs. Two CLAIM modules were re-engineered, and six data definition tables were either added or redefined. The Chinese version of CLAIM takes local needs into account, and consequently it is now possible to transfer data between the patient accounting systems and EMR systems of Chinese medical institutions effectively.
How to integrate quantitative information into imaging reports for oncologic patients.
Martí-Bonmatí, L; Ruiz-Martínez, E; Ten, A; Alberich-Bayarri, A
2018-05-01
Nowadays, the images and information generated in imaging tests, as well as the reports that are issued, are digital and represent a reliable source of data. Reports can be classified according to their content and the type of information they include into three main types: narrative (free text in natural language); predefined (with templates and guidelines elaborated in previously determined natural language, like that used in BI-RADS and PI-RADS); or structured (with drop-down menus displaying questions with various possible answers agreed on with the rest of the multidisciplinary team, using standardized lexicons, and organized in the form of a database whose data can be traced and exploited with statistical tools and data mining). The structured report, compatible with the Management of Radiology Report Templates (MRRT) standard, makes it possible to incorporate quantitative information derived from the digital analysis of the acquired images, describing the properties and behavior of tissues accurately and precisely by means of radiomics (characteristics and parameters). In conclusion, structured digital information (images, text, measurements, radiomic features, and imaging biomarkers) should be integrated into computerized reports so that they can be indexed in large repositories. Radiologic databanks are fundamental for exploiting health information, phenotyping lesions and diseases, and extracting conclusions in personalized medicine. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
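As a sketch of what such a database-ready structured record might contain, the following Python example bundles narrative fields with quantitative radiomic features and imaging biomarkers; the field names and values are invented placeholders, not a published MRRT template or SERAM schema.

```python
# Illustrative sketch only: a structured, database-ready report record that
# bundles narrative fields with quantitative radiomic features and imaging
# biomarkers. The field names and values are invented placeholders, not a
# published MRRT template or SERAM schema.
import json

report = {
    "patient_id": "ANON-0001",
    "exam": {"modality": "MR", "region": "liver", "date": "2018-03-14"},
    "findings": [
        {
            "lesion_id": 1,
            "description": "Focal lesion, segment VII",
            "measurements": {"longest_diameter_mm": 23.0},
            "imaging_biomarkers": {          # quantitative, traceable values
                "ADC_mm2_per_s": 1.1e-3,
                "fat_fraction_percent": 4.2
            }
        }
    ],
    "conclusion": "Lesion stable compared with prior examination."
}

# Serialized records like this can be indexed in large repositories and
# exploited later with statistical tools and data mining.
print(json.dumps(report, indent=2))
```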
ERIC Educational Resources Information Center
Golden, Cynthia; Eisenberger, Dorit
1990-01-01
Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…
ERIC Educational Resources Information Center
National Bureau of Standards (DOC), Washington, DC.
These guidelines provide a handbook for use by federal organizations in structuring physical security and risk management programs for their automatic data processing facilities. This publication discusses security analysis, natural disasters, supporting utilities, system reliability, procedural measures and controls, off-site facilities,…
FABS (Formulated Abstracting): An Experiment in Regularized Content Description.
ERIC Educational Resources Information Center
Harris, Brian; Hofmann, Thomas R.
This preliminary report of research conducted at the Linguistics Documentation Centre of the University of Ottawa describes a bilingual experiment in elaborating well-structured formulary routines that make the writing of abstracts easier while standardizing and generally augmenting the information given in them. The…
Teachers' Increased Use of Informational Text: A Phenomenological Study of Five Primary Classrooms
ERIC Educational Resources Information Center
Young, Heather D.; Goering, Christian Z.
2018-01-01
The purpose of this phenomenological study was to explain how the Common Core State Standards may have influenced teachers' practices and philosophies regarding literacy instruction. Conducted in five kindergarten through second-grade classrooms within one elementary school, this research study collected semi-structured interviews, classroom…
Schoeppe, Franziska; Sommer, Wieland H; Haack, Mareike; Havel, Miriam; Rheinwald, Marika; Wechtenbruch, Juliane; Fischer, Martin R; Meinel, Felix G; Sabel, Bastian O; Sommer, Nora N
2018-01-01
To compare free-text reports (FTR) and structured reports (SR) of videofluoroscopic swallowing studies (VFSS) and evaluate the satisfaction of referring otolaryngologists and speech therapists. Both standard FTRs and SRs were acquired for 26 patients undergoing VFSS. A dedicated template focusing on oropharyngeal phases was created for SR using online software with clickable decision trees and concomitant generation of semantically structured reports. All reports were evaluated regarding overall quality and content, information extraction, and clinical decision support on a 10-point Likert scale (0 = I completely disagree, 10 = I completely agree). Two otorhinolaryngologists and two speech therapists evaluated the FTRs and SRs. SRs received better ratings than FTRs on all items. SRs were perceived to contain more details on the swallowing phases (median rating: 10 vs. 5; P < 0.001) and on penetration and aspiration (10 vs. 5; P < 0.001), and facilitated information extraction compared to FTRs (10 vs. 4; P < 0.001). Overall quality was rated significantly higher for SRs than FTRs (P < 0.001). SRs of VFSS provide more detailed information and facilitate information extraction. SRs better assist in clinical decision-making, might enhance the quality of the report, and are thus recommended for the evaluation of VFSS. • Structured reports on videofluoroscopic exams of deglutition lead to improved report quality. • Information extraction is facilitated when using structured reports based on decision trees. • Template-based reports add more value to clinical decision-making than free-text reports. • Structured reports receive better ratings by speech therapists and otolaryngologists. • Structured reports on videofluoroscopic exams may improve the comparability between exams.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
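The notion of interval coverage investigated here can be demonstrated with a small simulation; the numpy sketch below estimates the empirical coverage of a nominal 95% z-interval for a normal mean, a textbook stand-in for the far more complex ordinal CFA estimates studied in the paper.

```python
# A small numpy simulation of the concept at the heart of the study above:
# the empirical coverage of a nominal 95% confidence interval. A textbook
# z-interval for a normal mean serves as a stand-in; the paper studies far
# more complex ordinal CFA interval estimates.
import numpy as np

rng = np.random.default_rng(0)
n, reps, z = 20, 10_000, 1.96          # small sample, many replications
true_mean = 0.0

covered = 0
for _ in range(reps):
    x = rng.normal(true_mean, 1.0, size=n)
    half = z * x.std(ddof=1) / np.sqrt(n)
    if x.mean() - half <= true_mean <= x.mean() + half:
        covered += 1

# Undercoverage (an empirical rate below 0.95) signals standard errors that
# are too small, which inflates Type-I error rates.
print(covered / reps)
```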
Standardization of Questions in Rare Disease Registries: The PRISM Library Project.
Richesson, Rachel Lynn; Shereff, Denise; Andrews, James Everett
2012-10-10
Patient registries are often a helpful first step in estimating the impact and understanding the etiology of rare diseases - both requisites for the development of new diagnostics and therapeutics. The value and utility of patient registries rely on the use of well-constructed structured research questions and the relevant answer sets accompanying them. There are currently no clear standards or specifications for developing registry questions, and there are no banks of existing questions to support registry developers. This paper introduces the [Rare Disease] PRISM (Patient Registry Item Specifications and Metadata for Rare Disease) project, a library of standardized questions covering a broad spectrum of rare diseases that can be used to support the development of new registries, including Internet-based registries. A convenience sample of questions was identified from well-established (>5 years) natural history studies in various diseases and from several existing registries. Face validity of the questions was determined by expert review (by terminology experts at the College of American Pathologists (CAP) and research and informatics experts at the University of South Florida (USF)) for commonality, clarity, and organization. Questions were re-worded slightly, as needed, to make the full semantics of each question clear and to make the questions generalizable to multiple diseases where possible. Questions were indexed with metadata (structured and descriptive information) using a standard metadata framework to record such information as context, format, question asker and responder, and data standards information. At present, PRISM contains over 2,200 questions, with content relevant to virtually all rare diseases. While the inclusion of disease-specific questions for thousands of rare disease organizations seeking to develop registries would present a challenge for traditional standards development organizations, the PRISM library could serve as a platform to liaise between rare disease communities and existing standardized controlled terminologies, item banks, and coding systems. If widely used, PRISM will enable the re-use of questions across registries, reduce variation in registry data collection, and facilitate a bottom-up standardization of patient registries. Although it was initially developed to fulfill an urgent need in the rare disease community for shared resources, the PRISM library of patient-directed registry questions can be a valuable resource for registries in any disease - whether common or rare.
Standardization of Questions in Rare Disease Registries: The PRISM Library Project
Shereff, Denise; Andrews, James Everett
2012-01-01
Background Patient registries are often a helpful first step in estimating the impact and understanding the etiology of rare diseases - both requisites for the development of new diagnostics and therapeutics. The value and utility of patient registries rely on the use of well-constructed structured research questions and the relevant answer sets accompanying them. There are currently no clear standards or specifications for developing registry questions, and there are no banks of existing questions to support registry developers. Objective This paper introduces the [Rare Disease] PRISM (Patient Registry Item Specifications and Metadata for Rare Disease) project, a library of standardized questions covering a broad spectrum of rare diseases that can be used to support the development of new registries, including Internet-based registries. Methods A convenience sample of questions was identified from well-established (>5 years) natural history studies in various diseases and from several existing registries. Face validity of the questions was determined by expert review (by terminology experts at the College of American Pathologists (CAP) and research and informatics experts at the University of South Florida (USF)) for commonality, clarity, and organization. Questions were re-worded slightly, as needed, to make the full semantics of each question clear and to make the questions generalizable to multiple diseases where possible. Questions were indexed with metadata (structured and descriptive information) using a standard metadata framework to record such information as context, format, question asker and responder, and data standards information. Results At present, PRISM contains over 2,200 questions, with content relevant to virtually all rare diseases. While the inclusion of disease-specific questions for thousands of rare disease organizations seeking to develop registries would present a challenge for traditional standards development organizations, the PRISM library could serve as a platform to liaise between rare disease communities and existing standardized controlled terminologies, item banks, and coding systems. Conclusions If widely used, PRISM will enable the re-use of questions across registries, reduce variation in registry data collection, and facilitate a bottom-up standardization of patient registries. Although it was initially developed to fulfill an urgent need in the rare disease community for shared resources, the PRISM library of patient-directed registry questions can be a valuable resource for registries in any disease - whether common or rare. Trial Registration N/A PMID:23611924
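The metadata framework described above (context, format, question asker and responder, data standards) lends itself to a simple structured record. The sketch below is a hypothetical rendering in Python; the field names, example codes and values are assumptions for illustration, not the actual PRISM schema.

    from dataclasses import dataclass, field

    @dataclass
    class RegistryQuestion:
        """One PRISM-style library item: a question plus descriptive metadata."""
        question_id: str
        text: str
        answer_set: list          # permissible values shown to the responder
        context: str              # e.g., "natural history study", "Internet registry"
        response_format: str      # e.g., "single choice", "free text", "date"
        asker: str                # who administers the question
        responder: str            # who answers it (patient, caregiver, clinician)
        standards: dict = field(default_factory=dict)  # mappings to terminologies

    q = RegistryQuestion(
        question_id="PRISM-0001",
        text="At what age were you first diagnosed with this condition?",
        answer_set=["<1 year", "1-5 years", "6-17 years", ">=18 years"],
        context="patient registry enrollment",
        response_format="single choice",
        asker="registry web form",
        responder="patient",
        standards={"LOINC": "hypothetical-code"},
    )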
McMahon, Christiana; Denaxas, Spiros
2017-11-06
Informed consent is an important feature of longitudinal research studies, as it enables the linking of baseline participant information with administrative data. The lack of standardized models to capture consent elements can lead to substantial challenges; a structured approach to capturing consent-related metadata can address these. The aims of this work were to: a) explore the state of the art for recording consent; b) identify key elements of consent required for record linkage; and c) create and evaluate a novel metadata management model to capture consent-related metadata. The main methodological components of our work were: a) a systematic literature review and qualitative analysis of consent forms; and b) the development and evaluation of a novel metadata model. We qualitatively analyzed 61 manuscripts and 30 consent forms and extracted data elements related to obtaining consent for linkage. We created a novel metadata management model for consent and evaluated it by comparison with existing standards and by iteratively applying it to case studies. The developed model can facilitate the standardized recording of consent for linkage in longitudinal research studies and enable the linkage of external participant data. Furthermore, it can provide a structured way of recording consent-related metadata and facilitate the harmonization and streamlining of processes.
Ahumada, Jorge A; Silva, Carlos E F; Gajapersad, Krisna; Hallam, Chris; Hurtado, Johanna; Martin, Emanuel; McWilliam, Alex; Mugerwa, Badru; O'Brien, Tim; Rovero, Francesco; Sheil, Douglas; Spironello, Wilson R; Winarni, Nurul; Andelman, Sandy J
2011-09-27
Terrestrial mammals are a key component of tropical forest communities, serving as indicators of ecosystem health and providers of important ecosystem services. However, there is little quantitative information about how they change in response to local, regional and global threats. In this paper, the first standardized pantropical study of forest terrestrial mammal communities, we examine several aspects of mammal species and community diversity (species richness, species diversity, evenness, dominance, functional diversity and community structure) at seven sites around the globe using a single standardized camera-trapping methodology. The sites, located in Uganda, Tanzania, Indonesia, Lao PDR, Suriname, Brazil and Costa Rica, are surrounded by different landscape configurations, from continuous forests to highly fragmented forests. We obtained more than 51 000 images and detected 105 species of mammals with a total sampling effort of 12 687 camera trap days. We find that mammal communities at highly fragmented sites have lower species richness, species diversity and functional diversity, and higher dominance, when compared with sites in partially fragmented and continuous forest. We emphasize the importance of standardized camera-trapping approaches for obtaining baselines for monitoring forest mammal communities, so as to adequately understand the effects of global, regional and local threats and appropriately inform conservation actions.
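The community metrics named above can be computed directly from per-species detection counts. The following minimal sketch, with invented counts for a continuous and a fragmented site, computes richness, Shannon diversity, Pielou evenness and Berger-Parker dominance; the choice of these particular evenness and dominance indices is an assumption, since the abstract does not specify which were used.

    import numpy as np

    def community_metrics(counts):
        """Richness, Shannon diversity, Pielou evenness and Berger-Parker
        dominance from per-species detection counts at one site."""
        counts = np.asarray([c for c in counts if c > 0], dtype=float)
        p = counts / counts.sum()
        richness = len(counts)
        shannon = -(p * np.log(p)).sum()
        evenness = shannon / np.log(richness) if richness > 1 else 0.0
        dominance = p.max()
        return richness, shannon, evenness, dominance

    continuous = [40, 35, 22, 18, 9, 7, 5, 3, 2, 1]   # illustrative counts only
    fragmented = [70, 12, 4, 2, 1]
    for name, site in [("continuous", continuous), ("fragmented", fragmented)]:
        print(name, community_metrics(site))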
Wlodawer, Alexander; Minor, Wladek; Dauter, Zbigniew; Jaskolski, Mariusz
2013-11-01
The number of macromolecular structures deposited in the Protein Data Bank now approaches 100,000, with the vast majority of them determined by crystallographic methods. Thousands of papers describing such structures have been published in the scientific literature, and 20 Nobel Prizes in chemistry or medicine have been awarded for discoveries based on macromolecular crystallography. New hardware and software tools have made crystallography appear to be an almost routine (but still far from being analytical) technique and many structures are now being determined by scientists with very limited experience in the practical aspects of the field. However, this apparent ease is sometimes illusory and proper procedures need to be followed to maintain high standards of structure quality. In addition, many noncrystallographers may have problems with the critical evaluation and interpretation of structural results published in the scientific literature. The present review provides an outline of the technical aspects of crystallography for less experienced practitioners, as well as information that might be useful for users of macromolecular structures, aiming to show them how to interpret (but not overinterpret) the information present in the coordinate files and in their description. A discussion of the extent of information that can be gleaned from the atomic coordinates of structures solved at different resolution is provided, as well as problems and pitfalls encountered in structure determination and interpretation. © 2013 FEBS.
Wlodawer, Alexander; Minor, Wladek; Dauter, Zbigniew; Jaskolski, Mariusz
2014-01-01
The number of macromolecular structures deposited in the Protein Data Bank now approaches 100 000, with the vast majority of them determined by crystallographic methods. Thousands of papers describing such structures have been published in the scientific literature, and 20 Nobel Prizes in chemistry or medicine have been awarded for discoveries based on macromolecular crystallography. New hardware and software tools have made crystallography appear to be an almost routine (but still far from being analytical) technique and many structures are now being determined by scientists with very limited experience in the practical aspects of the field. However, this apparent ease is sometimes illusory and proper procedures need to be followed to maintain high standards of structure quality. In addition, many noncrystallographers may have problems with the critical evaluation and interpretation of structural results published in the scientific literature. The present review provides an outline of the technical aspects of crystallography for less experienced practitioners, as well as information that might be useful for users of macromolecular structures, aiming to show them how to interpret (but not overinterpret) the information present in the coordinate files and in their description. A discussion of the extent of information that can be gleaned from the atomic coordinates of structures solved at different resolution is provided, as well as problems and pitfalls encountered in structure determination and interpretation. PMID:24034303
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, a random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods such that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressively related in terms of both structure type and uncertainty variables, are presented to demonstrate the computational applicability, accuracy and efficiency of the proposed method.
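A minimal numerical sketch of the idea follows: the interval variable is scanned at Chebyshev nodes while the probabilistic variable is handled statistically, and the extreme bounds of the mean and standard deviation of the first natural frequency are read off across the nodes. The toy 2-DOF system, the use of direct Monte Carlo in place of the paper's perturbation formulas, and all numbers are assumptions for illustration.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)

    def first_frequency(E, alpha):
        """First natural frequency (Hz) of a toy 2-DOF spring-mass chain;
        E is a random stiffness scale, alpha an interval-valued stiffness ratio."""
        k1, k2 = E, alpha * E
        K = np.array([[k1 + k2, -k2], [-k2, k2]])
        M = np.diag([1.0, 1.0])
        lam = eigh(K, M, eigvals_only=True)[0]
        return np.sqrt(lam) / (2 * np.pi)

    # Chebyshev nodes over the interval variable alpha in [0.8, 1.2]
    a_lo, a_hi = 0.8, 1.2
    nodes = 0.5 * (a_lo + a_hi) + 0.5 * (a_hi - a_lo) * np.cos(
        (2 * np.arange(1, 6) - 1) * np.pi / (2 * 5))

    stats = []
    for alpha in nodes:
        E = rng.normal(1000.0, 50.0, size=500)    # probabilistic input
        f = np.array([first_frequency(e, alpha) for e in E])
        stats.append((f.mean(), f.std()))
    means, stds = np.array(stats).T
    print("mean-frequency bounds:", means.min(), means.max())
    print("std-frequency bounds :", stds.min(), stds.max())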
Coggins, L.G.; Pine, William E.; Walters, C.J.; Martell, S.J.D.
2006-01-01
We present a new model to estimate capture probabilities, survival, abundance, and recruitment using traditional Jolly-Seber capture-recapture methods within a standard fisheries virtual population analysis framework. This approach compares the numbers of marked and unmarked fish at age captured in each year of sampling with predictions based on estimated vulnerabilities and abundance in a likelihood function. Recruitment to the earliest age at which fish can be tagged is estimated by using a virtual population analysis method to back-calculate the expected numbers of unmarked fish at risk of capture. By using information from both marked and unmarked animals in a standard fisheries age structure framework, this approach is well suited to the sparse data situations common in long-term capture-recapture programs with variable sampling effort. © Copyright by the American Fisheries Society 2006.
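The back-calculation step can be illustrated with Pope's cohort-analysis approximation, a standard virtual population analysis recursion; whether the authors use exactly this form is not stated in the abstract, and the catch and mortality values below are invented.

    import numpy as np

    def pope_back_calculate(catch_at_age, M, terminal_N):
        """Back-calculate cohort abundance at age from catches using Pope's
        approximation: N_a = N_{a+1} * exp(M) + C_a * exp(M / 2)."""
        N = np.empty(len(catch_at_age) + 1)
        N[-1] = terminal_N
        for a in range(len(catch_at_age) - 1, -1, -1):
            N[a] = N[a + 1] * np.exp(M) + catch_at_age[a] * np.exp(M / 2)
        return N[:-1]

    catches = np.array([120.0, 90.0, 60.0, 30.0])  # illustrative catch at ages 1-4
    print(pope_back_calculate(catches, M=0.2, terminal_N=50.0))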
Military handbook: Metallic materials and elements for aerospace vehicle structures, volume 1
NASA Astrophysics Data System (ADS)
1994-11-01
Since many aerospace companies manufacture both commercial and military products, the standardization of metallic materials design data acceptable to government procuring or certification agencies is very beneficial to those manufacturers as well as to governmental agencies. Although the design requirements for military and commercial products may differ greatly, the required design values for the strength of materials and elements and other needed material characteristics are often identical. The purpose of this publication, therefore, is to provide standardized design values and related design information for metallic materials and structural elements used in aerospace structures. The data contained herein, or from approved items in the minutes of MIL-HDBK-5 coordination meetings, are acceptable to the Air Force, the Navy, the Army, and the Federal Aviation Administration. Approval by the procuring or certificating agency must be obtained for the use of design values for products not contained herein.
NASA Astrophysics Data System (ADS)
Paul, V. J.
2016-02-01
Herbivory is an important process determining the structure and function of marine ecosystems, and this is especially true on coral reefs and in associated tropical and subtropical habitats where grazing by fishes can be intense. As reef degradation is occurring on a global scale, and overfishing can contribute to this problem, rates of herbivory can be an important indicator of reef function and resilience. Our goal was to develop a standardized herbivory assay that can be deployed globally to measure the impact of herbivorous fishes across multiple habitat types. Many tropical and subtropical seaweeds contain chemical and structural defenses that can protect them from herbivores, and this information was key to selecting a range of marine plants that are differentially palatable to herbivorous fishes for these assays. We present method development and experimental results from extensive deployment of these herbivory assays at Carrie Bow Cay, Belize.
Ellwein, L B; Thulasiraj, R D; Boulter, A R; Dhittal, S P
1998-01-01
The financial viability of programme services and product offerings requires that revenue exceeds expenses. Revenue includes payments for services and products as well as donor cash and in-kind contributions. Expenses reflect consumption of purchased or contributed time and materials and utilization (depreciation) of physical plant facilities and equipment. Standard financial reports contain this revenue and expense information, complemented when necessary by valuation and accounting of in-kind contributions. Since financial statements are prepared using consistent and accepted accounting practices, year-to-year and organization-to-organization comparisons can be made. The use of such financial information is illustrated in this article by determining the unit cost of cataract surgery in two hospitals in Nepal. The proportion of unit cost attributed to personnel, medical supplies, administrative materials, and depreciation varied significantly by institution. These variations are accounted for by examining differences in operational structure and capacity utilization.
Ellwein, L. B.; Thulasiraj, R. D.; Boulter, A. R.; Dhittal, S. P.
1998-01-01
The financial viability of programme services and product offerings requires that revenue exceeds expenses. Revenue includes payments for services and products as well as donor cash and in-kind contributions. Expenses reflect consumption of purchased or contributed time and materials and utilization (depreciation) of physical plant facilities and equipment. Standard financial reports contain this revenue and expense information, complemented when necessary by valuation and accounting of in-kind contributions. Since financial statements are prepared using consistent and accepted accounting practices, year-to-year and organization-to-organization comparisons can be made. The use of such financial information is illustrated in this article by determining the unit cost of cataract surgery in two hospitals in Nepal. The proportion of unit cost attributed to personnel, medical supplies, administrative materials, and depreciation varied significantly by institution. These variations are accounted for by examining differences in operational structure and capacity utilization. PMID:9868836
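The unit-cost calculation described in this abstract reduces to a few lines: cash expenses plus valued in-kind contributions plus straight-line depreciation, divided by surgical output. The figures in this sketch are illustrative assumptions, not the hospital data from the study.

    def unit_cost(expenses, in_kind, capital_cost, useful_life_years, surgeries):
        """Unit cost of cataract surgery: cash expenses plus valued in-kind
        contributions plus straight-line depreciation, divided by output."""
        depreciation = capital_cost / useful_life_years
        total = expenses + in_kind + depreciation
        return total / surgeries

    # Illustrative figures only (not from the study).
    print(unit_cost(expenses=180_000, in_kind=20_000,
                    capital_cost=250_000, useful_life_years=10,
                    surgeries=12_000))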
Fire Incident Reporting Manual
1984-02-01
Table of contents (partially recovered): Chapter 1 covers Purpose, Scope, Procedures, Exclusions, Preparation, and Information Requirements; Chapter 2 gives instructions for preparing DoD fire incident reports, including sections on Structure and Fire Data, Fire Protection Facilities (in structures only), Losses, and Times (24-hour clock). Recoverable references include a DoD issuance ("...Activities Program," February 21, 1976) and National Fire Protection Association (NFPA) Standard 901, "Uniform Coding for Fire Protection," 1976, among other NFPA documents.
Worldwide Protein Data Bank validation information: usage and trends.
Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer
2018-03-01
Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.
Worldwide Protein Data Bank validation information: usage and trends
Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika
2018-01-01
Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics. PMID:29533231
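The abstract does not give the wwPDB's actual combination rule, so the sketch below simply shows one plausible way to fold per-metric percentile ranks into a single ranking score: a weighted average. Metric names, weights and numbers are assumptions.

    import numpy as np

    def overall_quality(percentile_ranks, weights=None):
        """Combine per-metric percentile ranks (0-100, higher is better) into
        one score for ranking structures in search results."""
        ranks = np.asarray(percentile_ranks, dtype=float)
        w = np.ones_like(ranks) if weights is None else np.asarray(weights, float)
        return float((w * ranks).sum() / w.sum())

    # e.g. clashscore, Ramachandran outliers, sidechain outliers, RSRZ outliers
    entry_a = [92, 88, 75, 60]
    entry_b = [55, 70, 66, 80]
    ranked = sorted({"A": entry_a, "B": entry_b}.items(),
                    key=lambda kv: overall_quality(kv[1]), reverse=True)
    print([(pdb_id, overall_quality(r)) for pdb_id, r in ranked])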
Magnetic resonance imaging based functional imaging in paediatric oncology.
Manias, Karen A; Gill, Simrandip K; MacPherson, Lesley; Foster, Katharine; Oates, Adam; Peet, Andrew C
2017-02-01
Imaging is central to management of solid tumours in children. Conventional magnetic resonance imaging (MRI) is the standard imaging modality for tumours of the central nervous system (CNS) and limbs and is increasingly used in the abdomen. It provides excellent structural detail, but imparts limited information about tumour type, aggressiveness, metastatic potential or early treatment response. MRI based functional imaging techniques, such as magnetic resonance spectroscopy, diffusion and perfusion weighted imaging, probe tissue properties to provide clinically important information about metabolites, structure and blood flow. This review describes the role of and evidence behind these functional imaging techniques in paediatric oncology and implications for integrating them into routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.
1987-01-01
This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
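A uniform grid is one of the simplest data structures for the kind of spatially-indexed storage and retrieval the paper discusses; the sketch below buckets point records by cell so that a range query touches only nearby cells. The cell size and records are illustrative.

    from collections import defaultdict

    class GridIndex:
        """Uniform-grid spatial index: buckets point records by cell so that
        range queries only touch nearby cells."""
        def __init__(self, cell_size):
            self.cell = cell_size
            self.buckets = defaultdict(list)

        def _key(self, x, y):
            return (int(x // self.cell), int(y // self.cell))

        def insert(self, x, y, record):
            self.buckets[self._key(x, y)].append((x, y, record))

        def query(self, xmin, ymin, xmax, ymax):
            i0, j0 = self._key(xmin, ymin)
            i1, j1 = self._key(xmax, ymax)
            for i in range(i0, i1 + 1):
                for j in range(j0, j1 + 1):
                    for x, y, rec in self.buckets.get((i, j), []):
                        if xmin <= x <= xmax and ymin <= y <= ymax:
                            yield rec

    idx = GridIndex(cell_size=10.0)
    idx.insert(34.4, -119.8, "site A")
    idx.insert(35.0, -120.1, "site B")
    print(list(idx.query(34.0, -121.0, 35.5, -119.0)))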
Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E
2011-01-01
Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
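The EVPI computation for an averaged model can be sketched generically: draw a structural model according to its posterior weight, draw net benefits under that model, and apply EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)]. The weights and distributions below are toy assumptions, not the asthma example.

    import numpy as np

    rng = np.random.default_rng(2)

    def evpi(net_benefit_samples):
        """EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)].

        net_benefit_samples: array (n_samples, n_decisions), each row one
        posterior draw of net benefit for every treatment decision."""
        nb = np.asarray(net_benefit_samples, dtype=float)
        return nb.max(axis=1).mean() - nb.mean(axis=0).max()

    # Model-averaged draws: pick a structural model with its posterior weight,
    # then draw net benefits under that model (toy distributions only).
    weights = [0.7, 0.3]
    draws = []
    for _ in range(10_000):
        m = rng.choice(2, p=weights)
        if m == 0:   # model 0: treatment 1 clearly better
            draws.append(rng.normal([0.0, 500.0], 200.0))
        else:        # model 1: decision is ambiguous
            draws.append(rng.normal([0.0, -100.0], 400.0))
    print("EVPI for the averaged model:", evpi(draws))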
Eysenbach, Gunther; Yihune, Gabriel; Lampe, Kristian; Cross, Phil; Brickley, Dan
2000-01-01
MedCERTAIN (MedPICS Certification and Rating of Trustworthy Health Information on the Net, http://www.medcertain.org/) is a recently launched international project funded under the European Union's (EU) "Action Plan for safer use of the Internet". It provides a technical infrastructure and a conceptual basis for an international system of "quality seals", ratings and self-labelling of Internet health information, with the final aim of establishing a global "trustmark" for networked health information. Digital "quality seals" are evaluative metadata (using standards such as PICS, the Platform for Internet Content Selection, now being replaced by RDF/XML) assigned by trusted third-party raters. The project also enables and encourages self-labelling with descriptive metainformation by web authors. Together these measures will help consumers as well as professionals to identify high-quality information on the Internet. MedCERTAIN establishes a fully functional demonstrator for a self- and third-party rating system enabling consumers and professionals to filter harmful health information and to positively identify and select high-quality information. We aim to provide a trustmark system which allows citizens to place greater confidence in networked information, to encourage health information providers to follow best-practice guidelines such as the Washington eHealth Code of Ethics, to provide effective feedback and law-enforcement channels to handle user complaints, and to stimulate medical societies to develop standards for patient information. The project further proposes and identifies standards for interoperability of rating and description services (such as libraries or national health portals) and fosters a worldwide collaboration to guide consumers to high-quality information on the web.
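Evaluative metadata of the kind described, assigned by a trusted third-party rater, can be expressed as RDF triples. The sketch below uses the rdflib library with a hypothetical label vocabulary; MedCERTAIN's actual schema is not reproduced here.

    from rdflib import Graph, Literal, Namespace, URIRef

    # Hypothetical vocabulary for evaluative labels; MedCERTAIN's actual
    # schema is not reproduced here.
    LBL = Namespace("http://example.org/health-label#")
    g = Graph()
    g.bind("lbl", LBL)

    page = URIRef("http://example.org/some-health-page")
    g.add((page, LBL.ratedBy, URIRef("http://example.org/trusted-rater-42")))
    g.add((page, LBL.disclosesAuthorship, Literal(True)))
    g.add((page, LBL.evidenceLevel, Literal("peer-reviewed")))

    print(g.serialize(format="turtle"))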
State-of-the-art Anonymization of Medical Records Using an Iterative Machine Learning Framework
Szarvas, György; Farkas, Richárd; Busa-Fekete, Róbert
2007-01-01
Objective The anonymization of medical records is of great importance in the human life sciences, because a de-identified text can be made publicly available to non-hospital researchers as well, to facilitate research on human diseases. Here the authors have developed a de-identification model that can successfully remove personal health information (PHI) from discharge records to make them conform to the guidelines of the Health Insurance Portability and Accountability Act (HIPAA). Design We introduce here a novel, machine learning-based iterative Named Entity Recognition approach intended for use on semi-structured documents like discharge records. Our method identifies PHI in several steps. First, it labels all entities whose tags can be inferred from the structure of the text, and it then utilizes this information to find further PHI phrases in the flow-text parts of the document. Measurements Following the standard evaluation method of the first Workshop on Challenges in Natural Language Processing for Clinical Data, we used token-level Precision, Recall and Fβ=1 measure metrics for evaluation. Results Our system achieved outstanding accuracy on the standard evaluation dataset of the de-identification challenge, with an F measure of 99.7534% for the best submitted model. Conclusion Our system is competitive with the current state-of-the-art solutions, and we describe here several techniques that can be beneficial in other tasks that need to handle structured documents such as clinical records. PMID:17823086
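The structured-first, flow-text-second strategy can be illustrated in a few lines: harvest PHI from labelled header fields, then reuse those strings to scrub the narrative. This regex sketch is a toy stand-in for the machine-learned NER system the paper describes; the field labels are assumptions.

    import re

    def deidentify(record):
        """Two-pass sketch: (1) harvest PHI from structured header lines,
        (2) reuse the harvested names to scrub the free-text narrative."""
        phi = set()
        structured = re.findall(r"^(?:Patient|Physician):\s*(.+)$",
                                record, flags=re.MULTILINE)
        for name in structured:
            phi.add(name.strip())
        scrubbed = record
        for name in phi:
            scrubbed = scrubbed.replace(name, "[PHI]")
        return scrubbed

    note = """Patient: Jane Doe
    Physician: John Roe
    Narrative: Jane Doe was admitted; Dr. John Roe reviewed the labs."""
    print(deidentify(note))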
NASA Astrophysics Data System (ADS)
Fish, Philip E.
1995-05-01
In 1978, the Wisconsin Department of Transportation discovered major cracking on a two-girder, fracture-critical structure just four years after it was constructed. In 1981, on the same structure, now seven years old, major cracking was discovered in the tie girder flange of the tied arch span. This is one example of the type of failures that transportation departments discovered on welded structures in the 1970s and '80s. The failures from welded details and pinned connections led to much stricter standards for present-day designs. All areas were affected: design, with identification of fatigue-prone details and classification of fatigue categories; material requirements, with emphasis on toughness and weldability; increased welding and fabrication standards, with licensure of fabrication shops to minimum quality standards including personnel; and an increased effort on inspection of existing bridges, where critical details were overlooked or missed in the past. FHWA inspection requirements for existing structures increased through this same time period, in reaction to the failures that had occurred. Obviously, many structures in Wisconsin were not built to the standards now required, hence the importance of quality inspection techniques. The new FHWA inspection requirements now being implemented throughout the nation require an in-depth, hands-on inspection at a specified frequency on all fracture-critical structures. The Wisconsin Department of Transportation started an in-depth inspection program in 1985 and made it a full-time program in 1987. This program included extensive nondestructive testing, and ultrasonic inspection has played a major role in this type of inspection. All fracture-critical structures, pin-and-hanger systems, and pinned connections are now inspected on a five-year cycle. The program requires an experienced inspection team and a practical inspection approach. Extensive preparation is required, with review of all design, construction, and maintenance documents. An inspection plan is developed from the review and downloaded to a laptop computer. Inspection emphasis is on hands-on visual and nondestructive evaluation. Report documentation includes all design plans, pictorial documentation of structural deficiencies, nondestructive evaluation reports, conclusions, and recommendations. Planned changes in the program include implementation of an engineering workstation as a 'single source' information file and reporting file for the inspection program. This would include scanning all current information into the file, such as design, construction, and maintenance history. It would also include all inspection data with pictures. Inspections would be performed by downloading data onto a laptop and then uploading after completion of the inspection. Pictures and nondestructive data would be entered from digital disks.
A Stochastic Evolutionary Model for Protein Structure Alignment and Phylogeny
Challis, Christopher J.; Schmidler, Scott C.
2012-01-01
We present a stochastic process model for the joint evolution of protein primary and tertiary structure, suitable for use in alignment and estimation of phylogeny. Indels arise from a classic Links model, and mutations follow a standard substitution matrix, whereas backbone atoms diffuse in three-dimensional space according to an Ornstein–Uhlenbeck process. The model allows for simultaneous estimation of evolutionary distances, indel rates, structural drift rates, and alignments, while fully accounting for uncertainty. The inclusion of structural information enables phylogenetic inference on time scales not previously attainable with sequence evolution models. The model also provides a tool for testing evolutionary hypotheses and improving our understanding of protein structural evolution. PMID:22723302
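The backbone diffusion component can be sketched with the exact discretization of an Ornstein-Uhlenbeck process, X(t+dt) = mu + e^(-theta*dt) (X(t) - mu) + Gaussian noise; the parameter values below are illustrative assumptions, and this is only the structural-drift piece of the full joint model.

    import numpy as np

    rng = np.random.default_rng(3)

    def ou_path(x0, mu, theta, sigma, dt, n_steps):
        """Exact-discretization sample path of an Ornstein-Uhlenbeck process,
        dX = theta * (mu - X) dt + sigma dW, applied per coordinate."""
        x = np.empty((n_steps + 1,) + np.shape(x0))
        x[0] = x0
        a = np.exp(-theta * dt)
        sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
        for t in range(n_steps):
            x[t + 1] = mu + a * (x[t] - mu) + sd * rng.standard_normal(np.shape(x0))
        return x

    # Drift of one C-alpha atom (3D) about its ancestral position over time.
    path = ou_path(x0=np.zeros(3), mu=np.zeros(3),
                   theta=0.5, sigma=1.0, dt=0.1, n_steps=100)
    print(path[-1])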
An Extensible Information Grid for Risk Management
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David G.
2003-01-01
This paper describes recent work on developing an extensible information grid for risk management at NASA - a RISK INFORMATION GRID. This grid is being developed by integrating information grid technology with risk management processes for a variety of risk related applications. To date, RISK GRID applications are being developed for three main NASA processes: risk management - a closed-loop iterative process for explicit risk management, program/project management - a proactive process that includes risk management, and mishap management - a feedback loop for learning from historical risks that escaped other processes. This is enabled through an architecture involving an extensible database, structuring information with XML, schemaless mapping of XML, and secure server-mediated communication using standard protocols.
Kim, J H; Ferziger, R; Kawaloff, H B; Sands, D Z; Safran, C; Slack, W V
2001-01-01
Even the most extensive hospital information system cannot support all the complex and ever-changing demands associated with a clinical database, such as providing department or personal data forms, and rating scales. Well-designed clinical dialogue programs may facilitate direct interaction of patients with their medical records. Incorporation of extensive and loosely structured clinical data into an existing medical record system is an essential step towards a comprehensive clinical information system, and can best be achieved when the practitioner and the patient directly enter the contents. We have developed a rapid prototyping and clinical conversational system that complements the electronic medical record system, with its generic data structure and standard communication interfaces based on Web technology. We believe our approach can enhance collaboration between consumer-oriented and provider-oriented information systems.
Automatic classification of protein structures relying on similarities between alignments
2012-01-01
Background Identification of protein structural cores requires isolation of sets of proteins all sharing a same subset of structural motifs. In the context of an ever-growing number of available 3D protein structures, standard and automatic clustering algorithms require adaptations so as to allow for efficient identification of such sets of proteins. Results When considering a pair of 3D structures, they are stated as similar or not according to the local similarities of their matching substructures in a structural alignment. This binary relation can be represented in a graph of similarities where a node represents a 3D protein structure and an edge states that two 3D protein structures are similar. Therefore, classifying proteins into structural families can be viewed as a graph clustering task. Unfortunately, because such a graph encodes only pairwise similarity information, clustering algorithms may include in the same cluster a subset of 3D structures that do not share a common substructure. In order to overcome this drawback we first define a ternary similarity on a triple of 3D structures as a constraint to be satisfied by the graph of similarities. Such a ternary constraint takes into account similarities between pairwise alignments, so as to ensure that the three involved protein structures do have some common substructure. We propose hereunder a modification algorithm that eliminates edges from the original graph of similarities and gives a reduced graph in which no ternary constraints are violated. Our approach is thus first to build a graph of similarities, then to reduce the graph according to the modification algorithm, and finally to apply a standard graph clustering algorithm to the reduced graph. This method was used for classifying ASTRAL-40 non-redundant protein domains, identifying significant pairwise similarities with Yakusa, a program devised for rapid 3D structure alignments. Conclusions We show that filtering similarities prior to the standard graph-based clustering process by applying ternary similarity constraints i) improves the separation of proteins of different classes and consequently ii) improves the classification quality of standard graph-based clustering algorithms according to the reference classification SCOP. PMID:22974051
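The reduction step can be sketched as follows: repeatedly remove an edge of the similarity graph whenever some triangle through it violates the ternary constraint, then read clusters off the reduced graph. The predicate standing in for "the three pairwise alignments share a common substructure", the choice of which edge to drop, and the tiny graph are assumptions; connected components are used here in place of a full graph clustering algorithm.

    import networkx as nx

    def reduce_graph(G, ternary_ok):
        """Drop edges of the similarity graph until every triangle satisfies
        the ternary constraint; clusters are then taken from the reduced graph."""
        changed = True
        while changed:
            changed = False
            for a, b in list(G.edges()):
                common = set(G[a]) & set(G[b])
                if any(not ternary_ok(a, b, c) for c in common):
                    G.remove_edge(a, b)   # heuristic: drop the edge under test
                    changed = True
        return list(nx.connected_components(G))

    # Toy predicate standing in for "the three pairwise alignments share a
    # common substructure"; the real test compares structural alignments.
    def ternary_ok(a, b, c):
        return {a, b, c} != {"p1", "p2", "p4"}

    G = nx.Graph([("p1", "p2"), ("p2", "p3"), ("p1", "p3"),
                  ("p2", "p4"), ("p1", "p4")])
    print(reduce_graph(G, ternary_ok))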
Progress on an implementation of MIFlowCyt in XML
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Leif, Stephanie H.
2015-03-01
Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been, and will continue to be, extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also result in the textual and numeric data being published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with the combination of HTML and CSS.
TRENCADIS--a WSRF grid MiddleWare for managing DICOM structured reporting objects.
Blanquer, Ignacio; Hernandez, Vicente; Segrelles, Damià
2006-01-01
The adoption of digital processing of medical data, especially in radiology, has led to the availability of millions of records (images and reports). However, this information is mainly used at the patient level and is organised according to administrative criteria, which makes the extraction of knowledge difficult. Moreover, legal constraints make the direct integration of information systems complex or even impossible. On the other side, the widespread adoption of the DICOM format has led to the inclusion of information beyond just radiological images. The possibility of coding radiology reports in a structured form, adding semantic information about the data contained in the DICOM objects, eases the process of structuring images according to content. DICOM Structured Reporting (DICOM-SR) is a specification of tags and sections to code and integrate radiology reports, with seamless references to findings and regions of interest of the associated images, movies, waveforms, signals, etc. The work presented in this paper aims at developing a framework to efficiently and securely share medical images and radiology reports, as well as to provide high-throughput processing services. This system is based on an architecture previously developed in the framework of the TRENCADIS project, and uses other components, such as the security system and the Grid processing service, developed in previous activities. The work presented here introduces a semantic structuring and an ontology framework to organise medical images considering standard terminology and disease coding formats (SNOMED, ICD-9, LOINC, etc.).
On the Structure of Earth Science Data Collections
NASA Astrophysics Data System (ADS)
Barkstrom, B. R.
2009-12-01
While there has been substantial work in the IT community regarding metadata and file identifier schemas, there appears to be relatively little work on the organization of the file collections that constitute the preponderance of Earth science data. One symptom of this difficulty appears in nomenclature describing collections: the terms 'Data Product,' 'Data Set,' and 'Version' are overlaid with multiple meanings between communities. A particularly important aspect of this lack of standardization appears when the community attempts to develop a schema for data file identifiers. There are four candidate families of identifiers:
● Randomly assigned identifiers, such as GUIDs or UUIDs,
● Segmented numerical identifiers, such as OIDs or the prefixes for DOIs,
● Extensible URL-based identifiers, such as URNs, PURL, ARK, and similar schemas,
● Text-based identifiers based on citations for papers and books, such as those suggested for the International Polar Year (IPY) citations.
Unfortunately, these schema families appear to be devoid of content based on the actual structures of Earth science data collections. In this paper, we consider an organization based on an industrial production paradigm that appears to provide the preponderance of Earth science data from satellites and in situ observations. This paradigm produces a hierarchical collection structure, similar to one discussed in Barkstrom [2003: Lecture Notes in Computer Science, 2649, pp. 118-133]. In this organization, three key collection types are
● a Data Product, which is a collection of files that have similar key parameters and included data time interval,
● a Data Set, which is a collection of files within a Data Product that comes from a specified set of Data Sources,
● a Data Set Version, which is a collection of files within a Data Set for which the data producer has attempted to ensure error homogeneity.
Within a Data Set Version, files appear as a time series of instances that may be identified by the starting time of the data in the file. For data intended for climate uses, it seems appropriate to state this time in terms of the Astronomical Julian Date, which is a long-standing international standard that provides continuity between current observations and paleo-climatic observations. Because this collection structure is hierarchical, it could be used by either of the two hierarchical identifier schema families, although it is probably easier to use with the OID/DOI family. This hierarchical collection structure fits into the hierarchical structure of Archival Information Packages (AIPs) identified in the Open Archival Information System (OAIS) Reference Model. In that model, AIPs are subdivided into Archival Information Units (AIUs), which describe individual files, or Archival Information Collections (AICs). The latter can be hierarchically nested, leading to an OAIS RM-consistent collection structure that does not appear clearly in other metadata standards. This paper will also discuss the connection between these collection categories and other metadata, as well as the possible need for other organizational schemas to capture the full range of Earth science data collection structures.
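The Data Product / Data Set / Data Set Version hierarchy maps naturally onto nested data structures, with files keyed by their data start time in Julian Date. The sketch below is one hypothetical rendering; the field names and sample file are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class DataSetVersion:
        """Files for which the producer attempted error homogeneity,
        ordered as a time series keyed by data start time (Julian Date)."""
        version_id: str
        files: dict = field(default_factory=dict)   # start_jd -> file name

    @dataclass
    class DataSet:
        """Files within a Data Product coming from specified Data Sources."""
        sources: tuple
        versions: list = field(default_factory=list)

    @dataclass
    class DataProduct:
        """Files sharing key parameters and included data time interval."""
        name: str
        datasets: list = field(default_factory=list)

    v = DataSetVersion("Edition-3")
    v.files[2451545.0] = "flux_2000-01-01.nc"       # illustrative only
    product = DataProduct("TOA_Fluxes", [DataSet(("Satellite-A",), [v])])
    print(product.name, product.datasets[0].versions[0].files)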
BEGINNING TAGALOG, A COURSE FOR SPEAKERS OF ENGLISH.
ERIC Educational Resources Information Center
BOWEN, J. DONALD; AND OTHERS
The two major goals of this comprehensive course are: first, an oral control of Tagalog and sufficient mastery of its structure to continue independent study, and second, presentation of up-to-date information about the social customs, standards, values, and aspirations of the Filipino people so that the language learner may participate fully in…
Avionics Technology Contract Project Report Phase I with Research Findings.
ERIC Educational Resources Information Center
Sappe', Hoyt; Squires, Shiela S.
This document reports on Phase I of a project that examined the occupation of avionics technician, established appropriate committees, and conducted task verification. Results of this phase provide the basic information required to develop the program standards and to guide and set up the committee structure to guide the project. Section 1…
Structure of Pine Stands in the Southeast
William A. Bechtold; Gregory A. Ruark
1988-01-01
Distributional and statistical information associated with stand age, site index, basal area per acre, number of stems per acre, and stand density index is reported for major pine cover types of the Southeastern United States. Means, standard deviations, and ranges of these variables are listed by State and physiographic region for loblolly, slash, longleaf, pond,...
Structure and Form. Elementary Science Activity Series, Volume 2.
ERIC Educational Resources Information Center
Blackwell, Frank F.
This book is number 2 of a series of elementary science books that presents a wealth of ideas for science activities for the elementary school teacher. Each activity includes a standard set of information designed to help teachers determine the activity's appropriateness for their students, plan its implementation, and help children focus on a…
An Experimental Investigation of Complexity in Database Query Formulation Tasks
ERIC Educational Resources Information Center
Casterella, Gretchen Irwin; Vijayasarathy, Leo
2013-01-01
Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…
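A small example of the kind of query-formulation task studied: an aggregate SQL query over a two-table schema, run here against an in-memory SQLite database. The schema and data are invented for illustration.

    import sqlite3

    # A small in-memory schema and a query of the kind such tasks pose.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customers(id),
                             total REAL);
        INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
        INSERT INTO orders VALUES (1, 1, 250.0), (2, 1, 120.0), (3, 2, 80.0);
    """)
    query = """
        SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.total) AS revenue
        FROM customers AS c
        JOIN orders AS o ON o.customer_id = c.id
        GROUP BY c.name
        HAVING SUM(o.total) > 100
        ORDER BY revenue DESC;
    """
    for row in conn.execute(query):
        print(row)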
PolySac3DB: an annotated data base of 3 dimensional structures of polysaccharides.
Sarkar, Anita; Pérez, Serge
2012-11-14
Polysaccharides are ubiquitously present in the living world. Their structural versatility makes them important and interesting components in numerous biological and technological processes ranging from structural stabilization to a variety of immunologically important molecular recognition events. The knowledge of polysaccharide three-dimensional (3D) structure is important in studying carbohydrate-mediated host-pathogen interactions, interactions with other bio-macromolecules, drug design and vaccine development as well as material science applications or production of bio-ethanol. PolySac3DB is an annotated database that contains the 3D structural information of 157 polysaccharide entries that have been collected from an extensive screening of scientific literature. They have been systematically organized using standard names in the field of carbohydrate research into 18 categories representing polysaccharide families. Structure-related information includes the saccharides making up the repeat unit(s) and their glycosidic linkages, the expanded 3D representation of the repeat unit, unit cell dimensions and space group, helix type, diffraction diagram(s) (when applicable), experimental and/or simulation methods used for structure description, link to the abstract of the publication, reference and the atomic coordinate files for visualization and download. The database is accompanied by a user-friendly graphical user interface (GUI). It features interactive displays of polysaccharide structures and customized search options for beginners and experts, respectively. The site also serves as an information portal for polysaccharide structure determination techniques. The web-interface also references external links where other carbohydrate-related resources are available. PolySac3DB is established to maintain information on the detailed 3D structures of polysaccharides. All the data and features are available via the web-interface utilizing the search engine and can be accessed at http://polysac3db.cermav.cnrs.fr.
Development of a Novel Method for Determination of Residual Stresses in a Friction Stir Weld
NASA Technical Reports Server (NTRS)
Reynolds, Anthony P.
2001-01-01
Material constitutive properties, which describe the mechanical behavior of a material under loading, are vital to the design and implementation of engineering materials. For homogeneous materials, the standard process for determining these properties is the tensile test, which is used to measure the material stress-strain response. However, a majority of the applications for engineering materials involve the use of heterogeneous materials and structures (e.g., alloys, welded components) that exhibit heterogeneity on a global or local level. Regardless of the scale of heterogeneity, the overall response of the material or structure is dependent on the response of each of the constituents. Therefore, in order to produce materials and structures that perform in the best possible manner, the properties of the constituents that make up the heterogeneous material must be thoroughly examined. When materials exhibit heterogeneity on a local level, such as in alloys or particle/matrix composites, they are often treated as statistically homogeneous and the resulting 'effective' properties may be determined through homogenization techniques. In the case of globally heterogeneous materials, such as weldments, the standard tensile test provides the global response but no information on what is occurring locally within the different constituents. This information is necessary to improve the material processing as well as the end product.
Newly Enacted Intent Changes to ADS-B MASPS: Emphasis on Operations, Compatibility, and Integrity
NASA Technical Reports Server (NTRS)
Barhydt, Richard; Warren, Anthony W.
2002-01-01
Significant changes to the intent reporting structure in the Minimum Aviation System Performance Standards (MASPS) for Automatic Dependent Surveillance Broadcast (ADS-B) have recently been approved by RTCA Special Committee 186. The re-structured intent formats incorporate two major changes to the current MASPS (DO-242): addition of a Target State (TS) report that provides information on the horizontal and vertical targets for the current flight segment and replacement of the current Trajectory Change Point (TCP) and TCP+1 reports with Trajectory Change (TC) reports. TC reports include expanded information about TCPs and their connecting flight segments, in addition to making provisions for trajectory conformance elements. New intent elements are designed to accommodate a greater range of intent information, better reflect operational use and capabilities of existing and future aircraft avionics, and aid trajectory synthesis and conformance monitoring systems. These elements are expected to benefit near-term and future Air Traffic Management (ATM) applications, including separation assurance, local traffic flow management, and conformance monitoring. The current MASPS revision (DO-242A) implements those intent elements that are supported by current avionics standards and data buses. Additional elements are provisioned for inclusion in future MASPS revisions (beyond DO-242A) as avionics systems are evolved.
An entity-relation model for a toco-gynaecology service.
Dono, L; Ríos, F; Montesino, I; Gonzalez, O
1992-01-01
There is a trend, both at national and European levels, towards the use of standards in hardware, software and communications. This implies a common root for the design of unified databases, which will result in the homogeneity and interchangeability of data and knowledge. The technological solution exists and is imposed. The problem, then, is not the tools to be used as information support but how this information is organized and structured. According to the standards proposed by the Plan de Dotación Informática para la Asistencia Sanitaria (DIAS), and with respect to the problem of computer coverage of specialties, we present in this work an entity-relation model for a generic service within our project of integrally computerizing a toco-gynaecology area. It is a multidisciplinary effort between doctors and physicists in which we use a planning strategy to identify the functions, processes and activities, making the traditional clinical management structure compatible with the one proposed. This means that if we want the introduction of the system to be effective, we have to make all the clinical and sanitary staff participate in the project. We must induce the need for change and carry out the process of change gradually, so that only a change in the support, and not in the organization, is initially perceived.
Yokochi, Masashi; Kobayashi, Naohiro; Ulrich, Eldon L; Kinjo, Akira R; Iwata, Takeshi; Ioannidis, Yannis E; Livny, Miron; Markley, John L; Nakamura, Haruki; Kojima, Chojiro; Fujiwara, Toshimichi
2016-05-05
The nuclear magnetic resonance (NMR) spectroscopic data for biological macromolecules archived at the BioMagResBank (BMRB) provide a rich resource of biophysical information at atomic resolution. The NMR data archived in NMR-STAR ASCII format have been implemented in a relational database. However, it is still fairly difficult for users to retrieve data from the NMR-STAR files or the relational database in association with data from other biological databases. To enhance the interoperability of the BMRB database, we present a full conversion of BMRB entries to two standard structured data formats, XML and RDF, as common open representations of the NMR-STAR data. Moreover, a SPARQL endpoint has been deployed. The described case study demonstrates that a simple query of the SPARQL endpoints of the BMRB, UniProt, and Online Mendelian Inheritance in Man (OMIM) can be used in NMR- and structure-based analysis of proteins combined with information on single nucleotide polymorphisms (SNPs) and their phenotypes. We have developed BMRB/XML and BMRB/RDF and demonstrate their use in performing a federated SPARQL query linking the BMRB to other databases through standard semantic web technologies. This will facilitate data exchange across diverse information resources.
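A federated lookup of the kind described can be sketched with a single SPARQL query; the endpoint URL and predicate names below are placeholders, since the paper's actual vocabulary is not reproduced here.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Endpoint URL and predicate names are placeholders; the paper describes
    # a BMRB SPARQL endpoint but its exact vocabulary is not reproduced here.
    sparql = SPARQLWrapper("https://bmrbpub.example.org/sparql")
    sparql.setQuery("""
        PREFIX bmrb: <http://example.org/bmrb#>
        SELECT ?entry ?shift
        WHERE {
            ?entry bmrb:hasChemicalShift ?shift .
        }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["entry"]["value"], binding["shift"]["value"])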
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z; we then define I = E[Z(X, W) Z(X, W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its costlier complexity is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
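A minimal sketch of the update this defines: estimate grad = E[Z] and I = E[Z Z^T] from a batch of samples of Z, then move along I^{-1} grad. The ridge term and all sample values are assumptions for illustration.

    import numpy as np

    def natural_gradient_step(W, Z_samples, lr=0.1, ridge=1e-6):
        """One 'natural-like' update: grad = E[Z], I = E[Z Z^T], then
        W <- W - lr * I^{-1} grad (the ridge keeps I invertible)."""
        Z = np.asarray(Z_samples)               # shape (n_samples, n_weights)
        grad = Z.mean(axis=0)
        I = (Z.T @ Z) / len(Z) + ridge * np.eye(Z.shape[1])
        return W - lr * np.linalg.solve(I, grad)

    rng = np.random.default_rng(4)
    W = np.zeros(3)
    Z = rng.normal(loc=[0.5, -0.2, 0.1], scale=1.0, size=(256, 3))
    print(natural_gradient_step(W, Z))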
Monotone Approximation for a Nonlinear Size and Class Age Structured Epidemic Model
2006-02-22
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet-based control network implementations. AICS utilizes collaborative mechanisms based on autonomic computing research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards-based, flexible two-level communication structure.
Evolution of an Intelligent Information Fusion System
NASA Technical Reports Server (NTRS)
Campbell, William J.; Cromp, Robert F.
1990-01-01
Consideration is given to the hardware and software needed to manage the enormous amount and complexity of data that the next generation of space-borne sensors will provide. An anthology is presented illustrating the evolution of artificial intelligence, science data processing, and management from the 1960s to the near future. Problems and limitations of technologies, data structures, data standards, and conceptual thinking are addressed. The development of an end-to-end Intelligent Information Fusion System that embodies knowledge of the user's domain-specific goals is proposed.
Nemesis I: Parallel Enhancements to ExodusII
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennigan, Gary L.; John, Matthew S.; Shadid, John N.
2006-03-28
NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto the parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, standard EXODUS II applications can still be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.
Virtual file system on NoSQL for processing high volumes of HL7 messages.
Kimura, Eizen; Ishihara, Ken
2015-01-01
The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages that depend on a local file system. However, its scalability is limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
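A minimal sketch of the idea on MongoDB's GridFS follows; the patient-ID/date/message-type path convention mirrors SS-MIX's directory layout but is an illustrative assumption, not the specification:

```python
# A minimal sketch of an SS-MIX-like virtual file layout on MongoDB GridFS.
# The path convention below is an illustrative assumption, not the spec.
import gridfs
from pymongo import MongoClient

db = MongoClient()["ssmix"]
fs = gridfs.GridFS(db)

def put_hl7(universal_pid: str, date: str, msg_type: str, raw: bytes):
    """Store a raw HL7 v2 message under a virtual path keyed by a
    universal patient ID that unifies local IDs from different systems."""
    return fs.put(raw, filename=f"{universal_pid}/{date}/{msg_type}")

def get_hl7(universal_pid: str, date: str, msg_type: str) -> bytes:
    return fs.get_last_version(
        filename=f"{universal_pid}/{date}/{msg_type}").read()
```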
Erlenwein, J; Hinz, J; Meißner, W; Stamer, U; Bauer, M; Petzke, F
2015-07-01
Due to the implementation of the diagnosis-related groups (DRG) system, the competitive pressure on German hospitals increased. In this context it has been shown that acute pain management offers economic benefits for hospitals. The aim of this study was to analyze the impact of the competitive situation, the ownership and the economic resources required on structures and processes for acute pain management. A standardized questionnaire on structures and processes of acute pain management was mailed to the 885 directors of German departments of anesthesiology listed as members of the German Society of Anesthesiology and Intensive Care Medicine (DGAI, Deutsche Gesellschaft für Anästhesiologie und Intensivmedizin). For most hospitals a strong regional competition existed; however, this parameter affected neither the implementation of structures nor the recommended treatment processes for pain therapy. In contrast, a clear preference for hospitals in private ownership to use the benchmarking tool QUIPS (quality improvement in postoperative pain therapy) was found. These hospitals also presented information on coping with the management of pain in the corporate clinic mission statement more often and published information about the quality of acute pain management in the quality reports more frequently. No differences were found between hospitals with different forms of ownership in the implementation of acute pain services, quality circles, expert standard pain management and the implementation of recommended processes. Hospitals with a higher case mix index (CMI) had a certified acute pain management more often. The corporate mission statement of these hospitals also contained information on how to cope with pain, presentation of the quality of pain management in the quality report, implementation of quality circles and the implementation of the expert standard pain management more frequently. There were no differences in the frequency of using the benchmarking tool QUIPS or the implementation of recommended treatment processes with respect to the CMI. In this survey no effect of the competitive situation of hospitals on acute pain management could be demonstrated. Private ownership and a higher CMI were more often associated with structures of acute pain management which were publicly accessible in terms of hospital marketing.
Predictors and Effects of Knowledge Management in U.S. Colleges and Schools of Pharmacy
NASA Astrophysics Data System (ADS)
Watcharadamrongkun, Suntaree
Public demands for accountability in higher education have placed increasing pressure on institutions to document their achievement of critical outcomes. These demands also have had wide-reaching implications for the development and enforcement of accreditation standards, including those governing pharmacy education. The knowledge management (KM) framework provides perspective for understanding how organizations evaluate themselves and guidance for how to improve their performance. In this study, we explore knowledge management processes, how these processes are affected by organizational structure and by information technology resources, and how these processes affect organizational performance. This is done in the context of Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree (Standards 2007). Data were collected using an online census survey of 121 U.S. Colleges and Schools of Pharmacy and supplemented with archival data. A key informant method was used with CEO Deans and Assessment leaders serving as respondents. The survey yielded a 76.0% (92/121) response rate. Exploratory factor analysis was used to construct scales describing core KM processes: Knowledge Acquisition, Knowledge Integration, and Institutionalization; all scale reliabilities were found to be acceptable. Analysis showed that, as expected, greater Knowledge Acquisition predicts greater Knowledge Integration and greater Knowledge Integration predicts greater Institutionalization. Predictive models were constructed using hierarchical multiple regression and path analysis. Overall, information technology resources had stronger effects on KM processes than did characteristics of organizational structure. Greater Institutionalization predicted better outcomes related to direct measures of performance (i.e., NAPLEX pass rates, accreditation actions) but Institutionalization was unrelated to an indirect measure of performance (i.e., USNWR ratings). Several organizational structure characteristics (i.e., size, age, and being part of an academic health center) were significant predictors of organizational performance; in contrast, IT resources had no direct effects on performance. Findings suggest that knowledge management processes, organizational structures and IT resources are related to better performance for Colleges and Schools of Pharmacy. Further research is needed to understand mechanisms through which specific knowledge management processes translate into better performance and, relatedly, to establish how enhancing KM processes can be used to improve institutional quality.
Inference of Population Structure using Dense Haplotype Data
Lawson, Daniel John; Hellenthal, Garrett
2012-01-01
The advent of genome-wide dense variation data provides an opportunity to investigate ancestry in unprecedented detail, but presents new statistical challenges. We propose a novel inference framework that aims to efficiently capture information on population structure provided by patterns of haplotype similarity. Each individual in a sample is considered in turn as a recipient, whose chromosomes are reconstructed using chunks of DNA donated by the other individuals. Results of this “chromosome painting” can be summarized as a “coancestry matrix,” which directly reveals key information about ancestral relationships among individuals. If markers are viewed as independent, we show that this matrix almost completely captures the information used by both standard Principal Components Analysis (PCA) and model-based approaches such as STRUCTURE in a unified manner. Furthermore, when markers are in linkage disequilibrium, the matrix combines information across successive markers to increase the ability to discern fine-scale population structure using PCA. In parallel, we have developed an efficient model-based approach to identify discrete populations using this matrix, which offers advantages over PCA in terms of interpretability and over existing clustering algorithms in terms of speed, number of separable populations, and sensitivity to subtle population structure. We analyse Human Genome Diversity Panel data for 938 individuals and 641,000 markers, and we identify 226 populations reflecting differences on continental, regional, local, and family scales. We present multiple lines of evidence that, while many methods capture similar information among strongly differentiated groups, more subtle population structure in human populations is consistently present at a much finer level than currently available geographic labels and is only captured by the haplotype-based approach. The software used for this article, ChromoPainter and fineSTRUCTURE, is available from http://www.paintmychromosomes.com/. PMID:22291602
Reducing data friction through site-based data curation
NASA Astrophysics Data System (ADS)
Thomer, A.; Palmer, C. L.
2017-12-01
Much of geoscience research takes place at "scientifically significant sites": localities which have attracted a critical mass of scientific interest, and thereby merit protection by government bodies, as well as the preservation of specimen and data collections and the development of site-specific permitting requirements for access to the site and its associated collections. However, many data standards and knowledge organization schemas do not adequately describe key characteristics of the sites, despite their centrality to research projects. Through work conducted as part of the IMLS-funded Site-Based Data Curation (SBDC) project, we developed a Minimum Information Framework (MIF) for site-based science, in which "information about a site's structure" is considered a core class of information. Here we present our empirically-derived information framework, as well as the methods used to create it. We believe these approaches will lead to the development of more effective data repositories and tools, and thereby will reduce "data friction" in interdisciplinary, yet site-based, geoscience workflows. The Minimum Information Framework for Site-based Research was developed through work at two scientifically significant sites: the hot springs at Yellowstone National Park, which are key to geobiology research; and the La Brea Tar Pits, an important paleontology locality in Southern California. We employed diverse methods of participatory engagement, in which key stakeholders at our sites (e.g. curators, collections managers, researchers, permit officers) were consulted through workshops, focus groups, interviews, action research methods, and collaborative information modeling and systems analysis. These participatory approaches were highly effective in fostering on-going partnership among a diverse team of domain scientists, information scientists, and software developers. The MIF developed in this work may be viewed as a "proto-standard" that can inform future repository development and data standards. Further, the approaches used to develop the MIF represent an important step toward systematic methods of developing geoscience data standards. Finally, we argue that organizing data around aspects of a site makes data collections more accessible to a range of scientific communities.
GeoSciML version 3: A GML application for geologic information
NASA Astrophysics Data System (ADS)
International Union of Geological Sciences, I. C.; Richard, S. M.
2011-12-01
After 2 years of testing and development, XML schemas for GeoSciML version 3 are now ready for application deployment. GeoSciML draws from many geoscience data modelling efforts to establish a common suite of feature types to represent information associated with geologic maps (materials, structures, and geologic units) and observations including structure data, samples, and chemical analyses. After extensive testing and use case analysis, in December 2008 the CGI Interoperability Working Group (IWG) released GeoSciML 2.0 as an application schema for basic geological information. GeoSciML 2.0 is in use to deliver geologic data by the OneGeology Europe portal, the Geological Survey of Canada Groundwater Information Network (GIN), and the AuScope Mineral Resources portal. GeoSciML version 3.0 updates the schema to OGC Geography Markup Language v3.2, re-engineers the patterns for associating element values with controlled vocabulary concepts, incorporates ISO 19156 Observations and Measurements constructs for representing numeric and categorical values and analytical data, incorporates EarthResourceML to represent mineral occurrences and mines, incorporates the GeoTime model to represent GSSPs and the stratigraphic time scale, and refactors the GeoSciML namespace to follow emerging ISO practices for decoupling dependencies between standardized namespaces. These changes will make it easier for data providers to link to standard vocabulary and registry services. The depth and breadth of GeoSciML remain largely unchanged, covering the representation of geologic units, earth materials and geologic structures. ISO 19156 elements and patterns are used to represent sampling features such as boreholes and rock samples, as well as geochemical and geochronologic measurements. Geologic structures include shear displacement structures (brittle faults and ductile shears), contacts, folds, foliations, lineations and structures with no preferred orientation (e.g. 'miarolitic cavities'). The Earth material package allows for the description of both individual components, such as minerals, and compound materials, such as rocks or unconsolidated materials. Provision is made for alteration, weathering, metamorphism, particle geometry, fabric, and petrophysical descriptions. Mapped features describe the shape of geological features using standard GML geometries, such as polygons, lines, points or 3D volumes. Geological events provide the age, process and environment of formation of geological features. The Earth Resource section includes features to represent mineral occurrences and mines and associated human activities independently. This addition allows description of resources and reserves that can comply with national and internationally accepted reporting codes. GeoSciML v3 is under consideration as the data model for INSPIRE annex 2 geologic reporting in Europe.
Discrete Haar transform and protein structure.
Morosetti, S
1997-12-01
The discrete Haar transform of the sequence of the backbone dihedral angles (phi and psi) was performed over a set of X-ray protein structures of high resolution from the Brookhaven Protein Data Bank. Afterwards, new dihedral angles were calculated by the inverse transform, using a growing number of Haar functions, from the lower to the higher degree. New structures were obtained using these dihedral angles, with standard values for bond lengths and angles, and with omega = 0 degrees. The reconstructed structures were compared with the experimental ones and analyzed by visual inspection and statistical analysis. When half of the Haar coefficients were used, the reconstructed structures had not yet collapsed to a tertiary folding, but most of the secondary motifs were already realized. These results indicate a substantial separation of structural information in the Haar transform space, with the secondary structural information present mainly in the Haar coefficients of lower degree and the tertiary information present in the higher-degree coefficients. Because of this separation, the representation of folded structures in the Haar transform space seems a promising candidate for addressing the problem of premature convergence in genetic algorithms.
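The truncated-reconstruction experiment can be approximated with PyWavelets; in this sketch random angles stand in for real phi values extracted from PDB structures:

```python
# A sketch of the reconstruction experiment with PyWavelets: Haar-transform
# a dihedral-angle sequence, zero the finest-detail coefficient levels, invert.
# Random angles stand in for phi values read from actual PDB structures.
import numpy as np
import pywt

phi = np.random.uniform(-180, 180, size=128)   # stand-in backbone phi angles

coeffs = pywt.wavedec(phi, "haar")             # full multilevel Haar transform
# Keep the coarse (low-degree) levels that carry secondary-structure signal;
# zero out roughly the finest half of the detail levels.
for level in range(len(coeffs) // 2 + 1, len(coeffs)):
    coeffs[level] = np.zeros_like(coeffs[level])

phi_reconstructed = pywt.waverec(coeffs, "haar")[:len(phi)]
print(np.abs(phi - phi_reconstructed).mean())  # mean angular reconstruction error
```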
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
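A sketch of the cluster likelihood as we read the approach follows (consult the paper for the exact expression and its derivation); it depends only on the Pearson correlation matrix, as the abstract states:

```python
# A sketch of the cluster log-likelihood; C is the Pearson correlation matrix
# and labels[i] gives the cluster of series i. For a cluster of size n, the
# internal correlation c (sum of C over the cluster block) ranges from n
# (uncorrelated, diagonal only) up to n**2 (perfectly correlated).
import numpy as np

def cluster_log_likelihood(C, labels):
    total = 0.0
    for s in np.unique(labels):
        idx = np.where(labels == s)[0]
        n = len(idx)
        if n < 2:
            continue                     # singleton clusters contribute nothing
        c = C[np.ix_(idx, idx)].sum()    # includes the diagonal, which sums to n
        total += 0.5 * (np.log(n / c)
                        + (n - 1) * np.log((n**2 - n) / (n**2 - c)))
    return total
```

A greedy merging loop that accepts only merges increasing this likelihood reproduces the parameter-free flavor of the method.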
Building the United States National Vegetation Classification
Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.
2012-01-01
The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.
Ding, Hang
2014-01-01
Structures in recurrence plots (RPs), preserving the rich information of nonlinear invariants and trajectory characteristics, have been increasingly analyzed in dynamic discrimination studies. Conventional analysis of RPs mainly quantifies the overall diagonal and vertical line structures through a method called recurrence quantification analysis (RQA). This study explores the information in RPs more extensively by quantifying local complex RP structures. To do this, an approach was developed to analyze the combination of three major RQA variables: determinism, laminarity, and recurrence rate (DLR) in a metawindow moving over a RP. It was then evaluated in two experiments discriminating (1) ideal nonlinear dynamic series emulated from the Lorenz system with different control parameters and (2) data sets of human heart rate regulation with normal sinus rhythm (n = 18) and congestive heart failure (n = 29). Finally, DLR was compared with seven major RQA variables in terms of discriminatory power, measured by standardized mean difference (DSMD). In the two experiments, DLR resulted in the highest discriminatory power, with DSMD = 2.53 and 0.98, respectively, which were 7.41 and 2.09 times the best performance from RQA. The study also revealed that the optimal RP structures for the discriminations were neither typical diagonal structures nor vertical structures. These findings indicate that local complex RP structures contain rich information unexploited by RQA. Therefore, future research to extensively analyze complex RP structures could potentially improve the effectiveness of RP analysis in dynamic discrimination studies.
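For readers unfamiliar with the three ingredients of DLR, a compact numpy sketch of recurrence rate, determinism, and laminarity follows; the threshold and minimum line length are illustrative choices, not the paper's settings:

```python
# A compact numpy sketch of the three RQA variables combined in DLR:
# recurrence rate (RR), determinism (DET, diagonal lines) and laminarity
# (LAM, vertical lines).
import numpy as np

def recurrence_matrix(x, eps):
    d = np.abs(x[:, None] - x[None, :])   # 1-D series; use a norm for vectors
    return (d <= eps).astype(int)

def _line_fraction(lines, lmin=2):
    """Fraction of scanned recurrence points lying on runs of length >= lmin."""
    in_lines = total = 0
    for line in lines:
        run = 0
        for v in list(line) + [0]:        # trailing 0 flushes the last run
            if v:
                run += 1
                total += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return in_lines / total if total else 0.0

def rqa(R):
    n = R.shape[0]
    rr = R.mean()                                        # recurrence rate
    diags = (np.diagonal(R, k) for k in range(1 - n, n) if k != 0)
    det = _line_fraction(diags)                          # DET, main diagonal excluded
    lam = _line_fraction(R.T)                            # LAM, columns of R
    return rr, det, lam
```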
Gstruct: a system for extracting schemas from GML documents
NASA Astrophysics Data System (ADS)
Chen, Hui; Zhu, Fubao; Guan, Jihong; Zhou, Shuigeng
2008-10-01
Geography Markup Language (GML) has become the de facto standard for geographic information representation on the internet. A GML schema provides a way to define the structure, content, and semantics of GML documents. It contains useful structural information about GML documents and plays an important role in storing, querying and analyzing GML data. However, a GML schema is not mandatory, and it is common for a GML document to contain no schema. In this paper, we present Gstruct, a tool for GML schema extraction. Gstruct finds the features in the input GML documents, identifies geometry datatypes as well as simple datatypes, then integrates all these features and eliminates improper components to output an optimal schema. Experiments demonstrate that Gstruct is effective in extracting semantically meaningful schemas from GML documents.
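A toy version of the extraction idea, assuming ElementTree and ignoring the datatype inference and schema optimization that the real system performs:

```python
# A toy schema-inference pass: scan an XML/GML document and record, for each
# element, the set of child elements and attributes observed. Real schema
# extraction adds datatype detection, optionality, and geometry recognition.
import xml.etree.ElementTree as ET
from collections import defaultdict

def infer_structure(gml_path):
    children = defaultdict(set)
    attributes = defaultdict(set)
    for _, elem in ET.iterparse(gml_path):   # default 'end' events
        for child in elem:
            children[elem.tag].add(child.tag)
        attributes[elem.tag].update(elem.attrib)
    return children, attributes
```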
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Conceptual short term memory in perception and thought.
Potter, Mary C
2012-01-01
Conceptual short term memory (CSTM) is a theoretical construct that provides one answer to the question of how perceptual and conceptual processes are related. CSTM is a mental buffer and processor in which current perceptual stimuli and their associated concepts from long term memory (LTM) are represented briefly, allowing meaningful patterns or structures to be identified (Potter, 1993, 1999, 2009). CSTM is different from and complementary to other proposed forms of working memory: it is engaged extremely rapidly, has a large but ill-defined capacity, is largely unconscious, and is the basis for the unreflective understanding that is characteristic of everyday experience. The key idea behind CSTM is that most cognitive processing occurs without review or rehearsal of material in standard working memory and with little or no conscious reasoning. When one perceives a meaningful stimulus such as a word, picture, or object, it is rapidly identified at a conceptual level and in turn activates associated information from LTM. New links among concurrently active concepts are formed in CSTM, shaped by parsing mechanisms of language or grouping principles in scene perception and by higher-level knowledge and current goals. The resulting structure represents the gist of a picture or the meaning of a sentence, and it is this structure that we are conscious of and that can be maintained in standard working memory and consolidated into LTM. Momentarily activated information that is not incorporated into such structures either never becomes conscious or is rapidly forgotten. This whole cycle - identification of perceptual stimuli, memory recruitment, structuring, consolidation in LTM, and forgetting of non-structured material - may occur in less than 1 s when viewing a pictured scene or reading a sentence. The evidence for such a process is reviewed and its implications for the relation of perception and cognition are discussed.
NeuroNames: an ontology for the BrainInfo portal to neuroscience on the web.
Bowden, Douglas M; Song, Evan; Kosheleva, Julia; Dubach, Mark F
2012-01-01
BrainInfo (http://braininfo.org) is a growing portal to neuroscientific information on the Web. It is indexed by NeuroNames, an ontology designed to compensate for ambiguities in neuroanatomical nomenclature. The 20-year-old ontology continues to evolve toward the ideal of recognizing all names of neuroanatomical entities and accommodating all structural concepts about which neuroscientists communicate, including multiple concepts of entities for which neuroanatomists have yet to determine the best or 'true' conceptualization. To make the definitions of structural concepts unambiguous and terminologically consistent we created a 'default vocabulary' of unique structure names selected from existing terminology. We selected standard names by criteria designed to maximize practicality for use in verbal communication as well as computerized knowledge management. The ontology of NeuroNames accommodates synonyms and homonyms of the standard terms in many languages. It defines complex structures as models composed of primary structures, which are defined in unambiguous operational terms. NeuroNames currently relates more than 16,000 names in eight languages to some 2,500 neuroanatomical concepts. The ontology is maintained in a relational database with three core tables: Names, Concepts and Models. BrainInfo uses NeuroNames to index information by structure, to interpret users' queries and to clarify terminology on remote web pages. NeuroNames is a resource vocabulary of the NLM's Unified Medical Language System (UMLS, 2011) and the basis for the brain regions component of NIFSTD (NeuroLex, 2011). The current version has been downloaded to hundreds of laboratories for indexing data and linking to BrainInfo, which attracts some 400 visitors/day, downloading 2,000 pages/day.
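The three-core-table design can be sketched in SQL via sqlite3; the column names here are illustrative guesses, not the actual NeuroNames schema:

```python
# A minimal sketch of the Names/Concepts/Models design; columns are
# illustrative assumptions, not the actual NeuroNames schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Concepts (
    concept_id INTEGER PRIMARY KEY,
    definition TEXT NOT NULL            -- unambiguous operational definition
);
CREATE TABLE Names (
    name_id    INTEGER PRIMARY KEY,
    concept_id INTEGER REFERENCES Concepts(concept_id),
    name       TEXT NOT NULL,
    language   TEXT NOT NULL,           -- one of the eight supported languages
    is_default INTEGER DEFAULT 0        -- member of the default vocabulary?
);
CREATE TABLE Models (
    model_id        INTEGER PRIMARY KEY,
    complex_concept INTEGER REFERENCES Concepts(concept_id),
    primary_concept INTEGER REFERENCES Concepts(concept_id)  -- component part
);
""")
```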
Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan
2005-08-15
We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-μm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.
A haptic-inspired audio approach for structural health monitoring decision-making
NASA Astrophysics Data System (ADS)
Mao, Zhu; Todd, Michael; Mascareñas, David
2015-03-01
Haptics is the field at the interface of human touch (tactile sensation) and classification, whereby tactile feedback is used to train and inform a decision-making process. In structural health monitoring (SHM) applications, haptic devices have been introduced and applied in a simplified laboratory-scale scenario, in which nonlinearity, representing the presence of damage, was encoded into a vibratory manual interface. In this paper, the "spirit" of haptics is adopted, but here ultrasonic guided wave scattering information is transformed into audio (rather than tactile) range signals. After sufficient training, the structural damage condition, including occurrence and location, can be identified through the encoded audio waveforms. Different algorithms are employed in this paper to generate the transformed audio signals; the performance of each encoding algorithm is compared, and also compared with standard machine learning classifiers. In the long run, this haptic-inspired decision-making approach aims to detect and classify structural damage in more rigorous environments and to approach a baseline-free implementation with embedded temperature compensation.
Geologic and mineral and water resources investigations in western Colorado using ERTS-1 data
NASA Technical Reports Server (NTRS)
Knepper, D. H. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Most of the geologic information in ERTS-1 imagery can be extracted from bulk-processed black and white transparencies by a skilled interpreter using standard photogeologic techniques. In central and western Colorado, the detectability of lithologic contacts on ERTS-1 imagery is closely related to the time of year the imagery was acquired. Geologic structures are the most readily extractable type of geologic information contained in ERTS images. Major tectonic features and associated minor structures can be rapidly mapped, allowing the geologic setting of a large region to be quickly assessed. Trends of geologic structures in younger sedimentary rocks appear to strongly parallel linear trends in older metamorphic and igneous basement terrain. Linears and color anomalies mapped from ERTS imagery are closely related to loci of known mineralization in the Colorado mineral belt.
Towards Effective Clustering Techniques for the Analysis of Electric Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh
2013-11-30
Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data, with implications for several domains including power systems.
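One concrete reading of "exploiting electrical structure", as a hedged sketch rather than the authors' method: weight graph edges by admittance magnitude and cluster buses in the Laplacian eigenspace:

```python
# Spectral clustering on an electrical network: edges weighted by |Y_ij|
# admittance magnitudes, graph Laplacian eigenvectors, then k-means.
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clusters(W, k):
    """W: symmetric (n, n) nonnegative weight matrix, e.g. |Y_ij|; k clusters."""
    L = np.diag(W.sum(axis=1)) - W        # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)           # eigenvectors, ascending eigenvalues
    embedding = vecs[:, 1:k + 1]          # skip the trivial constant eigenvector
    _, labels = kmeans2(embedding, k, minit="++")
    return labels
```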
Wolle, Patrik; Müller, Matthias P; Rauh, Daniel
2018-03-16
The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution progresses, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.
Using ontologies to integrate and share resuscitation data from diverse medical devices.
Thorsen, Kari Anne Haaland; Eftestøl, Trygve; Tøssebro, Erlend; Rong, Chunming; Steen, Petter Andreas
2009-05-01
To propose a method for standardised data representation and demonstrate a technology that makes it possible to translate data from device-dependent formats to this standard representation format. Outcome statistics vary between emergency medical systems organising resuscitation services. Such differences indicate a potential for improvement by identifying factors affecting outcome, but the data subject to analysis have to be comparable. Modern technology for communicating information makes it possible to structure, store and transfer data flexibly. Ontologies describe entities in the world and how they relate. Letting different computer systems refer to the same ontology results in a common understanding of data content. Information on therapy such as shock delivery, chest compressions and ventilation should be defined and described in a standardised ontology to enable comparing and combining data from diverse sources. By adding rules and logic, data can be merged and combined in new ways to produce new information. An example ontology is designed to demonstrate the feasibility and value of such a standardised structure. The proposed technology makes possible the capture and storage of data from different devices in a structured and standardised format. Data can easily be transformed to this standardised format, compared and combined independent of the original structure.
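A toy rdflib sketch of the pattern: two devices emit RDF against a shared (here invented) vocabulary, so their graphs merge without any translation code:

```python
# Two devices emitting RDF against a shared, placeholder resuscitation
# vocabulary; merging into one graph requires no format translation.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

RESUS = Namespace("http://example.org/resus#")   # placeholder ontology IRI

g = Graph()
g.bind("resus", RESUS)

# Defibrillator record: a shock delivery event with energy in joules.
shock = RESUS["event/shock-001"]
g.add((shock, RDF.type, RESUS.ShockDelivery))
g.add((shock, RESUS.energyJoules, Literal(200, datatype=XSD.integer)))

# Monitor record from another device, same shared vocabulary.
cpr = RESUS["event/cpr-001"]
g.add((cpr, RDF.type, RESUS.ChestCompressionSeries))
g.add((cpr, RESUS.ratePerMinute, Literal(110, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```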
Introducing glycomics data into the Semantic Web
2013-01-01
Background: Glycoscience is a research field focusing on complex carbohydrates (otherwise known as glycans), which can, for example, serve as “switches” that toggle between different functions of a glycoprotein or glycolipid. Due to the advancement of glycomics technologies that are used to characterize glycan structures, many glycomics databases are now publicly available and provide useful information for glycoscience research. However, these databases have almost no link to other life science databases. Results: In order to implement support for the Semantic Web most efficiently for glycomics research, the developers of major glycomics databases agreed on a minimal standard for representing glycan structure and annotation information using RDF (Resource Description Framework). Moreover, all of the participants implemented this standard prototype and generated preliminary RDF versions of their data. To test the utility of the converted data, all of the data sets were uploaded into a Virtuoso triple store, and several SPARQL queries were tested as “proofs-of-concept” to illustrate the utility of the Semantic Web in querying across databases which were originally difficult to implement. Conclusions: We were able to successfully retrieve information by linking UniCarbKB, GlycomeDB and JCGGDB in a single SPARQL query to obtain our target information. We also tested queries linking UniProt with GlycoEpitope as well as lectin data with GlycomeDB through PDB. As a result, we have been able to link proteomics data with glycomics data through the implementation of Semantic Web technologies, allowing for more flexible queries across these domains. PMID:24280648
Introducing glycomics data into the Semantic Web.
Aoki-Kinoshita, Kiyoko F; Bolleman, Jerven; Campbell, Matthew P; Kawano, Shin; Kim, Jin-Dong; Lütteke, Thomas; Matsubara, Masaaki; Okuda, Shujiro; Ranzinger, Rene; Sawaki, Hiromichi; Shikanai, Toshihide; Shinmachi, Daisuke; Suzuki, Yoshinori; Toukach, Philip; Yamada, Issaku; Packer, Nicolle H; Narimatsu, Hisashi
2013-11-26
Glycoscience is a research field focusing on complex carbohydrates (otherwise known as glycans), which can, for example, serve as "switches" that toggle between different functions of a glycoprotein or glycolipid. Due to the advancement of glycomics technologies that are used to characterize glycan structures, many glycomics databases are now publicly available and provide useful information for glycoscience research. However, these databases have almost no link to other life science databases. In order to implement support for the Semantic Web most efficiently for glycomics research, the developers of major glycomics databases agreed on a minimal standard for representing glycan structure and annotation information using RDF (Resource Description Framework). Moreover, all of the participants implemented this standard prototype and generated preliminary RDF versions of their data. To test the utility of the converted data, all of the data sets were uploaded into a Virtuoso triple store, and several SPARQL queries were tested as "proofs-of-concept" to illustrate the utility of the Semantic Web in querying across databases which were originally difficult to implement. We were able to successfully retrieve information by linking UniCarbKB, GlycomeDB and JCGGDB in a single SPARQL query to obtain our target information. We also tested queries linking UniProt with GlycoEpitope as well as lectin data with GlycomeDB through PDB. As a result, we have been able to link proteomics data with glycomics data through the implementation of Semantic Web technologies, allowing for more flexible queries across these domains.
Nuclear power plant Generic Aging Lessons Learned (GALL). Main report and appendix A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaza, K.E.; Diercks, D.R.; Holland, J.W.
The purpose of this generic aging lessons learned (GALL) review is to provide a systematic review of plant aging information in order to assess materials and component aging issues related to continued operation and license renewal of operating reactors. Literature on mechanical, structural, and thermal-hydraulic components and systems reviewed consisted of 97 Nuclear Plant Aging Research (NPAR) reports, 23 NRC Generic Letters, 154 Information Notices, 29 Licensee Event Reports (LERs), 4 Bulletins, and 9 Nuclear Management and Resources Council Industry Reports (NUMARC IRs); literature on electrical components and systems reviewed consisted of 66 NPAR reports, 8 NRC Generic Letters, 111 Information Notices, 53 LERs, 1 Bulletin, and 1 NUMARC IR. More than 550 documents were reviewed. The results of these reviews were systematized using a standardized GALL tabular format and standardized definitions of aging-related degradation mechanisms and effects. The tables are included in volumes 1 and 2 of this report. A computerized database has also been developed for all review tables and can be used to expedite the search for desired information on structures, components, and relevant aging effects. A survey of the GALL tables reveals that all ongoing significant component aging issues are currently being addressed by the regulatory process. However, the aging of what are termed passive components has been highlighted for continued scrutiny. This document is Volume 1, consisting of the executive summary, summary and observations, and an appendix listing the GALL literature review tables.
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
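In generic penalized least-squares form (a sketch of the standard setup, not necessarily Cooley's exact formulation), the prior estimates enter as a second sum of squares:

```latex
% Generic penalized form for regression with prior information (a sketch;
% see Cooley (1982) for the exact estimator and auxiliary parameters).
\min_{\beta}\; S(\beta) =
  \bigl(y - f(\beta)\bigr)^{\mathsf T}\,\omega\,\bigl(y - f(\beta)\bigr)
  \;+\; \mu\,\bigl(p - D\beta\bigr)^{\mathsf T} V^{-1} \bigl(p - D\beta\bigr)
```

Here y are the observations, f the nonlinear flow model, ω the observation weight matrix, p the prior estimates of the linear combinations Dβ of the parameters, V their assumed covariance, and μ a weighting parameter of the kind the abstract describes optimizing to keep the bias small.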
Current Standardization and Cooperative Efforts Related to Industrial Information Infrastructures.
1993-05-01
Topics covered include data management systems (components used to store, manage, and retrieve data, including knowledge bases and database management); application development tools and methods; X/Open and POSIX APIs; the Integrated Design Support System (IDS); knowledge-based systems (KBSs); structured methods such as IDEF1x, Yourdon, Jackson System Design (JSD), and Structured Systems Development (SSD); and the Semantic Unification Meta-Model.
ERIC Educational Resources Information Center
Armour, Kathleen M.; Duncombe, Rebecca
2004-01-01
There is a growing recognition that teachers' learning, and effective policies and structures to support it, should be at the heart of government polices to improve standards in education (Day, 1999). In England, the continuing professional development (CPD) landscape for teachers is changing; and professional development in physical education…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-27
... quantifying the results of the project should be submitted to the FHWA. 10. Proposals should not exceed 20... communications plan, a risk management plan and a work breakdown structure. V. Application Review Information... other locations/projects and possibly serve as a model for other locations. B. Review Standards 1. All...
Chinese green product standards: international experience and pathway for a unified system
NASA Astrophysics Data System (ADS)
Yun, Fu; Ling, Lin; Dongfeng, Gao; Shuo, Yang
2017-11-01
The establishment of a unified green product standard system is of great importance for the effective supply of green products and for meeting the trend of consumption upgrading. It also helps reduce the cost of green information disclosure for enterprises and facilitates supply-side structural reform. Based on the experience of developing and implementing green product standards in the EU, Germany, America, and Japan, combined with current Chinese standard systems covering environmental protection, energy conservation, water conservation, low carbon, recycling, regeneration, and organic products, and adopting life-cycle thinking, this paper puts forward basic requirements on organizations, including pollutant emissions, establishment of management systems, energy conservation and emission reduction technology, and green supply chain management, and proposes indicator requirements on products, including resource attributes, energy attributes, environmental attributes and quality attributes, so as to guide the establishment of green product evaluation standards in the context of China.
Structures data collection for The National Map using volunteered geographic information
Poore, Barbara S.; Wolf, Eric B.; Korris, Erin M.; Walter, Jennifer L.; Matthews, Greg D.
2012-01-01
The U.S. Geological Survey (USGS) has historically sponsored volunteered data collection projects to enhance its topographic paper and digital map products. This report describes one phase of an ongoing project to encourage volunteers to contribute data to The National Map using online editing tools. The USGS recruited students studying geographic information systems (GIS) at the University of Colorado Denver and the University of Denver in the spring of 2011 to add data on structures - manmade features such as schools, hospitals, and libraries - to four quadrangles covering metropolitan Denver. The USGS customized a version of the online Potlatch editor created by the OpenStreetMap project and populated it with 30 structure types drawn from the Geographic Names Information System (GNIS), a USGS database of geographic features. The students corrected the location and attributes of these points and added information on structures that were missing. There were two rounds of quality control. Student volunteers reviewed each point, and an in-house review of each point by the USGS followed. Nine-hundred and thirty-eight structure points were initially downloaded from the USGS database. Editing and quality control resulted in 1,214 structure points that were subsequently added to The National Map. A post-project analysis of the data shows that after student edit and peer review, 92 percent of the points contributed by volunteers met National Map Accuracy Standards for horizontal accuracy. Lessons from this project will be applied to later phases. These include: simplifying editing tasks and the user interfaces, stressing to volunteers the importance of adding structures that are missing, and emphasizing the importance of conforming to editorial guidelines for formatting names and addresses of structures. The next phase of the project will encompass the entire State of Colorado and will allow any citizen to contribute structures data. Volunteers will benefit from this project by engaging with their local geography and contributing to a national resource of topographic information that remains in the public domain for anyone to download.
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using a XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard of the telemetry and command database definition formation will allow dissimilar systems to communicate without the need for expensive mission specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, ASIST ground systems, Portable spacecraft simulator (PSS), a front-end system, and Integrated Trending and Plotting System (ITPS) successfully. This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
NASA Astrophysics Data System (ADS)
Dumitrache, P.; Goanţă, A. M.
2017-08-01
The ability of a cab to ensure operator protection under the shock loading that occurs when a machine rolls over, or when the cab is struck by falling objects, is one of the most important performance criteria that machines and mobile equipment must satisfy. The experimental method provides the most accurate information on the behaviour of protective structures, but generates high costs due to the experimental installations and the structures that may be compromised during the experiments. In these circumstances, numerical simulation of the actual problem (mechanical shock applied to a strength structure) is a perfectly viable alternative, given that current hardware and software performance provides the support necessary to obtain results with an acceptable level of accuracy. In this context, the paper proposes using FEA platforms for virtual testing of the actual strength structures of cabs, using finite element models based on 3D models generated in CAD environments. In addition to the economic advantage mentioned above, and although the results obtained by simulation using the finite element method are affected by a number of simplifying assumptions, adequate modelling of the phenomenon can successfully support the design of structures that meet the safety performance criteria imposed by current standards. The first section of the paper presents the general context of the safety performance requirements imposed by current standards on cab strength structures. The following section is dedicated to the peculiarities of finite element modelling in problems that require simulation of the behaviour of structures subjected to shock loading. The final section is dedicated to a case study and to future objectives.
Development of a methodology for structured reporting of information in echocardiography.
Homorodean, Călin; Olinic, Maria; Olinic, Dan
2012-03-01
In order to conduct research relying on ultrasound images, it is necessary to access a large number of relevant cases represented by images and their interpretation. The DICOM standard defines the structured reporting information object. Templates are tree-like structures which offer structural guidance in report construction. Our aim was to lay the foundations of a structured reporting methodology in echocardiography through the generation of a consistent set of DICOM templates. We developed an information system with the ability to manage echocardiographic images and structured reports. In order to perform a complete description of the cardiac structures, we used 1900 coded concepts organized into 344 contexts by their semantic meaning in a variety of cardiac diseases. We developed 30 templates, with up to 10 nesting levels. The list of templates has a pyramid-like architecture. Two templates are used for reporting every measurement and description: "EchoMeasurement" and "EchoDescription". Intermediate-level templates specify how to report the features of echo-Doppler findings: "Spectral Curve", "Color Jet", "Intracardiac mass". Templates for every cardiovascular structure include the previous ones. "Echocardiography Procedure Report" includes all other templates. The templates were tested in reporting echo features of 100 patients by analyzing 500 DICOM images. The benefits of these templates were demonstrated during the testing process, through the quality of the echocardiography report, the ability to argue and to link every diagnostic feature to a defining image, and by opening up opportunities for education and research. In the future, our template-based reporting methodology might be extended to other imaging modalities.
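The nesting that such templates prescribe maps directly onto DICOM SR content items; a minimal pydicom sketch with placeholder codes follows (RelationshipType and other attributes required for a fully conformant SR document are omitted for brevity):

```python
# A minimal pydicom sketch of nested SR content items: a report container
# holding one numeric measurement. Code values are placeholders, not real
# DICOM/LOINC codes; this is not a complete, conformant SR object.
from pydicom.dataset import Dataset

def code_item(value, scheme, meaning):
    c = Dataset()
    c.CodeValue, c.CodingSchemeDesignator, c.CodeMeaning = value, scheme, meaning
    return c

root = Dataset()
root.ValueType = "CONTAINER"
root.ConceptNameCodeSequence = [
    code_item("ECHO-RPT", "99LOCAL", "Echocardiography Procedure Report")]
root.ContinuityOfContent = "SEPARATE"

num = Dataset()
num.ValueType = "NUM"
num.ConceptNameCodeSequence = [
    code_item("LVEF", "99LOCAL", "LV ejection fraction")]
value = Dataset()
value.NumericValue = "60"
value.MeasurementUnitsCodeSequence = [code_item("%", "UCUM", "percent")]
num.MeasuredValueSequence = [value]

root.ContentSequence = [num]   # deeper templates nest further containers here
```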
National Center for Standards and Certification Information: Service and programs
NASA Technical Reports Server (NTRS)
Overman, Joanne
1994-01-01
The National Center for Standards and Certification Information (NCSCI) provides information on U.S., foreign and international voluntary standards, government regulations, and conformity assessment procedures for non-agricultural products. The Center serves as a referral service and focal point in the United States for information on standards and standards-related information. NCSCI staff respond to inquiries, maintain a reference collection of standards and standards-related documents, and serve as the U.S. inquiry point for information to and from foreign countries.
Current National Approach to Healthcare ICT Standardization: Focus on Progress in New Zealand.
Park, Young-Taek; Atalag, Koray
2015-07-01
Many countries try to deliver high-quality healthcare services efficiently at lower and manageable costs, and healthcare information and communication technology (ICT) standardisation may play an important role in achieving this. New Zealand provides a good model of healthcare ICT standardisation. The purpose of this study was to review the current healthcare ICT standardisation and progress in New Zealand. This study reviewed reports regarding healthcare ICT standardisation in New Zealand. We also investigated relevant websites related to healthcare ICT standards, most of which were run by the government. We then summarised the governance structure, standardisation processes, and their outputs regarding the current healthcare ICT standards status of New Zealand. New Zealand government bodies have established a set of healthcare ICT standards and clear guidelines and procedures for healthcare ICT standardisation. Government has actively participated in various enactments of healthcare ICT standards from the inception of ideas to their eventual retirement. Great achievements in eHealth have already been realized, and various standards are currently utilised at all levels of healthcare regionally and nationally. Standard clinical terminologies, such as the International Classification of Diseases (ICD) and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT), have been adopted, and Health Level Seven (HL7) standards are actively used in health information exchanges. The government of New Zealand has well-organised ICT institutions, guidelines, and regulations, as well as various programs, such as e-Medications and integrated care services. Local district health boards directly running hospitals have effectively adopted various new ICT standards. They might already be benefiting from the improved efficiency resulting from healthcare ICT standardisation.
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children's growth in the United States after 1870. The Children's Bureau's 1918 "Weighing and Measuring Test" measured two million children to produce the first official American growth norms. Individual data for 14,000 children survives from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. As well as anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: -0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial. Children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children's stature. PMID:28943749
Information Metacatalog for a Grid
NASA Technical Reports Server (NTRS)
Kolano, Paul
2007-01-01
SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linking Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from the native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available or installed in package form, SWIM also crawls the set of relevant paths from the File System Hierarchy Standard, which defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.
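To make the crawling step concrete, here is a minimal Python sketch of the idea (SWIM itself is written in Perl, and this is our illustration, not SWIM's actual code): it walks a few File System Hierarchy Standard directories and flags files whose first four bytes match the ELF magic number. The path list and function name are illustrative assumptions.

    import os

    # Illustrative FHS paths; a real crawler would cover the full standard.
    FHS_PATHS = ["/bin", "/sbin", "/usr/bin", "/usr/lib", "/usr/local/bin"]

    def find_elf_files(roots=FHS_PATHS):
        found = []
        for root in roots:
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            if f.read(4) == b"\x7fELF":  # ELF magic number
                                found.append(path)
                    except OSError:
                        continue  # unreadable or vanished file
        return found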
Factors shaping the evolution of electronic documentation systems
NASA Technical Reports Server (NTRS)
Dede, Christopher J.; Sullivan, Tim R.; Scace, Jacque R.
1990-01-01
The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.
The Future of Geospatial Standards
NASA Astrophysics Data System (ADS)
Bermudez, L. E.; Simonis, I.
2016-12-01
The OGC is an international not-for-profit standards development organization (SDO) committed to making quality standards for the geospatial community. A community of more than 500 member organizations with more than 6,000 people registered at the OGC communication platform drives the development of standards that are freely available for anyone to use and to improve sharing of the world's geospatial data. OGC standards are applied in a variety of application domains including Environment, Defense and Intelligence, Smart Cities, Aviation, Disaster Management, Agriculture, Business Development and Decision Support, and Meteorology. Profiles help to apply information models to different communities, adapting to the particular needs of each community while ensuring interoperability by using common base models and appropriate support services. Other standards address orthogonal aspects such as handling of Big Data, Crowd-sourced information, Geosemantics, or containers for offline data usage. Like most SDOs, the OGC develops and maintains standards through a formal consensus process under the OGC Standards Program (OGC-SP), wherein requirements and use cases are discussed in forums generally open to the public (Domain Working Groups, or DWGs), and Standards Working Groups (SWGs) are established to create standards. However, OGC is unique among SDOs in that it also operates the OGC Interoperability Program (OGC-IP) to provide real-world testing of existing and proposed standards. The OGC-IP is considered the experimental playground, where new technologies are researched and developed in a user-driven process. Its goal is to prototype, test, demonstrate, and promote OGC Standards in a structured environment. Results from the OGC-IP often become requirements for new OGC standards or identify deficiencies in existing OGC standards that can be addressed. This presentation will provide an analysis of the work advanced in the OGC consortium, including standards and testbeds, from which we can extract a trend for the future of geospatial standards. We see a number of key elements in focus, but simultaneously a broadening of standards to address particular communities' needs.
Environmental site assessments and audits: Building inspection requirements
NASA Astrophysics Data System (ADS)
Lange, John H.; Kaiser, Genevieve; Thomulka, Kenneth W.
1994-01-01
Environmental site assessment criteria were originally developed by organizations that focused, almost exclusively, on surface, subsurface, and pollution source contamination. Many of the hazards associated with indoor environments and building structures were traditionally not considered when evaluating sources of environmental pollution. Since a large number of building materials are potentially hazardous, careful evaluation is necessary. Until recently, little information on building inspection requirements for environmental problems has been published. Traditionally, asbestos has been the main component of concern, but ever-changing environmental standards have dramatically expanded the scope of building surveys. Indoor environmental concerns, for example, currently include formaldehyde, lead-based paint, polychlorinated biphenyls, radon, and indoor air pollution. Environmental regulations are being expanded and developed that specifically include building structures. These regulatory standards are being triggered by an increased awareness of health effects from indoor exposure, fires, spills, and other accidents that have resulted in injury, death, and financial loss. This article discusses various aspects of assessments for building structures.
Marenco, Luis; Li, Yuli; Martone, Maryann E; Sternberg, Paul W; Shepherd, Gordon M; Miller, Perry L
2008-09-01
This paper describes a pilot query interface that has been constructed to help us explore a "concept-based" approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface.
Building information modelling review with potential applications in tunnel engineering of China.
Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin
2017-08-01
Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycles and increased security risks. To promote the development of tunnel engineering in China, this paper draws on actual cases, including the Xingu Mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently applied mainly during the design stage rather than during the construction and operation stages. The application of BIM technology in tunnel engineering still faces many problems, such as a lack of standards, incompatibility between different software packages, disorganized management, complex integration with GIS (Geographic Information System), a low utilization rate and poor awareness. In this study, through a summary of related research results and engineering cases, suggestions are introduced and an outlook for BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation and maintenance.
Imam, Fahim T.; Larson, Stephen D.; Bandrowski, Anita; Grethe, Jeffery S.; Gupta, Amarnath; Martone, Maryann E.
2012-01-01
An initiative of the NIH Blueprint for neuroscience research, the Neuroscience Information Framework (NIF) project advances neuroscience by enabling discovery and access to public research data and tools worldwide through an open source, semantically enhanced search portal. One of the critical components of the overall NIF system, the NIF Standardized Ontologies (NIFSTD), provides an extensive collection of standard neuroscience concepts along with their synonyms and relationships. The knowledge models defined in the NIFSTD ontologies enable an effective concept-based search over heterogeneous types of web-accessible information entities in NIF’s production system. NIFSTD covers major domains in neuroscience, including diseases, brain anatomy, cell types, sub-cellular anatomy, small molecules, techniques, and resource descriptors. Since the first production release in 2008, NIF has grown significantly in content and functionality, particularly with respect to the ontologies and ontology-based services that drive the NIF system. We present here the structure, design principles, community engagement, and current state of the NIFSTD ontologies. PMID:22737162
Li, Yuli; Martone, Maryann E.; Sternberg, Paul W.; Shepherd, Gordon M.; Miller, Perry L.
2009-01-01
This paper describes a pilot query interface that has been constructed to help us explore a “concept-based” approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface. PMID:18953674
Building information modelling review with potential applications in tunnel engineering of China
Zhou, Weihong; Qin, Haiyang; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin
2017-01-01
Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycles and increased security risks. To promote the development of tunnel engineering in China, this paper draws on actual cases, including the Xingu Mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently applied mainly during the design stage rather than during the construction and operation stages. The application of BIM technology in tunnel engineering still faces many problems, such as a lack of standards, incompatibility between different software packages, disorganized management, complex integration with GIS (Geographic Information System), a low utilization rate and poor awareness. In this study, through a summary of related research results and engineering cases, suggestions are introduced and an outlook for BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation and maintenance. PMID:28878970
Building information modelling review with potential applications in tunnel engineering of China
NASA Astrophysics Data System (ADS)
Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin
2017-08-01
Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycles and increased security risks. To promote the development of tunnel engineering in China, this paper draws on actual cases, including the Xingu Mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently applied mainly during the design stage rather than during the construction and operation stages. The application of BIM technology in tunnel engineering still faces many problems, such as a lack of standards, incompatibility between different software packages, disorganized management, complex integration with GIS (Geographic Information System), a low utilization rate and poor awareness. In this study, through a summary of related research results and engineering cases, suggestions are introduced and an outlook for BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation and maintenance.
Smith, David; Woodman, Richard; Drummond, Aaron; Battersby, Malcolm
2016-03-30
Knowledge of a problem gambler's underlying gambling related cognitions plays an important role in treatment planning. The Gambling Related Cognitions Scale (GRCS) is therefore frequently used in clinical settings for screening and evaluation of treatment outcomes. However, GRCS validation studies have generated conflicting results regarding its latent structure using traditional confirmatory factor analyses (CFA). This may partly be due to the rigid constraints imposed on cross-factor loadings with traditional CFA. The aim of this investigation was to determine whether a Bayesian structural equation modelling (BSEM) approach to examination of the GRCS factor structure would better replicate substantive theory and also inform model re-specifications. Participants were 454 treatment-seekers at first presentation to a gambling treatment centre between January 2012 and December 2014. Model fit indices were well below acceptable standards for CFA. In contrast, the BSEM model which included small informative priors for the residual covariance matrix in addition to cross-loadings produced excellent model fit for the original hypothesised factor structure. The results also informed re-specification of the CFA model which provided more reasonable model fit. These conclusions have implications that should be useful to both clinicians and researchers evaluating measurement models relating to gambling related cognitions in treatment-seekers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Computer-Aided Drug Design Methods.
Yu, Wenbo; MacKerell, Alexander D
2017-01-01
Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physicochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or to guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discovery.
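As a toy illustration of the LBDD idea, the sketch below fits a linear quantitative structure-activity relationship from physicochemical descriptors to activity. All names and data are random stand-ins of our own, not values or methods from the chapter.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((50, 3))            # stand-in descriptors (e.g. logP, MW, H-bond donors)
    true_w = np.array([1.5, -0.8, 0.3])
    y = X @ true_w + 0.1 * rng.standard_normal(50)   # stand-in activities

    # Least-squares fit of activity = X w + b, the simplest QSAR model.
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)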
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
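A minimal sketch of syntactic log clustering, under the assumption (ours, not necessarily the paper's exact algorithm) that masking variable fields such as numbers and hex identifiers exposes a message's underlying template:

    import re
    from collections import defaultdict

    def template(msg):
        # Mask variable fields so syntactically identical messages collide.
        msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
        msg = re.sub(r"\d+", "<NUM>", msg)
        return msg

    def cluster(messages):
        groups = defaultdict(list)
        for m in messages:
            groups[template(m)].append(m)
        return groups

    logs = ["node 12 failed at 0x1f3a", "node 7 failed at 0x0b11"]
    print(cluster(logs))  # both map to "node <NUM> failed at <HEX>"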
DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS ...
DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
Richard, Ann M.
US Environmental Protection Agency, Research Triangle Park, NC, USA
Distributed: a decentralized set of standardized, field-delimited databases, each separately authored and maintained, that are able to accommodate diverse toxicity data content. Structure-Searchable: standard-format (SDF) structure-data files that can be readily imported into available chemical relational databases and structure-searched. Tox: toxicity data as it exists in widely disparate forms in current public databases, spanning diverse toxicity endpoints, test systems, levels of biological content, degrees of summarization, and information content. INTRODUCTION: The economic and social pressures to reduce the need for animal testing and to better anticipate the potential for human and eco-toxicity of environmental, industrial, or pharmaceutical chemicals are as pressing today as at any time prior. However, the goal of predicting chemical toxicity in its many manifestations, the 'T' in 'ADMET' (absorption, distribution, metabolism, elimination, toxicity), remains one of the most difficult and largely unmet challenges in a chemical screening paradigm [1]. It is widely acknowledged that the single greatest hurdle to improving structure-activity relationship (SAR) toxicity prediction capabilities, in both the pharmaceutical and environmental regulation arenas, is the lack of suffici
Sornborger, Andrew T; Lauderdale, James D
2016-11-01
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
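The core contrast can be sketched in a few lines of Python: a zero-lag method eigendecomposes C(0) (ordinary PCA), whereas the approach above draws on C(τ) for all lags. The multitaper machinery of the paper is omitted here; this is only the covariance bookkeeping, written as our own illustration.

    import numpy as np

    def lagged_covariances(X, max_lag):
        # X: (channels, time) series; returns C[tau] for tau = 0..max_lag.
        X = X - X.mean(axis=1, keepdims=True)
        T = X.shape[1]
        return np.stack([X[:, :T - tau] @ X[:, tau:].T / (T - tau)
                         for tau in range(max_lag + 1)])

    X = np.random.randn(5, 1000)
    C = lagged_covariances(X, max_lag=20)
    w, V = np.linalg.eigh(C[0])   # zero-lag decomposition (PCA) uses C[0] only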
Doyle, J D
1999-01-01
The roles of hospital librarians have evolved from keeping print materials to serving as a focal point for information services and structures within the hospital. Concepts that emerged from the Integrated Academic Information Management Systems (IAIMS) as described in the Matheson Report and the 1994 Joint Commission on Accreditation of Healthcare Organizations (JCAHO) standards have combined to propel hospital libraries into many new roles and functions. This paper will review the relationship of the two frameworks, provide a view of their commonalities, and establish the advantages of both for hospital librarianship as a profession. PMID:10550022
Kelly, Bridget; King, Lesley; Bauman, Adrian E; Baur, Louise A; Macniven, Rona; Chapman, Kathy; Smith, Ben J
2014-01-01
Children's high participation in organised sport in Australia makes sport an ideal setting for health promotion. This study aimed to generate consensus on priority health promotion objectives for community sports clubs, based on informed expert judgements. Delphi survey using three structured questionnaires. Forty-six health promotion, nutrition, physical activity and sport management/delivery professionals were approached to participate in the survey. Questionnaires used an iterative process to determine aspects of sports clubs deemed necessary for developing healthy sporting environments for children. Initially, participants were provided with a list of potential standards for a range of health promotion areas and asked to rate standards based on their importance and feasibility, and any barriers to implementation. Subsequently, participants were provided with information that summarised ratings for each standard to indicate convergence of the group, and asked to review and potentially revise their responses where they diverged. In a third round, participants ranked confirmed standards by priority. 26 professionals completed round 1, 21 completed round 2, and 18 completed round 3. The highest ranked standards related to responsible alcohol practices, availability of healthy food and drinks at sports canteens, smoke-free club facilities, restricting the sale and consumption of alcohol during junior sporting activities, and restricting unhealthy food and beverage company sponsorship. Identifying and prioritising health promotion areas that are relevant to children's sports clubs assists in focusing public health efforts and may guide future engagement of sports clubs. Approaches for providing informational and financial support to clubs to operationalise these standards are proposed. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
45 CFR 170.207 - Vocabulary standards for representing electronic health information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.207 Vocabulary standards for representing electronic...
45 CFR 170.207 - Vocabulary standards for representing electronic health information.
Code of Federal Regulations, 2013 CFR
2013-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.207 Vocabulary standards for representing electronic...
45 CFR 170.207 - Vocabulary standards for representing electronic health information.
Code of Federal Regulations, 2014 CFR
2014-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.207 Vocabulary standards for representing electronic...
45 CFR 170.207 - Vocabulary standards for representing electronic health information.
Code of Federal Regulations, 2012 CFR
2012-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.207 Vocabulary standards for representing electronic...
Indices of polarimetric purity for biological tissues inspection
NASA Astrophysics Data System (ADS)
Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J.; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C.; Moreno, Ignacio; Campos, Juan
2018-02-01
We highlight the interest of using the Indices of Polarimetric Purity (IPPs) for biological tissue inspection. These are three polarimetric metrics focused on the study of the depolarizing behaviour of a sample. The IPPs have been recently proposed in the literature and provide different and more synthesized information than the commonly used depolarization metrics, such as the depolarization index (PΔ) or the depolarization power (Δ). Compared with standard polarimetric images of biological samples, IPPs enhance the contrast between different tissues of the sample and show differences between similar tissues that are not observed using other standard techniques. Moreover, they present further physical information related to the depolarization mechanisms inherent to different tissues. In addition, the algorithm does not require advanced calculations (as in the case of polar decompositions), making the indices of polarimetric purity fast and easy to implement. We also propose a pseudo-coloured image method which encodes the sample information as a function of the weights of the different indices. These images allow us to customize the visualization of samples and to highlight certain of their constitutive structures. The interest and potential of the IPP approach are experimentally illustrated throughout the manuscript by comparing polarimetric images of different ex-vivo samples obtained with standard polarimetric methods with those obtained from the IPP analysis. Enhanced contrast and retrieval of new information are experimentally obtained from the different IPP-based images.
The archiving and dissemination of biological structure data.
Berman, Helen M; Burley, Stephen K; Kleywegt, Gerard J; Markley, John L; Nakamura, Haruki; Velankar, Sameer
2016-10-01
The global Protein Data Bank (PDB) was the first open-access digital archive in biology. The history and evolution of the PDB are described, together with the ways in which molecular structural biology data and information are collected, curated, validated, archived, and disseminated by the members of the Worldwide Protein Data Bank organization (wwPDB; http://wwpdb.org). Particular emphasis is placed on the role of community in establishing the standards and policies by which the PDB archive is managed day-to-day. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
FDA toxicity databases and real-time data entry.
Arvidson, Kirk B
2008-11-15
Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.
XML-Based Generator of C++ Code for Integration With GUIs
NASA Technical Reports Server (NTRS)
Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard
2003-01-01
An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner. More importantly, XML allows not just the storage of data but also a description of what each data item is. The XML file thus contains information useful for rendering the data by other applications, and the program then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
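The generator pattern is easy to illustrate. The sketch below, in Python rather than the tool's own implementation, reads a small XML input specification and emits a matching C++ struct; the tag and attribute names are invented for illustration and are not XML-to-C's actual schema.

    import xml.etree.ElementTree as ET

    spec = """<input>
      <param name="temperature" type="double"/>
      <param name="num_steps" type="int"/>
    </input>"""

    root = ET.fromstring(spec)
    fields = "\n".join(f"  {p.get('type')} {p.get('name')};"
                       for p in root.findall("param"))
    print(f"struct InputData {{\n{fields}\n}};")
    # Emits:
    # struct InputData {
    #   double temperature;
    #   int num_steps;
    # };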
NASA Astrophysics Data System (ADS)
Kwon, O.; Kim, W.; Kim, J.
2017-12-01
Recently, construction of subsea tunnels has increased globally. For the safe construction of a subsea tunnel, identifying the geological structure, including faults, at the design and construction stages is critically important. Unlike tunnels on land, however, it is very difficult to obtain data on geological structure because of the limits of geological surveys at sea. This study addresses these difficulties by developing technology to identify the geological structure of the seabed automatically using echo sounding data. When investigating a potential site for a deep subsea tunnel, borehole and geophysical investigations face technical and economic limits. In contrast, echo sounding data are easily obtainable, and their reliability is high compared with the above approaches. This study aims to develop an algorithm that identifies large-scale geological structures of the seabed using a geostatistical approach, building on the structural-geology principle that topographic features indicate geological structure. The basic concept of the algorithm is outlined as follows: (1) convert the seabed topography to grid data using echo sounding data, (2) apply a moving window of optimal size to the grid data, (3) estimate the spatial statistics of the grid data in the window area, (4) set a percentile standard for the spatial statistics, (5) display the values satisfying the standard on the map, and (6) visualize the geological structure on the map. The important elements in this study include the optimal size of the moving window, the choice of spatial statistics, and the determination of the optimal percentile standard. To determine these optimal elements, numerous simulations were implemented, and a user program based on R was eventually developed using the resulting analysis algorithm. The user program was designed to identify the variations of various spatial statistics; because the type of spatial statistic and the percentile standard can be easily designated, the geological structure can be readily analysed as the spatial statistics vary. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government. (Project Number: 13 Construction Research T01)
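Steps (2)-(5) of the outlined algorithm amount to a moving-window statistic with a percentile cutoff. The study implemented this in R; the Python sketch below is a hedged rendering of the same idea, with the window size, the choice of standard deviation as the statistic, and the 90th-percentile threshold all illustrative assumptions.

    import numpy as np

    def flag_structures(grid, win=5, pct=90):
        # grid: 2D array of gridded depths from echo sounding data.
        h, w = grid.shape
        r = win // 2
        stat = np.full(grid.shape, np.nan)
        for i in range(r, h - r):
            for j in range(r, w - r):
                window = grid[i - r:i + r + 1, j - r:j + r + 1]
                stat[i, j] = np.nanstd(window)   # spatial statistic per window
        # Flag cells whose statistic exceeds the percentile standard.
        return stat >= np.nanpercentile(stat, pct)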
Data-directed RNA secondary structure prediction using probabilistic modeling
Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon
2016-01-01
Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate the performance of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549
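The likelihood-based pseudo-energy idea can be sketched as a log-ratio score: how much more probable the observed probing signal is under a "paired" model than an "unpaired" one. The Gaussian reactivity models and parameter values below are placeholders we chose for illustration; the framework estimates such distributions from data and supports far more general structural contexts.

    import numpy as np

    def pseudo_energy(r, mu_p=0.2, sd_p=0.15, mu_u=0.9, sd_u=0.5):
        # Negative log-likelihood ratio: negative (favorable) when the
        # reactivity r looks like a paired nucleotide, positive otherwise.
        def loglik(x, mu, sd):
            return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
        return -(loglik(r, mu_p, sd_p) - loglik(r, mu_u, sd_u))

    print(pseudo_energy(0.1), pseudo_energy(1.2))  # low vs high reactivity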
Long-Wavelength X-Ray Diffraction and Its Applications in Macromolecular Crystallography.
Weiss, Manfred S
2017-01-01
For many years, diffraction experiments in macromolecular crystallography at X-ray wavelengths longer than that of Cu-Kα (1.54 Å) have been largely underappreciated. Effects caused by increased X-ray absorption make these experiments more difficult than standard diffraction experiments at short wavelengths. However, because the anomalous scattering of many biologically relevant atoms also increases, important additional structural information can be obtained. This information, in turn, can be used for phase determination, for substructure identification, in molecular replacement approaches, as well as in structure refinement. This chapter reviews the possibilities and the difficulties associated with such experiments, and it provides a short description of two macromolecular crystallography synchrotron beam lines dedicated to long-wavelength X-ray diffraction experiments.
ERIC Educational Resources Information Center
Allen, Daniel N.; Thaler, Nicholas S.; Barchard, Kimberly A.; Vertinski, Mary; Mayfield, Joan
2012-01-01
The Comprehensive Trail Making Test (CTMT) is a relatively new version of the Trail Making Test that has a number of appealing features, including a large normative sample that allows raw scores to be converted to standard "T" scores adjusted for age. Preliminary validity information suggests that CTMT scores are sensitive to brain…
ERIC Educational Resources Information Center
Kirtland, Monika
1981-01-01
Outlines a methodology for standardizing word lists of subject-related fields using a macrothesaurus which provides basic classification structure and terminology for the subject at large and which adapts to the specific needs of its subfields. The example of the Cancer Information Thesaurus (CIT) is detailed. Six references are listed. (FM)
ERIC Educational Resources Information Center
Johnson, Marcus Edward
2017-01-01
Using an analytic informed by Nietzschean genealogy and systems theory, this paper explains how two conceptual structures (the emancipatory binary and the progressive triad), along with standard citation practices in academic journal writing, function to sustain and regenerate a progressive perspective within social studies education scholarship.…
ERIC Educational Resources Information Center
Dill, David D.
2010-01-01
What have we learned from 25 years of experience with external academic quality assurance that can help design more effective framework conditions for assuring academic standards? The key elements appear to be the structure and means of evaluating national academic quality assurance agencies, the nature of academic quality information mandated by…
Bernard R. Parresol; Joe H. Scott; Anne Andreu; Susan Prichard; Laurie Kurth
2012-01-01
Currently geospatial fire behavior analyses are performed with an array of fire behavior modeling systems such as FARSITE, FlamMap, and the Large Fire Simulation System. These systems currently require standard or customized surface fire behavior fuel models as inputs that are often assigned through remote sensing information. The ability to handle hundreds or...
ERIC Educational Resources Information Center
Ledwell, Katherine; Oyler, Celia
2016-01-01
We examine edTPA (a teacher performance assessment) implementation at one private university during the first year that our state required this exam for initial teaching certification. Using data from semi-structured interviews with 19 teacher educators from 12 programs as well as public information on edTPA pass rates, we explore whether the…
ERIC Educational Resources Information Center
Gifford, Bernard R.
The purpose of this study was to develop a plan for alleviating problems in the collection, processing, and dissemination of educational data as they affect the New York City Board of Education's information requirements. The Data Base Management concept was used to analyze three topics: administration, structure, and standards. The study found that…
Proceedings of the workshop on structural composites and nondestructive evaluation
NASA Technical Reports Server (NTRS)
1974-01-01
The problems and opportunities in the nondestructive evaluation of composites are covered in formal papers and a summary of the discussion which took place at a workshop held in Dayton on February 13-14, 1974. The recommendations arrived at by an NMAB committee on flaw detection, composite strength, standardization and design information, and research on composite degradation are stated.
The Use of Nominal Group Technique: Case Study in Vietnam
ERIC Educational Resources Information Center
Dang, Vi Hoang
2015-01-01
The Nominal Group Technique (NGT) is a structured process for gathering information from a group. The technique was first described in the early 1970s and has since become a widely used standard for facilitating working groups. The NGT is effective for generating large numbers of creative new ideas and for group priority setting. This article reports on a…
Using Multispectral False Color Imaging to Characterize Tropical Cyclone Structure and Environment
NASA Astrophysics Data System (ADS)
Cossuth, J.; Bankert, R.; Richardson, K.; Surratt, M. L.
2016-12-01
The Naval Research Laboratory's (NRL) tropical cyclone (TC) web page (http://www.nrlmry.navy.mil/TC.html) has provided nearly two decades of near real-time access to TC-centric images and products for TC forecasters and enthusiasts around the world. In particular, the microwave imager and sounder data featured on this site provide crucial internal storm structure information, revealing hydrometeor structure and key details beyond the cloud-top information available from visible and infrared channels. Towards improving TC analysis techniques and advancing the utility of the NRL TC webpage resource, new research efforts are presented. This work demonstrates results as well as the methodology used to develop new automated, objective satellite-based TC structure and intensity guidance and enhanced data-fusion imagery products that aim to bolster and streamline TC forecast operations. This presentation focuses on the creation and interpretation of false-color RGB composite imagery that leverages the different emissive and scattering properties of atmospheric ice, liquid, and vapor water, as well as ocean surface roughness, as seen by microwave radiometers. Specifically, a combination of near-real-time data and a standardized digital database of global TCs in microwave imagery from 1987-2012 is employed as a climatology of TC structures. The broad range of TC structures, from pinhole eyes through multiple eyewall configurations, is characterized as resolved by passive microwave sensors. The extraction of these characteristic features from historical data also lends itself to statistical analysis. For example, histograms of brightness temperature distributions allow a rigorous examination of how structural features are conveyed in image products, allowing a better representation of colors and breakpoints as they relate to physical features. Such climatological work also suggests steps to better inform the near-real-time application of upcoming satellite datasets to TC analyses.
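A false-color composite of this kind is, at its core, a per-channel normalization mapped to R, G, and B. The sketch below assumes three microwave brightness-temperature arrays and illustrative value ranges; the actual channel assignments and breakpoints in the NRL products are the subject of the climatological tuning described above and are not reproduced here.

    import numpy as np

    def rgb_composite(tb37, tb89h, tb89v):
        # Normalize each channel to [0, 1] over an assumed physical range,
        # then stack into an RGB image (height x width x 3).
        def norm(x, lo, hi):
            return np.clip((x - lo) / (hi - lo), 0.0, 1.0)
        return np.dstack([norm(tb89h, 180.0, 280.0),
                          norm(tb89v, 180.0, 280.0),
                          norm(tb37, 200.0, 290.0)])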
Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna
2013-06-01
Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles, given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with an SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well, with an accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic, and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
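A hedged sketch of pool-based active learning with an SVM (uncertainty sampling): at each step, label the pool item closest to the current decision boundary. Feature extraction for Argumentative Zoning is omitted; the random matrix below stands in for featurized sentences, and the loop and query strategy are a generic rendering, not necessarily the paper's exact setup.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(1000, 20))
    y_pool = (X_pool[:, 0] > 0).astype(int)        # stand-in oracle labels

    pos = np.where(y_pool == 1)[0][:5]
    neg = np.where(y_pool == 0)[0][:5]
    labeled = list(pos) + list(neg)                # seed set with both classes

    for _ in range(40):                            # 40 queries for the sketch
        clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
        margin = np.abs(clf.decision_function(X_pool))
        margin[labeled] = np.inf                   # never re-query labeled items
        labeled.append(int(np.argmin(margin)))     # most uncertain example next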
Exploring information provision in reconstructive breast surgery: A qualitative study.
Potter, Shelley; Mills, Nicola; Cawthorn, Simon; Wilson, Sherif; Blazeby, Jane
2015-12-01
Women considering reconstructive breast surgery (RBS) require adequate information to make informed treatment decisions. This study explored patients' and health professionals' (HPs) perceptions of the adequacy of information provided for decision-making in RBS. Semi-structured interviews with a purposive sample of patients who had undergone RBS and HPs providing specialist care explored participants' experiences of information provision prior to RBS. Professionals reported providing standardised verbal, written and photographic information about the process and outcomes of surgery. Women, by contrast, reported varying levels of information provision. Some felt fully-informed but others perceived they had received insufficient information about available treatment options or possible outcomes of surgery to make an informed decision. Women need adequate information to make informed decisions about RBS and current practice may not meet women's needs. Minimum agreed standards of information provision, especially about alternative types of reconstruction, are recommended to improve decision-making in RBS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Insights to primitive replication derived from structures of small oligonucleotides
NASA Technical Reports Server (NTRS)
Smith, G. K.; Fox, G. E.
1995-01-01
Available information on the structure of small oligonucleotides is surveyed. It is observed that even small oligomers typically exhibit defined structures over a wide range of pH and temperature. These structures rely on a plethora of non-standard base-base interactions in addition to the traditional Watson-Crick pairings. Stable duplexes, though typically antiparallel, can be parallel or staggered and perfect complementarity is not essential. These results imply that primitive template directed reactions do not require high fidelity. Hence, the extensive use of Watson-Crick complementarity in genes rather than being a direct consequence of the primitive condensation process, may instead reflect subsequent selection based on the advantage of accuracy in maintaining the primitive genetic machinery once it arose.
3D topography of biologic tissue by multiview imaging and structured light illumination
NASA Astrophysics Data System (ADS)
Liu, Peng; Zhang, Shiwu; Xu, Ronald
2014-02-01
Obtaining three-dimensional (3D) information about biologic tissue is important in many medical applications. This paper presents two methods for reconstructing the 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by both the multiview imaging and the structured light illumination methods and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.
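For the multiview (stereo) side, the geometric core is the disparity-to-depth relation depth = f·B/d. A tiny sketch, with focal length, baseline, and disparity values chosen purely for illustration:

    def depth_from_disparity(f_px, baseline_mm, disparity_px):
        # Standard pinhole stereo relation: depth = focal * baseline / disparity.
        return f_px * baseline_mm / disparity_px

    print(depth_from_disparity(f_px=1400.0, baseline_mm=60.0, disparity_px=35.0))
    # -> 2400.0 mm for these illustrative numbers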
Sequence-similar, structure-dissimilar protein pairs in the PDB.
Kosloff, Mickey; Kolodny, Rachel
2008-05-01
It is often assumed that in the Protein Data Bank (PDB), two proteins with similar sequences will also have similar structures. Accordingly, it has proved useful to develop subsets of the PDB from which "redundant" structures have been removed, based on a sequence-based criterion for similarity. Similarly, when predicting protein structure using homology modeling, if a template structure for modeling a target sequence is selected by sequence alone, this implicitly assumes that all sequence-similar templates are equivalent. Here, we show that this assumption is often not correct and that standard approaches to create subsets of the PDB can lead to the loss of structurally and functionally important information. We have carried out sequence-based structural superpositions and geometry-based structural alignments of a large number of protein pairs to determine the extent to which sequence similarity ensures structural similarity. We find many examples where two proteins that are similar in sequence have structures that differ significantly from one another. The source of the structural differences usually has a functional basis. The number of such protein pairs that are identified and the magnitude of the dissimilarity depend on the approach that is used to calculate the differences; in particular, sequence-based structure superpositioning will identify a larger number of structurally dissimilar pairs than geometry-based structural alignments. When two sequences can be aligned in a statistically meaningful way, sequence-based structural superpositioning provides a meaningful measure of structural differences. This approach and geometry-based structure alignments reveal somewhat different information, and one or the other might be preferable in a given application. Our results suggest that in some cases, notably homology modeling, the common use of nonredundant datasets, culled from the PDB based on sequence, may mask important structural and functional information. We have established a database of sequence-similar, structurally dissimilar protein pairs that will help address this problem (http://luna.bioc.columbia.edu/rachel/seqsimstrdiff.htm).
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in raster and vector form, linked with non-spatial information. Comprehensive data are required to handle emergency situations in different phases. The database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle particular situations such as floods, cyclones, forest fires, earthquakes, landslides, and droughts. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of the required resolution for visualisation and analysis in disaster management. The database must therefore be interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as data availability, sharing policies, compatible geospatial standards, and data interoperability. Therefore, to facilitate using, sharing, and integrating spatial data, there is a need to define standards for building emergency database systems. These include i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) database organisation mechanisms covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale, multi-source data and for effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine
2016-08-18
We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
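Because the onion decomposition is defined purely in terms of the standard k-core peeling algorithm, it is straightforward to sketch. The following is a minimal Python implementation (assuming the networkx library), not the authors' reference code.

    from collections import Counter
    import networkx as nx

    def onion_decomposition(G):
        """Return (coreness, layer) dicts: layer records the peeling stage
        at which each vertex is removed during the standard k-core algorithm."""
        H = G.copy()
        core, layer = {}, {}
        k = 0
        stage = 0
        while len(H) > 0:
            k = max(k, min(d for _, d in H.degree()))   # current core value
            stage += 1                                   # one onion layer per pass
            peel = [v for v, d in H.degree() if d <= k]
            for v in peel:
                core[v], layer[v] = k, stage
            H.remove_nodes_from(peel)
        return core, layer

    G = nx.karate_club_graph()
    core, layer = onion_decomposition(G)
    n = G.number_of_nodes()
    spectrum = {l: c / n for l, c in sorted(Counter(layer.values()).items())}
    print(spectrum)   # fraction of vertices removed at each peeling stage

As the abstract notes, this costs no more than the k-core decomposition itself: the layers fall out of the same peeling passes.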
Garbarski, Dana; Schaeffer, Nora Cate; Dykema, Jennifer
2016-08-01
"Rapport" has been used to refer to a range of positive psychological features of an interaction -- including a situated sense of connection or affiliation between interactional partners, comfort, willingness to disclose or share sensitive information, motivation to please, or empathy. Rapport could potentially benefit survey participation and response quality by increasing respondents' motivation to participate, disclose, or provide accurate information. Rapport could also harm data quality if motivation to ingratiate or affiliate caused respondents to suppress undesirable information. Some previous research suggests that motives elicited when rapport is high conflict with the goals of standardized interviewing. We examine rapport as an interactional phenomenon, attending to both the content and structure of talk. Using questions about end-of-life planning in the 2003-2005 wave of the Wisconsin Longitudinal Study, we observe that rapport consists of behaviors that can be characterized as dimensions of responsiveness by interviewers and engagement by respondents. We identify and describe types of responsiveness and engagement in selected question-answer sequences and then devise a coding scheme to examine their analytic potential with respect to the criterion of future study participation. Our analysis suggests that responsive and engaged behaviors vary with respect to the goals of standardization-some conflict with these goals, while others complement them.
Digital information management: a progress report on the National Digital Mammography Archive
NASA Astrophysics Data System (ADS)
Beckerman, Barbara G.; Schnall, Mitchell D.
2002-05-01
Digital mammography creates very large images, which require new approaches to storage, retrieval, management, and security. The National Digital Mammography Archive (NDMA) project, funded by the National Library of Medicine (NLM), is developing a limited testbed that demonstrates the feasibility of a national breast imaging archive, with access to prior exams; patient information; computer aids for image processing, teaching, and testing tools; and security components to ensure confidentiality of patient information. There will be significant benefits to patients and clinicians, in terms of accessible data with which to make a diagnosis, and to researchers performing studies on breast cancer. Mammography was chosen for the project because standards were already available for digital images, report formats, and structures. New standards have been created for communications protocols between the devices, the front-end portal, and the archive. NDMA is a distributed computing concept that provides for sharing and access across corporate entities. Privacy, auditing, and patient consent are all integrated into the system. Five sites, the Universities of Pennsylvania, Chicago, North Carolina and Toronto, and BWXT Y12, are connected through high-speed networks to demonstrate functionality. We will review progress, including technical challenges, innovative research and development activities, standards and protocols being implemented, and potential benefits to healthcare systems.
An Information System for European culture collections: the way forward.
Casaregola, Serge; Vasilenko, Alexander; Romano, Paolo; Robert, Vincent; Ozerskaya, Svetlana; Kopf, Anna; Glöckner, Frank O; Smith, David
2016-01-01
Culture collections contain indispensable information about the microorganisms preserved in their repositories, such as taxonomical descriptions, origins, physiological and biochemical characteristics, bibliographic references, etc. However, information currently accessible in databases rarely adheres to common standard protocols. The resultant heterogeneity between culture collections, in terms of both content and format, notably hampers microorganism-based research and development (R&D). The optimized exploitation of these resources thus requires standardized, and simplified, access to the associated information. To this end, and in the interest of supporting R&D in the fields of agriculture, health and biotechnology, a pan-European distributed research infrastructure, MIRRI, including over 40 public culture collections and research institutes from 19 European countries, was established. A prime objective of MIRRI is to unite and provide universal access to the fragmented, and untapped, resources, information and expertise available in European public collections of microorganisms; a key component of which is to develop a dynamic Information System. For the first time, both culture collection curators as well as their users have been consulted and their feedback, concerning the needs and requirements for collection databases and data accessibility, utilised. Users primarily noted that databases were not interoperable, thus rendering a global search of multiple databases impossible. Unreliable or out-of-date and, in particular, non-homogenous, taxonomic information was also considered to be a major obstacle to searching microbial data efficiently. Moreover, complex searches are rarely possible in online databases thus limiting the extent of search queries. Curators also consider that overall harmonization-including Standard Operating Procedures, data structure, and software tools-is necessary to facilitate their work and to make high-quality data easily accessible to their users. Clearly, the needs of culture collection curators coincide with those of users on the crucial point of database interoperability. In this regard, and in order to design an appropriate Information System, important aspects on which the culture collection community should focus include: the interoperability of data sets with the ontologies to be used; setting best practice in data management, and the definition of an appropriate data standard.
Spaner, Donna; Caraiscos, Valerie B; Muystra, Christina; Furman, Margaret Lynn; Zaltz-Dubin, Jodi; Wharton, Marilyn; Whitehead, Katherine
Optimal care for patients in the palliative care setting requires effective clinical teamwork. Communication may be challenging for health-care workers from different disciplines. Daily rounds are one way for clinical teams to share information and develop care plans for patients. The objective of this initiative was to improve the structure and process of daily palliative care rounds by incorporating the use of standardized tools and improved documentation into the meeting. We chose a quality improvement (QI) approach to address this initiative. Our aims were to increase the use of assessment tools when discussing patient care in rounds and to improve the documentation and accessibility of important information in the health record, including goals of care. This QI initiative used a preintervention and postintervention comparison of the outcome measures of interest. The initiative was tested in a palliative care unit (PCU) over a 22-month period from April 2014 to January 2016. Participants were clinical staff in the PCU. Data collected after the completion of several plan-do-study-act cycles showed increased use and incorporation of the Edmonton Symptom Assessment System and Palliative Performance Scale into patient care discussions as well as improvement in inclusion of goals of care into the patient plan of care. Our findings demonstrate that the effectiveness of daily palliative care rounds can be improved by incorporating the use of standard assessment tools and changes into the meeting structure to better focus and direct patient care discussions.
An approach for software-driven and standard-based support of cross-enterprise tumor boards.
Mangesius, Patrick; Fischer, Bernd; Schabetsberger, Thomas
2015-01-01
For tumor boards, the networking of different medical disciplines' expertise continues to gain importance. However, interdisciplinary tumor boards spread across several institutions are rarely supported by information technology tools today. The aim of this paper is to point out an approach for a tumor board management system prototype. For analyzing the requirements, an incremental process was used. The requirements were surveyed using Informal Conversational Interview and documented with Use Case Diagrams defined by the Unified Modeling Language (UML). Analyses of current EHR standards were conducted to evaluate technical requirements. Functional and technical requirements of clinical conference applications were evaluated and documented. In several steps, workflows were derived and application mockups were created. Although there is a vast amount of common understanding concerning how clinical conferences should be conducted and how their workflows should be structured, these are hardly standardized, neither on a functional nor on a technical level. This results in drawbacks for participants and patients. Using modern EHR technologies based on profiles such as IHE Cross Enterprise document sharing (XDS), these deficits could be overcome.
Magnus, Manya; Franks, Julie; Griffith, Sam; Arnold, Michael P.; Goodman, Krista; Wheeler, Darrell P.
2014-01-01
Context: HIV/AIDS in the United States continues to primarily impact men who have sex with men (MSM), with disproportionately high rates among black MSM. Objective: The purpose of this study was to identify factors that may influence engagement and retention of black MSM in HIV research. Design and Participants: This was a qualitative evaluation of study implementation within a multisite, prospective, observational study (HIV Prevention Trials Network 061, BROTHERS) that enrolled 1553 black MSM in 6 cities throughout the United States. Data collection for this evaluation included a written, structured survey collected from each of the sites describing site characteristics including staff and organizational structure, reviews of site standard operating procedures, and work plans; semistructured key informant interviews were conducted with site coordinators to characterize staffing, site-level factors facilitating or impeding effective community engagement, study recruitment, and retention. Data from completed surveys and site standard operating procedures were collated, and notes from key informant interviews were thematically coded for content by 2 independent reviewers. Results: Several key themes emerged from the data, including the importance of inclusion of members of the community being studied as staff, institutional hiring practices that support inclusive staffing, cultivating a supportive working environment for study implementation, and ongoing relationships between research institutions and community. Conclusions: This study underscores the importance of staffing in implementing research with black MSM. Investigators should consider how staffing and organizational structures affect implementation during study design and when preparing to initiate study activities. Ongoing monitoring of community engagement can inform and improve methods for engagement and ensure cultural relevance while removing barriers for participation. PMID:24406940
Magnus, Manya; Franks, Julie; Griffith, Sam; Arnold, Michael P; Goodman, Krista; Wheeler, Darrell P
2014-01-01
HIV/AIDS in the United States continues to primarily impact men who have sex with men (MSM), with disproportionately high rates among black MSM. The purpose of this study was to identify factors that may influence engagement and retention of black MSM in HIV research. This was a qualitative evaluation of study implementation within a multisite, prospective, observational study (HIV Prevention Trials Network 061, BROTHERS) that enrolled 1553 black MSM in 6 cities throughout the United States. Data collection for this evaluation included a written, structured survey collected from each of the sites describing site characteristics including staff and organizational structure, reviews of site standard operating procedures, and work plans; semistructured key informant interviews were conducted with site coordinators to characterize staffing, site-level factors facilitating or impeding effective community engagement, study recruitment, and retention. Data from completed surveys and site standard operating procedures were collated, and notes from key informant interviews were thematically coded for content by 2 independent reviewers. Several key themes emerged from the data, including the importance of inclusion of members of the community being studied as staff, institutional hiring practices that support inclusive staffing, cultivating a supportive working environment for study implementation, and ongoing relationships between research institutions and community. This study underscores the importance of staffing in implementing research with black MSM. Investigators should consider how staffing and organizational structures affect implementation during study design and when preparing to initiate study activities. Ongoing monitoring of community engagement can inform and improve methods for engagement and ensure cultural relevance while removing barriers for participation.
NASA Astrophysics Data System (ADS)
Vaitheeswaran, G.; Kanchana, V.; Zhang, Xinxin; Ma, Yanming; Svane, A.; Christensen, N. E.
2016-08-01
A detailed study of the high-pressure structural properties, lattice dynamics and band structures of perovskite structured fluorides KZnF3, CsCaF3 and BaLiF3 has been carried out by means of density functional theory. The calculated structural properties including elastic constants and equation of state agree well with available experimental information. The phonon dispersion curves are in good agreement with available experimental inelastic neutron scattering data. The electronic structures of these fluorides have been calculated using the quasi particle self-consistent GW approximation. The GW calculations reveal that all the fluorides studied are wide band gap insulators, and the band gaps are significantly larger than those obtained by the standard local density approximation, thus emphasizing the importance of quasi particle corrections in perovskite fluorides.
Vaitheeswaran, G; Kanchana, V; Zhang, Xinxin; Ma, Yanming; Svane, A; Christensen, N E
2016-08-10
A detailed study of the high-pressure structural properties, lattice dynamics and band structures of perovskite structured fluorides KZnF3, CsCaF3 and BaLiF3 has been carried out by means of density functional theory. The calculated structural properties including elastic constants and equation of state agree well with available experimental information. The phonon dispersion curves are in good agreement with available experimental inelastic neutron scattering data. The electronic structures of these fluorides have been calculated using the quasi particle self-consistent GW approximation. The GW calculations reveal that all the fluorides studied are wide band gap insulators, and the band gaps are significantly larger than those obtained by the standard local density approximation, thus emphasizing the importance of quasi particle corrections in perovskite fluorides.
Implementation of UML Schema to RDBM
NASA Astrophysics Data System (ADS)
Nagni, M.; Ventouras, S.; Parton, G.
2012-04-01
Multiple disciplines - especially those within the earth and physical sciences, and increasingly those within social science and medical fields - require Geographic Information (GI), i.e. information concerning phenomena implicitly or explicitly associated with a location relative to the Earth [1]. Therefore geographic datasets are increasingly being shared, exchanged and frequently used for purposes other than those for which they were originally intended. The ISO Technical Committee 211 (ISO/TC 211) together with the Open Geospatial Consortium (OGC) provide a series of standards and guidelines for developing application schemas which should: a) capture the relevant conceptual aspects of the data involved; and b) be sufficient to satisfy previously defined use-cases of a specific or cross-domain concern. In addition, the Hollow World technology offers an accessible and industry-standardised methodology for creating and editing Application Schema UML models which conform to international standards for interoperable GI [2]. We present a technology which seamlessly transforms an Application Schema UML model to a relational database model (RDBM). This technology, using the same UML information model, complements the XML transformation of an information model produced by the FullMoon tool [2]. In preparation for the generation of an RDBM, the UML model is first mapped to a collection of OO classes and relationships. Any external dependencies that exist are then resolved through the same mechanism. However, an RDBM does not support a hierarchical (relational) data structure - a feature that may be required by UML models. Previous approaches have addressed this problem through the use of nested sets or an adjacency list to represent such structure. Our strategy addresses the hierarchical data structure issue, whether of single or multiple inheritance, by hiding a delegation pattern within an OO class. This permits the object-relational mapping (ORM) software used to generate the RDBM to easily map the class into the RDBM. In other words, the particular structure of the resulting OO class may expose a "composition-like aspect" to the ORM whilst maintaining an "inherited-like aspect" for use within an OO program. This methodology has been used to implement a software application that manages the new CEDA metadata model, which is based on MOLES 3.4, Python, Django and SQLAlchemy.
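As an illustration of the delegation idea described above, here is a minimal SQLAlchemy sketch in which a subclass row holds a foreign key to its parent row (composition-like for the ORM) while Python attribute lookup falls through to the hidden parent object (inheritance-like for the program). The class and column names are invented for illustration and are not the CEDA/MOLES schema.

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class FeatureBase(Base):            # hypothetical UML parent class
        __tablename__ = "feature_base"
        id = Column(Integer, primary_key=True)
        description = Column(String)

    class Observation(Base):            # hypothetical UML subclass
        __tablename__ = "observation"
        id = Column(Integer, primary_key=True)
        result = Column(String)
        # composition-like aspect: the ORM just sees a foreign-key relationship
        base_id = Column(Integer, ForeignKey("feature_base.id"))
        _base = relationship(FeatureBase)

        def __getattr__(self, name):
            # inheritance-like aspect: unresolved attributes are delegated
            # to the hidden parent object (guard against internal names)
            if name.startswith("_"):
                raise AttributeError(name)
            return getattr(self._base, name)

    obs = Observation(result="42", _base=FeatureBase(description="a delegated field"))
    print(obs.description)   # resolved via delegation, as if inherited

The relational schema stays flat (two tables joined by a foreign key), which any ORM can generate, while the OO code still reads as if Observation inherited from FeatureBase.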
Automated global structure extraction for effective local building block processing in XCS.
Butz, Martin V; Pelikan, Martin; Llorà, Xavier; Goldberg, David E
2006-01-01
Learning Classifier Systems (LCSs), such as the accuracy-based XCS, evolve distributed problem solutions represented by a population of rules. During evolution, features are specialized, propagated, and recombined to provide increasingly accurate subsolutions. Recently, it was shown that, as in conventional genetic algorithms (GAs), some problems require efficient processing of subsets of features to find problem solutions efficiently. In such problems, standard variation operators of genetic and evolutionary algorithms used in LCSs suffer from potential disruption of groups of interacting features, resulting in poor performance. This paper introduces efficient crossover operators to XCS by incorporating techniques derived from competent GAs: the extended compact GA (ECGA) and the Bayesian optimization algorithm (BOA). Instead of simple crossover operators such as uniform crossover or one-point crossover, ECGA or BOA-derived mechanisms are used to build a probabilistic model of the global population and to generate offspring classifiers locally using the model. Several offspring generation variations are introduced and evaluated. The results show that it is possible to achieve performance similar to runs with an informed crossover operator that is specifically designed to yield ideal problem-dependent exploration, exploiting provided problem structure information. Thus, we create the first competent LCSs, XCS/ECGA and XCS/BOA, that detect dependency structures online and propagate corresponding lower-level dependency structures effectively without any information about these structures given in advance.
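The model-based offspring generation can be illustrated with a toy sketch: instead of uniform or one-point crossover, whole linkage groups are sampled intact from the parents, so groups of interacting features are never disrupted. This is only a schematic of the idea (the real ECGA and BOA mechanisms learn the groups or a Bayesian network from the population); the names are invented.

    import random

    def group_wise_offspring(parents, linkage_groups):
        """Build one offspring by copying each linkage group intact from a
        randomly chosen parent, so interacting features stay together."""
        child = [None] * len(parents[0])
        for group in linkage_groups:          # e.g. [[0, 3], [1, 2], [4]]
            donor = random.choice(parents)
            for locus in group:
                child[locus] = donor[locus]
        return child

    parents = [[1, 0, 0, 1, 1], [0, 1, 1, 0, 0]]
    print(group_wise_offspring(parents, [[0, 3], [1, 2], [4]]))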
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturgeon, J I
This volume relates primarily to the Time-of-Day rates standard, PURPA IB(d)3, and deals with the content and methods of providing rate and conservation information to customers when Time-of-Day rates are used. Information to customers in the Demonstration and Pilot Projects fell mainly into four categories: administrative communications; explanations of new rate structures; information and advice on load management; and facts, recommendations and encouragements about energy conservation and end-use improvement. Administrative communications were about such matters as the existence of Projects, their funding, their periods of performance, the selection of their test customers, conditions of participation, procedural changes during the tests, and the time and conditions of ending the tests. These communications were important to good customer cooperation. All Demonstration Projects devoted considerable effort to the crucial task of clearly explaining the rationale of Time-of-Use (TOU) pricing and the test rate structures. The Projects then presented the concept of TOU pricing as a means of (a) fairly charging customers the true cost of their electricity and (b) rewarding them for shifting consumption to times when costs are less. For the most part, Demonstration Projects gave specific information on the individual customer's own rate structure and none on any others that were under test. The information was presented in face-to-face interviews, group presentations, television, radio, and print media, and traveling exhibits. The results are evaluated. (LCL)
Archetype-based semantic integration and standardization of clinical data.
Moner, David; Maldonado, Jose A; Bosca, Diego; Fernandez, Jesualdo T; Angulo, Carlos; Crespo, Pere; Vivancos, Pedro J; Robles, Montserrat
2006-01-01
One of the basic needs of any healthcare professional is to be able to access the clinical information of patients in an understandable and normalized way. The lifelong clinical information of any person supported by electronic means constitutes his/her Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. The Dual Model architecture has appeared as a new proposal for maintaining a homogeneous representation of the EHR with a clear separation between information and knowledge. Information is represented by a Reference Model, which describes common data structures with minimal semantics. Knowledge is specified by archetypes, which are formal representations of clinical concepts built upon a particular Reference Model. This kind of architecture was originally conceived for the implementation of new clinical information systems, but archetypes can also be used for integrating data from existing, non-normalized systems, adding at the same time a semantic meaning to the integrated data. In this paper we explain the possible use of a Dual Model approach for semantic integration and standardization of heterogeneous clinical data sources and present LinkEHR-Ed, a tool for developing archetypes as elements for integration purposes. LinkEHR-Ed has been designed to be easily used by the two main participants in the creation of archetypes for clinical data integration: the health domain expert and the information technologies domain expert.
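The separation between Reference Model and archetypes can be caricatured in a few lines of Python. The structures and constraint format below are invented for illustration and are far simpler than real archetype languages such as ADL.

    class Element:
        """Reference Model: a generic data structure with minimal semantics."""
        def __init__(self, name, value):
            self.name, self.value = name, value

    # "archetype": a formal constraint on Reference Model instances
    blood_pressure_archetype = {
        "systolic": (0, 300),    # allowed mmHg range (illustrative)
        "diastolic": (0, 200),
    }

    def conforms(elements, archetype):
        """True if every Element satisfies the archetype's range constraints."""
        for el in elements:
            bounds = archetype.get(el.name)
            if bounds is None or not (bounds[0] <= el.value <= bounds[1]):
                return False
        return True

    record = [Element("systolic", 120), Element("diastolic", 80)]
    print(conforms(record, blood_pressure_archetype))   # True

The point of the architecture is that the generic Element classes stay fixed while the clinical knowledge (the archetype) can be authored and revised separately, which is what makes archetypes usable as integration elements.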
A common type system for clinical natural language processing
2013-01-01
Background: One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results: We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions: We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types. PMID:23286462
A general natural-language text processor for clinical radiology.
Friedman, C; Alderson, P O; Austin, J H; Cimino, J J; Johnson, S B
1994-01-01
OBJECTIVE: Development of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms. DESIGN: The natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain. MEASUREMENTS: The impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision. RESULTS: Without training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision. PMID:7719797
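The three-phase organization (parse, regularize, encode) can be sketched as follows. The pattern, phrase table, and codes are toy inventions, not the processor's actual knowledge sources.

    import re

    # phase 1 knowledge: a toy semantic pattern standing in for the grammar
    PATTERN = re.compile(r"(?P<finding>effusion|opacity) in the (?P<site>[a-z ]+?)[.\s]*$")
    # phase 2 knowledge: multi-word phrase regularization
    REGULARIZE = {"lll": "left lower lobe", "rt lung": "right lung"}
    # phase 3 knowledge: mapping to a controlled vocabulary (codes invented)
    ENCODE = {"effusion": "CODE-001", "opacity": "CODE-002"}

    def process(sentence):
        m = PATTERN.search(sentence.lower())            # parsing
        if not m:
            return None
        finding, site = m.group("finding"), m.group("site").strip()
        site = REGULARIZE.get(site, site)               # regularization
        return {"finding": finding,
                "code": ENCODE[finding],                # encoding
                "site": site}

    print(process("Effusion in the LLL."))
    # {'finding': 'effusion', 'code': 'CODE-001', 'site': 'left lower lobe'}

Each phase is driven by its own replaceable knowledge source, which is what lets the same processor be retargeted to domains other than radiology.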
NASA Astrophysics Data System (ADS)
Bastos, Isadora T. S.; Costa, Fanny N.; Silva, Tiago F.; Barreiro, Eliezer J.; Lima, Lídia M.; Braz, Delson; Lombardo, Giuseppe M.; Punzo, Francesco; Ferreira, Fabio F.; Barroso, Regina C.
2017-10-01
LASSBio-1755 is a new cycloalkyl-N-acylhydrazone parent compound designed for the development of derivatives with antinociceptive and anti-inflammatory activities. Although single crystal X-ray diffraction is considered the gold standard in structure determination, we successfully used X-ray powder diffraction data in the structural determination of the newly synthesized compounds, in order to overcome the bottleneck caused by the difficulties experienced in harvesting good quality single crystals. We thereby unequivocally assigned the relative configuration (E) to the imine double bond and an s-cis conformation to the amide function of the N-acylhydrazone compound. These features are confirmed by a computational analysis performed on the basis of molecular dynamics calculations, which extends not only to the structural characteristics but also to the anisotropic atomic displacement parameters, further information that is missed in a typical powder diffraction analysis. The data inferred in this way were used to perform additional cycles of refinement and eventually generate a new cif file with additional physical information. Furthermore, crystal morphology prediction was performed, which is in agreement with the experimental images acquired by scanning electron microscopy, thus providing useful information on possible alternative paths for better crystallization strategies.
A common type system for clinical natural language processing.
Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G
2013-01-03
One challenge in reusing clinical data stored in electronic medical records is that these data are heterogenous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogenous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.
Botsivaly, M.; Spyropoulos, B.; Koutsourakis, K.; Mertika, K.
2006-01-01
Sharing of healthcare-related information among the different healthcare providers is a crucial aspect of the continuity of the provided care. The purpose of this study is the presentation of a system appropriate to be used upon the transition or the referral of a patient, and especially in the transition from hospital to homecare. The function of the developed system is based upon the creation of a structured subset of data concerning the most relevant facts about a patient's healthcare, organized and transportable, to be employed during the post-discharge homecare period, enabling simultaneously the planning and the optimal documentation of the provided homecare. The structure and the content of the created data sets comply with the ASTM E2369-0 Standard, Specification for Continuity of Care Record. PMID:17238304
Botsivaly, M; Spyropoulos, B; Koutsourakis, K; Mertika, K
2006-01-01
Sharing of healthcare-related information among the different healthcare providers is a crucial aspect of the continuity of the provided care. The purpose of this study is the presentation of a system appropriate to be used upon the transition or the referral of a patient, and especially in the transition from hospital to homecare. The function of the developed system is based upon the creation of a structured subset of data concerning the most relevant facts about a patient's healthcare, organized and transportable, to be employed during the post-discharge homecare period, enabling simultaneously the planning and the optimal documentation of the provided homecare. The structure and the content of the created data sets comply with the ASTM E2369-0 Standard, Specification for Continuity of Care Record.
Towards a Formal Basis for Modular Safety Cases
NASA Technical Reports Server (NTRS)
Denney, Ewen; Pai, Ganesh
2015-01-01
Safety assurance using argument-based safety cases is an accepted best practice in many safety-critical sectors. Goal Structuring Notation (GSN), which is widely used for presenting safety arguments graphically, provides a notion of modular arguments to support the goal of incremental certification. Despite efforts at standardization, GSN remains an informal notation, and the GSN standard contains appreciable ambiguity, especially concerning modular extensions. This, in turn, presents challenges when developing tools and methods to intelligently manipulate modular GSN arguments. This paper develops the elements of a theory of modular safety cases, leveraging our previous work on formalizing GSN arguments. Using example argument structures, we highlight some ambiguities arising from the existing guidance, present the intuition underlying the theory, clarify syntax, and address modular arguments, contracts, and the well-formedness and well-scopedness of modules. Based on this theory, we have a preliminary implementation of modular arguments in our toolset, AdvoCATE.
Using School-Level Interviews to Develop a Multisite PE Intervention Program
Moe, Stacey G.; Pickrel, Julie; McKenzie, Thomas L.; Strikmiller, Patricia K.; Coombs, Derek; Murrie, Dale
2008-01-01
The Trial of Activity for Adolescent Girls (TAAG) is a randomized, multicenter field trial in middle schools that aims to reduce the decline of physical activity in adolescent girls. To inform the development of the TAAG intervention, two phases of formative research were conducted to gain information on school structure and environment and on the conduct of physical education classes. Principals and designated staff at 64 eligible middle schools were interviewed using the School Survey during Phase 1. The following year (Phase 2), physical education department heads of the 36 schools selected into TAAG were interviewed. Responses were examined to design a standardized, multicomponent physical activity intervention for six regions of the United States. This article describes the contribution of formative research to the development of the physical education intervention component and summarizes the alignment of current school policies and practices with national and state standards. PMID:16397159
Picture This... Developing Standards for Electronic Images at the National Library of Medicine
Masys, Daniel R.
1990-01-01
New computer technologies have made it feasible to represent, store, and communicate high resolution biomedical images via electronic means. Traditional two dimensional medical images such as those on printed pages have been supplemented by three dimensional images which can be rendered, rotated, and “dissected” from any point of view. The library of the future will provide electronic access not only to words and numbers, but to pictures, sounds, and other nontextual information. There currently exist few widely-accepted standards for the representation and communication of complex images, yet such standards will be critical to the feasibility and usefulness of digital image collections in the life sciences. The National Library of Medicine is embarked on a project to develop a complete digital volumetric representation of an adult human male and female. This “Visible Human Project” will address the issue of standards for computer representation of biological structure.
Quality standards for bone conduction implants.
Gavilan, Javier; Adunka, Oliver; Agrawal, Sumit; Atlas, Marcus; Baumgartner, Wolf-Dieter; Brill, Stefan; Bruce, Iain; Buchman, Craig; Caversaccio, Marco; De Bodt, Marc T; Dillon, Meg; Godey, Benoit; Green, Kevin; Gstoettner, Wolfgang; Hagen, Rudolf; Hagr, Abdulrahman; Han, Demin; Kameswaran, Mohan; Karltorp, Eva; Kompis, Martin; Kuzovkov, Vlad; Lassaletta, Luis; Li, Yongxin; Lorens, Artur; Martin, Jane; Manoj, Manikoth; Mertens, Griet; Mlynski, Robert; Mueller, Joachim; O'Driscoll, Martin; Parnes, Lorne; Pulibalathingal, Sasidharan; Radeloff, Andreas; Raine, Christopher H; Rajan, Gunesh; Rajeswaran, Ranjith; Schmutzhard, Joachim; Skarzynski, Henryk; Skarzynski, Piotr; Sprinzl, Georg; Staecker, Hinrich; Stephan, Kurt; Sugarova, Serafima; Tavora, Dayse; Usami, Shin-Ichi; Yanov, Yuri; Zernotti, Mario; Zorowka, Patrick; de Heyning, Paul Van
2015-01-01
Bone conduction implants are useful in patients with conductive and mixed hearing loss for whom conventional surgery or hearing aids are no longer an option. They may also be used in patients affected by single-sided deafness. The aim was to establish a consensus on the quality standards required for centers willing to create a bone conduction implant program. To ensure a consistently high level of service and to provide patients with the best possible solution, the members of the HEARRING network have established a set of quality standards for bone conduction implants. These standards constitute a realistic minimum attainable by all implant clinics and should be employed alongside current best practice guidelines. Fifteen items are thoroughly analyzed. They include team structure, accommodation and clinical facilities, selection criteria, evaluation process, complete preoperative and surgical information, postoperative fitting and assessment, follow-up, device failure, clinical management, transfer of care and patient complaints.
Lessons Learned and Technical Standards: A Logical Marriage for Future Space Systems Design
NASA Technical Reports Server (NTRS)
Gill, Paul S.; Garcia, Danny; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)
2002-01-01
A comprehensive database of engineering lessons learned that corresponds with relevant technical standards will be a valuable asset to those engaged in studies on future space vehicle developments, especially for structures, materials, propulsion, control, operations and associated elements. In addition, this will enable the capturing of technology developments applicable to the design, development, and operation of future space vehicles as planned in the Space Launch Initiative. Using the time-honored tradition of passing on lessons learned while utilizing the newest information technology, NASA has launched an intensive effort to link lessons learned acquired through various Internet databases with applicable technical standards. This paper will discuss the importance of lessons learned, the difficulty in finding relevant lessons learned while engaged in a space vehicle development, and the new NASA effort to relate them to technical standards that can help alleviate this difficulty.
An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication
NASA Astrophysics Data System (ADS)
Li, Deng; Liu, Hui; Vasilakos, Athanasios
The term Autonomic Communication (AC) refers to self-managing systems which are capable of supporting self-configuration, self-healing and self-optimization. However, information reflection and collection, lack of centralized control, non-cooperation and so on are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protecting) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, the door has been opened to migrating research techniques from P2P systems. P2P's meaning can be better understood through a set of key characteristics similar to those of AC: decentralized organization, self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault-tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed more specifically than structured systems for the heterogeneous Internet environment, where the nodes' persistence and availability are not guaranteed. Motivated by the challenges in AC and based on a comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like, P2P overlays, the ESR is a whole new three-dimensional structure that improves the efficiency of routing, while information exchange takes place among immediate neighbors using local information, making the system scalable and fault-tolerant. Furthermore, rather than a complex game theory or incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure which guarantees the continuity of each node's record in order to discourage negative behavior in an autonomous environment such as AC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Rebecca J.; Tivanski, Alexei V.; Marten, Bryan D.
2007-04-25
The carbon-to-oxygen ratios and graphitic nature of a range of black carbon standard reference materials (BC SRMs), high molecular mass humic-like substances (HULIS) and atmospheric particles are examined using scanning transmission X-ray microscopy (STXM) coupled with near edge X-ray absorption fine structure (NEXAFS) spectroscopy. Using STXM/NEXAFS, individual particles with diameter >100 nm are studied; thus the diversity of atmospheric particles collected during a variety of field missions is assessed. Applying a semi-quantitative peak fitting method to the NEXAFS spectra enables a comparison of BC SRMs and HULIS to particles originating from anthropogenic combustion and biomass burns, thus allowing determination of the suitability of these materials for representing atmospheric particles. Anthropogenic combustion and biomass burn particles can be distinguished from one another using both chemical bonding and structural ordering information. While anthropogenic combustion particles are characterized by a high proportion of aromatic-C, the presence of benzoquinone, and high structural ordering, biomass burn particles exhibit lower structural ordering, a smaller proportion of aromatic-C and a much higher proportion of oxygenated functional groups.
Individual differences in mental rotation: what does gesture tell us?
Göksun, Tilbe; Goldin-Meadow, Susan; Newcombe, Nora; Shipley, Thomas
2013-05-01
Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701-703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey "static only" information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.
75 FR 79354 - Assessment Technology Standards Request for Information (RFI)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... DEPARTMENT OF EDUCATION Assessment Technology Standards Request for Information (RFI) AGENCY... information to gather technical expertise pertaining to assessment technology standards. SUMMARY: The purpose of this RFI is to collect information relating to assessment technology standards. Toward that end...
Governmental standard drink definitions and low-risk alcohol consumption guidelines in 37 countries.
Kalinowski, Agnieszka; Humphreys, Keith
2016-07-01
One of the challenges of international alcohol research and policy is the variability in and lack of knowledge of how governments in different nations define a standard drink and low-risk drinking. This study gathered such information from governmental agencies in 37 countries. A pool of 75 countries that might have definitions was created using World Health Organization (WHO) information and the authors' own judgement. Structured internet searches of relevant terms for each country were supplemented by efforts to contact government agencies directly and to consult with alcohol experts in the country. Most of the 75 national governments examined were not identified as having adopted a standard drink definition. Among the 37 that were so identified, the modal standard drink size was 10 g pure ethanol, but variation was wide (8-20 g). Significant variability was also evident for low-risk drinking guidelines, ranging from 10-42 g per day for women and 10-56 g per day for men to 98-140 g per week for women and 150-280 g per week for men. Researchers working and communicating across national boundaries should be sensitive to the substantial variability in 'standard' drink definitions and low-risk drinking guidelines. The potential impact of guidelines, both in general and in specific national cases, remains an important question for public health research. © 2016 Society for the Study of Addiction.
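The unit variability matters in practice: converting a beverage to grams of pure ethanol is simple arithmetic (volume x alcohol by volume x the density of ethanol, roughly 0.789 g/ml), as in this small sketch.

    ETHANOL_DENSITY_G_PER_ML = 0.789   # approximate density of ethanol at 20 C

    def grams_of_ethanol(volume_ml, abv):
        """Grams of pure ethanol in a serving of the given volume and ABV."""
        return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML

    # a 330 ml beer at 5% ABV:
    print(round(grams_of_ethanol(330, 0.05), 1))
    # ~13.0 g, i.e. about 1.3 'standard drinks' under a 10 g definition
    # but well under one drink under a 20 g definition

The same serving thus counts very differently across the 8-20 g national definitions reported above, which is exactly the comparability problem the authors highlight.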
Quantifiable Assessment of SWNT Dispersion in Polymer Composites
NASA Technical Reports Server (NTRS)
Park, Cheol; Kim, Jae-Woo; Wise, Kristopher E.; Working, Dennis; Siochi, Mia; Harrison, Joycelyn; Gibbons, Luke; Siochi, Emilie J.; Lillehei, Peter T.; Cantrell, Sean;
2007-01-01
NASA LaRC has established a new protocol for visualizing nanomaterials in structural polymer matrix resins. Using this new technique and reconstructing the 3D distribution of the nanomaterials allows us to compare this distribution against a theoretically perfect distribution. Additional tertiary structural information can now be obtained and quantified with the electron tomography studies. These tools will be necessary to establish the structure-function relationships between the nanoscale and the bulk. This will also help define the critical length scales needed for functional properties. Field-ready tool development and calibration can begin by using these same samples and comparing the response against gold standards of good and bad dispersion.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-03-19
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form.
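A minimal sketch of the style of measurement described (average response time of a query repeated against a document-based NoSQL store) might look like the following. The connection details, collection name, and query path are hypothetical, not the study's actual benchmark.

    import time
    from statistics import mean
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # hypothetical local instance
    extracts = client["ehr"]["extracts"]                # hypothetical EHR-extract collection

    def avg_response_time(filter_doc, runs=10):
        """Average wall-clock time to run one query and retrieve all results."""
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            list(extracts.find(filter_doc))             # force full retrieval
            times.append(time.perf_counter() - t0)
        return mean(times)

    # one complexity-increasing query: match on a nested element (invented path)
    print(avg_response_time({"composition.section.entry.code": "XYZ-123"}))

Repeating this over the three database sizes and the six queries of increasing complexity would reproduce the shape of the protocol, with response time plotted against database size to check for the linear behavior reported.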
Computational Study of the Structure and Mechanical Properties of the Molecular Crystal RDX
2011-01-01
Doctor of Philosophy, 2011. Directed by: Assistant Professor Santiago D. Solares, Department of Mechanical Engineering. Molecular crystals...
ERIC Educational Resources Information Center
Christman, Jolley Bruce
Philadelphia's school reform initiative, "Children Achieving," was evaluated. The focus in this report is on decentralization, exploring how Children Achieving is strengthening schools' capacity to make and carry out informed decisions that lead to schoolwide standards, how the new structures are working at various levels of the system, and how…
Sharko, Marianne; Wilcox, Lauren; Hong, Matthew K; Ancker, Jessica S
2018-05-17
Medical privacy policies, which are clear-cut for adults and young children, become ambiguous during adolescence. Yet medical organizations must establish unambiguous rules about patient and parental access to electronic patient portals. We conducted a national interview study to characterize the diversity in adolescent portal policies across a range of institutions and determine the factors influencing decisions about these policies. Within a sampling framework that ensured diversity of geography and medical organization type, we used purposive and snowball sampling to identify key informants. Semi-structured interviews were conducted and analyzed with inductive thematic analysis, followed by a member check. We interviewed informants from 25 medical organizations. Policies established different degrees of adolescent access (from none to partial to complete), access ages (from 10 to 18 years), degrees of parental access, and types of information considered sensitive. Federal and state law did not dominate policy decisions. Other factors in the decision process were: technology capabilities; differing patient population needs; resources; community expectations; balance between information access and privacy; balance between promoting autonomy and promoting family shared decision-making; and tension between teen privacy and parental preferences. Some informants believed that clearer standards would simplify policy-making; others worried that standards could restrict high-quality polices. In the absence of universally accepted standards, medical organizations typically undergo an arduous decision-making process to develop teen portal policies, weighing legal, economic, social, clinical, and technological factors. As a result, portal access policies are highly inconsistent across the United States and within individual states.
Field documentation and client presentation of IR inspections on new masonry structures
NASA Astrophysics Data System (ADS)
McMullan, Phillip C.
1991-03-01
With the adoption of the American Concrete Institute's Design Standard 530 (ACI 530-88/ASCE 5-88) and Specifications (ACI 530.1-88/ASCE 6-88) by more governing bodies throughout the United States, the level and method of inspecting masonry structures is rapidly changing. These new standards set forth inspection criteria such that the Professional of Record (i.e., the Architect) can determine the level of inspection based on the type and complexity of the structure being built. For example, a hospital would require considerably more inspection than a Seven-Eleven mini-market. However, the standards require that all new masonry buildings be inspected. Infrared thermography has proven to be an effective tool to assist in the required inspections. These inspections focus on evaluating masonry for compliance with the design specifications with regard to material, structural strength and thermal performance. The use of video infrared thermography provides a thorough, systematic method for inspecting the structural solids and thermal integrity of masonry structures. In conducting masonry inspections, the creation of a permanent, well-documented record is valuable in avoiding potential controversy over the inspection findings. Therefore, the inspection method, verification of findings, and presentation of the inspection data are key to the successful use of infrared thermography as an inspection tool. This paper will focus on the method of inspection which TSI employs in conducting new masonry inspections. Additionally, an important component of any work is the presentation of the data. We will look at the information which is generated during this type of inspection and how that data can be converted into a usable report for the various parties involved in the construction of a new masonry building.
Electronic Health Records Data and Metadata: Challenges for Big Data in the United States.
Sweet, Lauren E; Moulaison, Heather Lea
2013-12-01
This article, written by researchers studying metadata and standards, represents a fresh perspective on the challenges of electronic health records (EHRs) and serves as a primer for big data researchers new to health-related issues. Primarily, we argue for the importance of the systematic adoption of standards in EHR data and metadata as a way of promoting big data research and benefiting patients. EHRs have the potential to include a vast amount of longitudinal health data, and metadata provides the formal structures to govern that data. In the United States, electronic medical records (EMRs) are part of the larger EHR. EHR data is submitted by a variety of clinical data providers and potentially by the patients themselves. Because data input practices are not necessarily standardized, and because of the multiplicity of current standards, basic interoperability in EHRs is hindered. Some of the issues with EHR interoperability stem from the complexities of the data they include, which can be both structured and unstructured. A number of controlled vocabularies are available to data providers. The continuity of care document standard will provide interoperability in the United States between the EMR and the larger EHR, potentially making data input by providers directly available to other providers. The data involved is nonetheless messy. In particular, the use of competing vocabularies such as the Systematized Nomenclature of Medicine-Clinical Terms, MEDCIN, and locally created vocabularies inhibits large-scale interoperability for structured portions of the records, and unstructured portions, although potentially not machine readable, remain essential. Once EMRs for patients are brought together as EHRs, the EHRs must be managed and stored. Adequate documentation should be created and maintained to assure the secure and accurate use of EHR data. There are currently a few notable international standards initiatives for EHRs. Organizations such as Health Level Seven International and Clinical Data Interchange Standards Consortium are developing and overseeing implementation of interoperability standards. Denmark and Singapore are two countries that have successfully implemented national EHR systems. Future work in electronic health information initiatives should underscore the importance of standards and reinforce interoperability of EHRs for big data research and for the sake of patients.
16 CFR 314.3 - Standards for safeguarding customer information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Standards for safeguarding customer... OF CONGRESS STANDARDS FOR SAFEGUARDING CUSTOMER INFORMATION § 314.3 Standards for safeguarding customer information. (a) Information security program. You shall develop, implement, and maintain a...
Code of Federal Regulations, 2011 CFR
2011-10-01
... HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS... TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.205 Content.... The Healthcare Information Technology Standards Panel (HITSP) Summary Documents Using HL7 CCD...
Code of Federal Regulations, 2010 CFR
2010-10-01
... HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS... TECHNOLOGY Standards and Implementation Specifications for Health Information Technology § 170.205 Content.... The Healthcare Information Technology Standards Panel (HITSP) Summary Documents Using HL7 CCD...
Current Status of Multidisciplinary Care in Psoriatic Arthritis in Spain: NEXUS 2.0 Project.
Queiro, Rubén; Coto, Pablo; Joven, Beatriz; Rivera, Raquel; Navío Marco, Teresa; de la Cueva, Pablo; Alvarez Vega, Jose Luis; Narváez Moreno, Basilio; Rodriguez Martínez, Fernando José; Pardo Sánchez, José; Feced Olmos, Carlos; Pujol, Conrad; Rodríguez, Jesús; Notario, Jaume; Pujol Busquets, Manel; García Font, Mercè; Galindez, Eva; Pérez Barrio, Silvia; Urruticoechea-Arana, Ana; Hergueta, Merce; López Montilla, M Dolores; Vélez García-Nieto, Antonio; Maceiras, Francisco; Rodríguez Pazos, Laura; Rubio Romero, Esteban; Rodríguez Fernandez Freire, Lourdes; Luelmo, Jesús; Gratacós, Jordi
2018-02-26
1) To analyze the implementation of multidisciplinary care models for psoriatic arthritis (PsA) patients, and 2) to define minimum and excellent standards of care. A survey was sent to clinicians who already performed multidisciplinary care or were in the process of undertaking it, asking about: 1) the type of multidisciplinary care model implemented; and 2) the degree, priority, and feasibility of implementing quality standards for the structure, process, and results of care. In 6 regional meetings the results of the survey were presented and discussed, and the final priority of the quality standards for care was defined. In a nominal group meeting, 11 experts (rheumatologists and dermatologists) analyzed the results of the survey and the regional meetings. With this information, they defined which standards of care are currently considered minimum and which are excellent. The simultaneous and parallel models of multidisciplinary care are the most widely implemented, but the implementation of quality standards is highly variable: from 22% to 74% for structure standards, from 17% to 54% for process standards, and from 2% to 28% for results standards. Of the 25 original quality standards for care, 9 were considered minimum only, 4 were considered excellent, and 12 defined some criteria at the minimum level and others at the level of excellence. The definition of minimum and excellent quality standards for care will help achieve the goal of multidisciplinary care for patients with PsA: the best healthcare possible. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Fujino, Masayuki A; Bito, Shigeru; Takei, Kazuko; Mizuno, Shigeto; Yokoi, Hideto
2006-01-01
Since 1994, following the leading efforts of the European Society for Gastrointestinal Endoscopy, the Organisation Mondiale d'Endoscopie Digestive (OMED) has succeeded in compiling the minimal set of terms required for computer generation of digestive endoscopy reports, nicknamed MST (Minimal Standard Terminology). Though it has some insufficiencies, and though it was developed only for digestive endoscopy, MST has been the only available terminology in medicine that is globally standardized. By exploiting the merits of a unified, structured terminology that can be used in multiple languages, data stored in different languages can be treated as a common database. For this purpose, a standing organization that manages, maintains, and, when required, expands the terminology at a global level is absolutely necessary. Unfortunately, however, the organization that performs version control of MST (the OMED terminology, standardization and data processing committee) has currently suspended its activity. Medical practice worldwide demands ever more specialization, with resulting needs for information exchange among specialized territories. As the cooperation between endoscopy and pathology has become the most important current problem in the Endoscopy Working Group of Integrating the Healthcare Enterprise-Japan (IHE-J), cooperation among different specialties is essential. DICOM and HL7 standards exist as protocols for the storage and exchange (communication) of data, but there is as yet no organization that manages the terminology itself across different specialties. We hereby propose to establish, within IEEE for example, a system that promotes standardization of terminology that can transversely describe a patient and that can coordinate different societies and groups as far as terminology is concerned.
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source systems; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
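As a rough illustration of the source-to-standard mapping step reported above (90% to 99% of records mapped), the sketch below queries OMOP-style vocabulary tables; the table and column names follow the public OMOP CDM vocabulary conventions, but the query and connection handling are assumptions for this example, not code from the paper.

```python
# Schematic sketch of OMOP-style source-to-standard code mapping.
# 'concept' and 'concept_relationship' with relationship_id = 'Maps to'
# follow the public OMOP CDM vocabulary tables; the data are assumed.
import sqlite3

SQL = """
SELECT c2.concept_id, c2.concept_name
FROM concept c1
JOIN concept_relationship cr ON cr.concept_id_1 = c1.concept_id
                            AND cr.relationship_id = 'Maps to'
JOIN concept c2 ON c2.concept_id = cr.concept_id_2
WHERE c1.vocabulary_id = ? AND c1.concept_code = ?
"""

def to_standard_concept(conn: sqlite3.Connection, vocabulary_id, concept_code):
    """Resolve a source code (e.g., an ICD or NDC code) to the standard
    concept used in CDM clinical tables; None signals information loss."""
    row = conn.execute(SQL, (vocabulary_id, concept_code)).fetchone()
    return row  # (standard_concept_id, concept_name) or None
```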
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Management
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapid searching of unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WEBDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin
2017-01-01
Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g., reduced documentation time or increased data quality). Prerequisites for data reuse are its quality, availability, and identical meaning of data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment, and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
Effective 3-D surface modeling for geographic information systems
NASA Astrophysics Data System (ADS)
Yüksek, K.; Alparslan, M.; Mendi, E.
2013-11-01
In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with the spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality, and graphical modeling infrastructure. A new data element, called Geo-Node, which stores images, spatial data, and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM), and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independent of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
FigSum: automatically generating structured text summaries for figures in biomedical literature.
Agarwal, Shashank; Yu, Hong
2009-11-14
Figures are frequently used in biomedical articles to support research findings; however, they are often difficult to comprehend based on their legends alone and information from the full-text articles is required to fully understand them. Previously, we found that the information associated with a single figure is distributed throughout the full-text article the figure appears in. Here, we develop and evaluate a figure summarization system - FigSum, which aggregates this scattered information to improve figure comprehension. For each figure in an article, FigSum generates a structured text summary comprising one sentence from each of the four rhetorical categories - Introduction, Methods, Results and Discussion (IMRaD). The IMRaD category of sentences is predicted by an automated machine learning classifier. Our evaluation shows that FigSum captures 53% of the sentences in the gold standard summaries annotated by biomedical scientists and achieves an average ROUGE-1 score of 0.70, which is higher than a baseline system.
FDA toxicity databases and real-time data entry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arvidson, Kirk B.
Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Standards for health information technology to... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION... FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Standards for health information technology to... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION... FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Standards for health information technology to... Welfare Department of Health and Human Services HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION... FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Standards for health information technology to... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION... FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Standards for health information technology to... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION... FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
Contour sensitive saliency and depth application in image retargeting
NASA Astrophysics Data System (ADS)
Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia
2018-04-01
Image retargeting requires preserving important information with minimal edge distortion while increasing or decreasing image size. The major existing content-aware methods perform well, but two problems remain: slight distortion appears at object edges, and structure distortion occurs in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, considering both image content and image structure. This paper therefore proposes a new standard: structure preservation in the non-salient area. Observation and image analysis show that blur (slight blur) generally exists at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in saliency computation for a balanced image retargeting result. To keep the structure information in the non-salient area, a salient edge map is introduced into the Seam Carving process in place of field-based saliency computation. Taking the saliency derivative along the x- and y-directions avoids the redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.
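For context, here is a minimal sketch of the gradient-based energy map that standard Seam Carving minimizes; the paper's blur-depth descriptor and salient edge map would modify an energy of this kind, and are not reproduced here.

```python
# Minimal numpy sketch of the gradient energy used by standard Seam Carving.
import numpy as np

def gradient_energy(gray):
    """e(x, y) = |dI/dx| + |dI/dy| for a 2-D grayscale image array."""
    gy, gx = np.gradient(gray.astype(float))  # gradients along rows, cols
    return np.abs(gx) + np.abs(gy)

img = np.random.rand(64, 64)          # stand-in for a real image
energy = gradient_energy(img)
# A vertical seam is the 8-connected top-to-bottom path minimizing the
# summed energy; removing it shrinks the image width by one pixel.
```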
C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training
Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter
2008-01-01
The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178
NASA Technical Reports Server (NTRS)
Pascucci, R. F.; Smith, A.
1982-01-01
To assist the U.S. Geological Survey in carrying out a Congressional mandate to investigate the use of side-looking airborne radar (SLAR) for resources exploration, a research program was conducted to define the contribution of SLAR imagery to structural geologic mapping and to compare this with contributions from other remote sensing systems. Imagery from two SLAR systems and from three other remote sensing systems was interpreted, and the resulting information was digitized, quantified and intercompared using a computer-assisted geographic information system (GIS). The study area covers approximately 10,000 square miles within the Naval Petroleum Reserve, Alaska, and is situated between the foothills of the Brooks Range and the North Slope. The principal objectives were: (1) to establish quantitatively, the total information contribution of each of the five remote sensing systems to the mapping of structural geology; (2) to determine the amount of information detected in common when the sensors are used in combination; and (3) to determine the amount of unique, incremental information detected by each sensor when used in combination with others. The remote sensor imagery that was investigated included real-aperture and synthetic-aperture radar imagery, standard and digitally enhanced LANDSAT MSS imagery, and aerial photos.
NASA Astrophysics Data System (ADS)
Cao, Zhenggang; Ding, Zengqian; Hu, Zhixiong; Wen, Tao; Qiao, Wen; Liu, Wenli
2016-10-01
Optical coherence tomography (OCT) has been widely applied in the diagnosis of eye diseases over the last 20 years. Unlike traditional two-dimensional imaging technologies, OCT can also provide cross-sectional information about target tissues simultaneously and precisely. Axial resolution is one of the most critical parameters affecting OCT image quality and determines whether an accurate diagnosis can be obtained; it is therefore important to evaluate the axial resolution of OCT equipment. Phantoms play an important role in the standardization and validation process. Here, a standard model eye with a micro-scale multilayer structure was custom designed and manufactured. To mimic a real human eye, the physical characteristics of the layered structures of the retina and cornea were analyzed in depth, and appropriate materials were selected by testing the scattering coefficients of PDMS phantoms with different concentrations of TiO2 or BaSO4 particles. An artificial retina and cornea with multilayer films, each layer 10 to 60 micrometers thick, were fabricated using spin-coating technology. Because key parameters of the standard model eye must be traceable as well as accurate, the optical refractive index and layer thicknesses of the phantoms were verified using a thickness monitoring system. A standard OCT model eye was then obtained by embedding the retinal or corneal phantom into a water-filled model eye, fabricated by 3D printing to simulate ocular dispersion and emmetropic refraction. The eye model was manufactured in a transparent resin to simulate a realistic ophthalmic testing environment, and most key optical elements, including the cornea, lens, and vitreous body, were realized. Investigation with a research OCT system and a clinical OCT system demonstrated that the model eye has physical properties similar to those of a natural eye, and that the multilayer film measurement provides an effective method to rapidly evaluate the axial resolution of ophthalmic OCT devices.
Metadata for WIS and WIGOS: GAW Profile of ISO19115 and Draft WIGOS Core Metadata Standard
NASA Astrophysics Data System (ADS)
Klausen, Jörg; Howe, Brian
2014-05-01
The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority to underpin all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). The development of the WIGOS Operational Information Resource (WIR) is central to the WIGOS Framework Implementation Plan (WIGOS-IP). The WIR shall provide information on WIGOS and its observing components, as well as requirements of WMO application areas. An important aspect is the description of observational capabilities by way of structured metadata. The Global Atmosphere Watch is the WMO program addressing the chemical composition and selected physical properties of the atmosphere. Observational data are collected and archived by GAW World Data Centres (WDCs) and related data centres. The Task Team on GAW WDCs (ET-WDC) has developed a profile of the ISO19115 metadata standard that is compliant with the WMO Information System (WIS) specification for the WMO Core Metadata Profile v1.3. This profile is intended to harmonize certain aspects of the documentation of observations as well as the interoperability of the WDCs. The Inter-Commission-Group on WIGOS (ICG-WIGOS) has established the Task Team on WIGOS Metadata (TT-WMD) with representation of all WMO Technical Commissions and the objective to define the WIGOS Core Metadata. The result of this effort is a draft semantic standard comprising a set of metadata classes that are considered to be of critical importance for the interpretation of observations relevant to WIGOS. The purpose of the presentation is to acquaint the audience with the standard and to solicit informal feedback from experts in the various disciplines of meteorology and climatology. This feedback will help ET-WDC and TT-WMD refine the GAW metadata profile and the draft WIGOS metadata standard, thereby increasing their utility and acceptance.
Open Access Internet Resources for Nano-Materials Physics Education
NASA Astrophysics Data System (ADS)
Moeck, Peter; Seipel, Bjoern; Upreti, Girish; Harvey, Morgan; Garrick, Will
2006-05-01
Because a great deal of nano-material science and engineering relies on crystalline materials, materials physicists have to provide their own specific contributions to the National Nanotechnology Initiative. Here we briefly review two freely accessible internet-based crystallographic databases, the Nano-Crystallography Database (http://nanocrystallography.research.pdx.edu) and the Crystallography Open Database (http://crystallography.net). Information on over 34,000 full structure determinations is stored in these two databases in the Crystallographic Information File format. The availability of such crystallographic data on the internet in a standardized format allows for all kinds of web-based crystallographic calculations and visualizations. Two examples dealt with in this paper are interactive crystal structure visualizations in three dimensions and calculations of lattice-fringe fingerprints for the identification of unknown nanocrystals from their atomic-resolution transmission electron microscopy images.
Learning physical descriptors for materials science by compressed sensing
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Vybiral, Jan; Ahmetcik, Emre; Ouyang, Runhai; Levchenko, Sergey V.; Draxl, Claudia; Scheffler, Matthias
2017-02-01
The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible by standard tools and exploitation of the scientific information requires new and dedicated methodology based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.
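As a toy illustration of l1-based descriptor selection in the spirit of the compressed-sensing methodology described (not the authors' actual algorithm, feature spaces, or data), consider:

```python
# Illustrative sketch: select the few relevant descriptors out of many
# candidates with an l1 penalty (LASSO), on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                    # 50 candidate descriptors
w_true = np.zeros(50)
w_true[[3, 17]] = [1.5, -2.0]                     # only 2 are relevant
y = X @ w_true + 0.01 * rng.normal(size=100)      # target material property

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)            # indices of surviving descriptors
print("selected descriptors:", selected)          # ideally [3, 17]
```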
A National Medical Information System for Senegal: Architecture and Services.
Camara, Gaoussou; Diallo, Al Hassim; Lo, Moussa; Tendeng, Jacques-Noël; Lo, Seynabou
2016-01-01
In Senegal, large amounts of data are generated daily by medical activities such as consultations, hospitalizations, blood tests, x-rays, births, and deaths. These data are still recorded in registers, printed images, audio, and video, which are processed manually. Some medical organizations have their own software for non-standardized patient record management, appointments, wages, etc., but without any possibility of sharing these data or communicating with other medical structures. This greatly limits the reuse and sharing of these data because of their possible structural and semantic heterogeneity. To overcome these problems we have proposed a National Medical Information System for Senegal (SIMENS). As an integrated platform, SIMENS provides an EHR system that supports healthcare activities, a mobile version, and a web portal. The SIMENS architecture also proposes data and application integration services to support interoperability and decision making.
Information Technology Standards: A Component of Federal Information Policy.
ERIC Educational Resources Information Center
Moen, William E.
1994-01-01
Discusses the need for proposing and implementing an information technology standards policy for the federal government. Topics addressed include the National Information Infrastructure (NII); voluntary standards among federal agencies; private sector organizations; coordinating the use of standards; enforcing compliance; policy goals; a framework…
Wirshing, Donna A; Sergi, Mark J; Mintz, Jim
2005-01-01
This study evaluated a brief educational video designed to enhance the informed consent process for people with serious mental and medical illnesses who are considering participating in treatment research. Individuals with schizophrenia who were being recruited for ongoing clinical trials, medical patients without self-reported psychiatric comorbidity, and university undergraduates were randomly assigned to view either a highly structured instructional videotape about the consent process in treatment research or a control videotape that presented only general information about bioethical issues in human research. Knowledge about informed consent was measured before and after viewing. Viewing the experimental videotape resulted in larger gains in knowledge about informed consent. Standardized effect sizes were large in all groups. The videotape was thus an effective teaching tool across diverse populations, ranging from individuals with severe chronic mental illness to university undergraduates.
Schadow, Gunther
2005-01-01
Prescribing errors are an important cause of adverse events, and lack of knowledge of the drug is a root cause of prescribing errors. The FDA is issuing new regulations that will make drug labels much more useful, not only to physicians but also to computerized order entry systems that support physicians in safe prescribing. For this purpose, the FDA works with HL7 to create the Structured Product Label (SPL) standard, which includes a document format as well as a drug knowledge representation. This poster introduces the basic concepts of SPL.
NASA Technical Reports Server (NTRS)
Grant, M.; Vernucci, A.
1991-01-01
A possible Data Relay Satellite System (DRSS) topology and network architecture is introduced. An asynchronous network concept, whereby each link (Inter-orbit, Inter-satellite, Feeder) is allowed to operate on its own clock, without causing loss of information, in conjunction with packet data structures, such as those specified by the CCSDS for advanced orbiting systems is discussed. A matching OBP payload architecture is described, highlighting the advantages provided by the OBP-based concept and then giving some indications on the OBP mass/power requirements.
On the structure of critical energy levels for the cubic focusing NLS on star graphs
NASA Astrophysics Data System (ADS)
Adami, Riccardo; Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
2012-05-01
We provide information on a non-trivial structure of phase space of the cubic nonlinear Schrödinger (NLS) on a three-edge star graph. We prove that, in contrast to the case of the standard NLS on the line, the energy associated with the cubic focusing Schrödinger equation on the three-edge star graph with a free (Kirchhoff) vertex does not attain a minimum value on any sphere of constant L2-norm. We moreover show that the only stationary state with prescribed L2-norm is indeed a saddle point.
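For orientation, the energy referred to is (up to convention) the standard cubic focusing NLS energy, written edge-by-edge on the graph; this reconstruction of the functional is mine, and sign and factor conventions vary across the literature.

```latex
% Standard cubic focusing NLS energy on a star graph G with Kirchhoff
% conditions at the vertex, constrained to a sphere of fixed L^2-norm:
E[\Psi] = \sum_{e \in G} \left( \frac{1}{2}\int_e |\psi_e'(x)|^2 \,dx
          - \frac{1}{4}\int_e |\psi_e(x)|^4 \,dx \right),
\qquad \|\Psi\|_{L^2(G)}^2 = \mu .
% The paper's result: on the three-edge star graph, E attains no minimum
% on any such constraint sphere, unlike the NLS on the line.
```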
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard & Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.
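A minimal sketch of the underlying idea, assuming a simple binary symbolization and a plain block-entropy estimator (the authors' exact estimator may differ):

```python
# Symbolize returns, estimate the per-symbol Shannon entropy of length-s
# blocks, and take the scale minimizing it. Illustration of the concept only.
import numpy as np
from collections import Counter

def pattern_entropy(symbols, s):
    """Shannon entropy (bits) of the empirical distribution of length-s blocks."""
    blocks = [tuple(symbols[i:i + s]) for i in range(len(symbols) - s + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
returns = rng.normal(size=5000)               # stand-in for index returns
symbols = (returns > 0).astype(int)           # binary symbolization
h = {s: pattern_entropy(symbols, s) / s for s in range(1, 8)}  # per-symbol
structure_scale = min(h, key=h.get)           # scale of minimum entropy density
```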
NASA Technical Reports Server (NTRS)
Greene, P. H.
1972-01-01
Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.
Diagnostic Imaging of the Hepatobiliary System: An Update.
Marolf, Angela J
2017-05-01
Recent advances in diagnostic imaging of the hepatobiliary system include MRI, computed tomography (CT), contrast-enhanced ultrasound, and ultrasound elastography. With the advent of multislice CT scanners, sedated examinations of veterinary patients are feasible, increasing the utility of this imaging modality. CT and MRI provide additional information for dogs and cats with hepatobiliary diseases because they avoid superimposition of structures and operator dependence, and through intravenous contrast administration. Advanced ultrasound methods can offer complementary information to standard ultrasound imaging. These newer imaging modalities assist clinicians by aiding diagnosis, prognostication, and surgical planning. Copyright © 2016 Elsevier Inc. All rights reserved.
A logical approach to semantic interoperability in healthcare.
Bird, Linda; Brooks, Colleen; Cheong, Yu Chye; Tun, Nwe Ni
2011-01-01
Singapore is in the process of rolling out a number of national e-health initiatives, including the National Electronic Health Record (NEHR). A critical enabler in the journey towards semantic interoperability is a Logical Information Model (LIM) that harmonises the semantics of the information structure with the terminology. The Singapore LIM uses a combination of international standards, including ISO 13606-1 (a reference model for electronic health record communication), ISO 21090 (healthcare datatypes), and SNOMED CT (healthcare terminology). The LIM is accompanied by a logical design approach, used to generate interoperability artifacts, and incorporates mechanisms for achieving unidirectional and bidirectional semantic interoperability.
A SOA-Based Solution to Monitor Vaccination Coverage Among HIV-Infected Patients in Liguria.
Giannini, Barbara; Gazzarata, Roberta; Sticchi, Laura; Giacomini, Mauro
2016-01-01
Vaccination in HIV-infected patients constitutes an essential tool in the prevention of the most common infectious diseases. The Ligurian Vaccination in HIV Program is a proposed vaccination schedule specifically dedicated to this risk group. Selective strategies are proposed within this program, employing ICT (Information and Communication Technology) tools to identify this susceptible target group, to monitor immunization coverage over time, and to manage failures and defaulting. The proposal is to connect an immunization registry system to an existing regional platform that allows clinical data re-use among several medical structures, to completely manage the vaccination process. This architecture will adopt a Service Oriented Architecture (SOA) approach and standard HSSP (Health Services Specification Program) interfaces to support interoperability. According to the presented solution, vaccination administration information retrieved from the immunization registry will be structured according to the specifications within the immunization section of the HL7 (Health Level 7) CCD (Continuity of Care Document) document. Immunization coverage will be evaluated through the continuous monitoring of serology and antibody titers gathered from the hospital LIS (Laboratory Information System), structured into an HL7 Version 3 (v3) Clinical Document Architecture Release 2 (CDA R2).
Xenobiology: State-of-the-Art, Ethics, and Philosophy of New-to-Nature Organisms.
Schmidt, Markus; Pei, Lei; Budisa, Nediljko
The basic chemical constitution of all living organisms, in the context of carbon-based chemistry, consists of a limited number of small molecules and polymers. Until the twenty-first century, biology was mainly an analytical science; it has now reached a point where it merges with engineering science, paving the way for synthetic biology. One of the objectives of synthetic biology is to try to change the chemical composition of living cells, that is, to create an artificial biological diversity, which in turn fosters a new sub-field of synthetic biology, xenobiology. In particular, the genetic code in living systems is based on highly standardized chemistry composed of the same "letters" or nucleotides as informational polymers (DNA, RNA) and the 20 amino acids which serve as basic building blocks for proteins. The universality of the genetic code enables not only vertical gene transfer within the same species but also horizontal gene transfer across biological taxa, which requires a high degree of standardization and interconnectivity. Although some minor alterations of the standard genetic code are found in nature (e.g., proteins containing non-canonical amino acids exist in nature, and some organisms use alternative coding systems), deep structural changes to the chemistry of living systems are generally lethal, making the creation of an artificial biological system an extremely difficult challenge. In this context, one of the great challenges for bioscience is the development of a strategy for expanding the standard basic chemical repertoire of living cells. Attempts to alter the meaning of the genetic information stored in DNA as an informational polymer, by changing the chemistry of the polymer (i.e., xeno-nucleic acids) or by changes in the genetic code, have already yielded successful results. In the future this should enable the partial or full redirection of the biological information flow to generate "new" versions of the genetic code derived from the "old" biological world. In addition to the scientific challenges, the attempt to increase biochemical diversity also raises important ethical and philosophical issues. Although promoters of this branch of synthetic biology highlight the many potential applications to come (e.g., novel tools for diagnostics and fighting infectious diseases), such developments could also bring risks affecting social, political, and other structures of nearly all societies.
Development of a core set of outcome measures for OAB treatment.
Foust-Wright, Caroline; Wissig, Stephanie; Stowell, Caleb; Olson, Elizabeth; Anderson, Anita; Anger, Jennifer; Cardozo, Linda; Cotterill, Nikki; Gormley, Elizabeth Ann; Toozs-Hobson, Philip; Heesakkers, John; Herbison, Peter; Moore, Kate; McKinney, Jessica; Morse, Abraham; Pulliam, Samantha; Szonyi, George; Wagg, Adrian; Milsom, Ian
2017-12-01
Standardized measures enable the comparison of outcomes across providers and treatments, giving valuable information for improving care quality and efficacy. The aim of this project was to define a minimum standard set of outcome measures and case-mix factors for evaluating the care of patients with overactive bladder (OAB). The International Consortium for Health Outcomes Measurement (ICHOM) convened an international working group (WG) of leading clinicians and patients to engage in a structured method for developing a core outcome set. Consensus was determined by a modified Delphi process, and discussions were supported by both literature review and patient input. The standard set measures outcomes of care for adults seeking treatment for OAB, excluding residents of long-term care facilities. The WG focused on the key outcome domains most important to patients: symptom burden and bother, physical functioning, emotional health, impact of symptoms and treatment on quality of life, and success of treatment. Demographic information and case-mix factors that may affect these outcomes were also included. The standardized outcome set for evaluating clinical care is appropriate for use by all health providers caring for patients with OAB, regardless of specialty or geographic location, and provides key data for quality improvement activities and research.
Video calls from lay bystanders to dispatch centers - risk assessment of information security.
Bolle, Stein R; Hasvold, Per; Henriksen, Eva
2011-09-30
Video calls from mobile phones can improve communication during medical emergencies. Lay bystanders can be instructed and supervised by health professionals at Emergency Medical Communication Centers. Before implementation of video mobile calls in emergencies, issues of information security should be addressed. Information security was assessed for risk, based on the information security standard ISO/IEC 27005:2008. A multi-professional team used structured brainstorming to find threats to the information security aspects confidentiality, quality, integrity, and availability. Twenty security threats of different risk levels were identified and analyzed. Solutions were proposed to reduce the risk level. Given proper implementation, we found no risks to information security that would advocate against the use of video calls between lay bystanders and Emergency Medical Communication Centers. The identified threats should be used as input to formal requirements when planning and implementing video calls from mobile phones for these call centers.
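For illustration, an ISO/IEC 27005-style assessment ranks each threat by likelihood and consequence; the sketch below shows one common scoring scheme, with scales and thresholds that are assumptions for this example rather than values from the study.

```python
# Hedged sketch of a likelihood-x-consequence risk ranking of the kind used
# in ISO/IEC 27005-style assessments. Scales and thresholds are assumed.

def risk_level(likelihood, consequence, accept_below=4):
    """likelihood, consequence in 1..3; returns (score, rating, acceptable)."""
    score = likelihood * consequence
    rating = ("high" if score >= 6
              else "moderate" if score >= accept_below
              else "low")
    return score, rating, score < accept_below

# e.g., hypothetical threat: video call intercepted on an open network
print(risk_level(likelihood=2, consequence=3))   # (6, 'high', False)
```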
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... for OMB Review; Comment Request; Standard on Asbestos in Construction ACTION: Notice. SUMMARY: On... Administration (OSHA) sponsored information collection request (ICR) titled, ``Standard on Asbestos in... . SUPPLEMENTARY INFORMATION: The information collection requirements of the Standard on Asbestos in Construction...
The DICOM-based radiation therapy information system
NASA Astrophysics Data System (ADS)
Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which cover more than just images. This presentation describes how a DICOM-based RT Information System server can be built on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a modality simulator, a data format translator, an RT gateway, the DICOM RT Server, and the web-based application server. The DICOM RT Server was designed based on a PACS data model and was connected to a web application server for distribution of the RT information, including therapeutic plans, structures, dose distributions, images, and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the web application server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. Non-DICOM objects were first rendered into DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.
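A minimal sketch of the kind of Modality-based routing such a server performs, using the real pydicom read API; the handler names and routing table are hypothetical.

```python
# Route incoming DICOM-RT objects by their Modality attribute.
# pydicom.dcmread, ds.Modality, and ds.SOPInstanceUID are real pydicom API;
# the RT_HANDLERS table and handler names are illustrative placeholders.
import pydicom

RT_HANDLERS = {
    "RTPLAN":   "store_plan",        # therapeutic plan
    "RTSTRUCT": "store_structures",  # contoured structures
    "RTDOSE":   "store_dose",        # dose distribution
    "RTIMAGE":  "store_image",       # treatment images
}

def route(path):
    ds = pydicom.dcmread(path)
    handler = RT_HANDLERS.get(ds.Modality)
    if handler is None:
        raise ValueError(f"not an RT object: Modality={ds.Modality}")
    return handler, ds.SOPInstanceUID
```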
Whole Protein Native Fitness Potentials
NASA Astrophysics Data System (ADS)
Faraggi, Eshel; Kloczkowski, Andrzej
2013-03-01
Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information, filtered through machine learners, to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained using a four-body potential, DFIRE, and RWPlus, were used as training data for machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
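In the truncated-SVD reading of this analysis (standard machinery, not a quotation from the paper), the solution and the resolution and information matrices take the following form:

```latex
% With G m = d and the singular value decomposition G = U \Lambda V^{T},
% keep the k largest singular values (k set by the ratio of observational
% to allowable solution standard deviations):
\hat{m} = V_k \Lambda_k^{-1} U_k^{T} d ,
\qquad
R = V_k V_k^{T} \quad (\text{parameter resolution}),
\qquad
S = U_k U_k^{T} \quad (\text{information distribution among observations}).
% R = I would mean perfectly resolved parameters; the diagonal of S shows
% how much each observation contributes to the k determined combinations.
```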
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that the host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies within the combination of the presence of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and the human annotation which ensures a simple and structured presentation of the available data.
Direct femtosecond laser surface structuring of crystalline silicon at 400 nm
NASA Astrophysics Data System (ADS)
Nivas, Jijil JJ; Anoop, K. K.; Bruzzese, Riccardo; Philip, Reji; Amoruso, Salvatore
2018-03-01
We have analyzed the effects of the laser pulse wavelength (400 nm) on femtosecond laser surface structuring of silicon. The features of the produced surface structures are investigated as a function of the number of pulses, N, and compared with the surface textures produced by more standard near-infrared (800 nm) laser pulses at a similar level of excitation. Our experimental findings highlight the importance of the light wavelength for the formation of the supra-wavelength grooves, and, for a large number of pulses (N ≈ 1000), the generation of other periodic structures (stripes) at 400 nm, which are not observed at 800 nm. These results provide interesting information on the generation of various surface textures, addressing the effect of the laser pulse wavelength on the generation of grooves and stripes.
Wiltz, Jennifer L; Blanck, Heidi M; Lee, Brian; Kocot, S Lawrence; Seeff, Laura; McGuire, Lisa C; Collins, Janet
2017-10-26
Electronic information technology standards facilitate high-quality, uniform collection of data for improved delivery and measurement of health care services. Electronic information standards also aid information exchange between secure systems that link health care and public health for better coordination of patient care and better-informed population health improvement activities. We developed international data standards for healthy weight that provide common definitions for electronic information technology. The standards capture healthy weight data on the "ABCDs" of a visit to a health care provider that addresses initial obesity prevention and care: assessment, behaviors, continuity, identify resources, and set goals. The process of creating healthy weight standards consisted of identifying needs and priorities, developing and harmonizing standards, testing the exchange of data messages, and demonstrating use-cases. Healthy weight products include 2 message standards, 5 use-cases, 31 LOINC (Logical Observation Identifiers Names and Codes) question codes, 7 healthy weight value sets, 15 public-private engagements with health information technology implementers, and 2 technical guides. A logic model and action steps outline activities toward better data capture, interoperable systems, and information use. Sharing experiences and leveraging this work in the context of broader priorities can inform the development of electronic information standards for similar core conditions and guide strategic activities in electronic systems.
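As a small illustration, the sketch below builds observations from three well-known LOINC codes relevant to healthy-weight data capture; the initiative's own 31 question codes and value sets are not reproduced here, and the helper function is hypothetical.

```python
# Illustrative observation records keyed by widely used LOINC codes for
# height, weight, and BMI; not the initiative's healthy weight value sets.
HEALTHY_WEIGHT_OBS = {
    "8302-2":  ("Body height", "cm"),
    "29463-7": ("Body weight", "kg"),
    "39156-5": ("Body mass index (BMI)", "kg/m2"),
}

def observation(loinc, value):
    """Build a minimal coded observation for exchange or storage."""
    name, unit = HEALTHY_WEIGHT_OBS[loinc]
    return {"code": loinc, "display": name, "value": value, "unit": unit}

print(observation("39156-5", 24.3))
```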
Interpreting international governance standards for health IT use within general medical practice.
Mahncke, Rachel J; Williams, Patricia A H
2014-01-01
General practices in Australia recognise the importance of comprehensive protective security measures. Some elements of information security governance are incorporated into recommended standards; however, the governance component of information security is still insufficiently addressed in practice. The International Organization for Standardization (ISO) released a new global standard in May 2013 entitled ISO/IEC 27014:2013 Information technology - Security techniques - Governance of information security. This standard, applicable to organisations of all sizes, offers a framework against which to assess and implement the governance components of information security. The standard demonstrates the relationship between governance and the management of information security, provides strategic principles and processes, and forms the basis for establishing a positive information security culture. An analysis and interpretation of this standard for use in Australian general practice was performed. This work is unique, as such an interpretation for the Australian healthcare environment has not been undertaken before. It demonstrates an application of the standard at a strategic level to inform existing development of an information security governance framework.
Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li
2016-12-01
Electronic health records are being increasingly used by nurses, with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians to identify information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free-text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (F-measure, a composite measure of system accuracy, of 92.7%), with best results for wound treatment (F-measure=95.7%) and poorest results for wound size (F-measure=81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes, there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. All rights reserved.
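As a rough illustration of the kind of extraction MTERMS performs, the sketch below pulls the five wound attributes from a note with hand-written regular expressions. The patterns, attribute names, and example note are illustrative assumptions only; the actual MTERMS pipeline is not described at this level of detail.

```python
import re

# Illustrative rule-based extraction of wound attributes from free text.
# These patterns are assumptions, not the MTERMS rules.
PATTERNS = {
    "wound_type": re.compile(r"\b(pressure ulcer|surgical wound|laceration|abrasion)\b", re.I),
    "pu_stage":   re.compile(r"\bstage\s+(I{1,3}V?|[1-4])\b", re.I),
    "wound_size": re.compile(r"\b\d+(?:\.\d+)?\s*(?:x|by)\s*\d+(?:\.\d+)?\s*cm\b", re.I),
    "location":   re.compile(r"\b(sacrum|heel|coccyx|ankle|hip)\b", re.I),
}

def extract_wound_info(note: str) -> dict:
    """Return the first match for each wound attribute found in a note."""
    findings = {}
    for attribute, pattern in PATTERNS.items():
        match = pattern.search(note)
        if match:
            findings[attribute] = match.group(0)
    return findings

note = "Pt with stage 3 pressure ulcer on sacrum, 2.5 x 1.0 cm, dressing changed daily."
print(extract_wound_info(note))
# {'wound_type': 'pressure ulcer', 'pu_stage': 'stage 3',
#  'wound_size': '2.5 x 1.0 cm', 'location': 'sacrum'}
```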
The Development of a Korean Drug Dosing Database
Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon
2011-01-01
Objectives: This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly in regards to medication dosing. Methods: The advisory committee, comprised of doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from the BIT Computer Co. Ltd., analyzed approved KFDA drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The whole process was improved by adding additional input fields and eliminating unnecessary existing fields as the dosing information was entered, resulting in an improved field structure. Results: Usage and dosing information for a total of 16,994 drugs sold on the Korean market in July 2009, excluding drugs that met the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), were made into a database. Conclusions: The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management mode. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... ``Performance Standard for Wood-Based Structural-Use Panels,'' and appears in the body of the notice. NIST is... correct title of the proposed standard is ``Performance Standard for Wood-Based Structural-Use Panels... Product Standard (PS) 2-04, Performance Standard for Wood-Based Structural-Use Panels. This revised...
Standardization: colorful or dull?
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2003-01-01
After mentioning the necessity of standardization in general, this paper explains how human factors, or ergonomics, standardization by ISO and the deployment of information technology were linked. Visual display standardization is the main topic; the present as well as the future situation in this field are treated, mainly from an ISO viewpoint. Some observations are made about the necessary and interesting co-operation between physicists and psychologists, of different nationality, who both may be employed by either private enterprise or governmental institutions, in determining visual display requirements. The display standard that is to succeed the present ISO standards in this area (ISO 9241-3, -7, -8 and ISO 13406-1, -2) will have a scope that is not restricted to office tasks. This means a large extension of the contexts for which display requirements have to be investigated and specified, especially if mobile use of displays, under outdoor lighting conditions, is included. The new standard will be structured in such a way that it is more accessible than the present ones for different categories of standards users. The subject of color in the new standard is elaborated here. A number of questions are asked as to which requirements on color rendering should be made, taking new research results into account, and how far the new standard should go in making recommendations to the display user.
Hierarchical structures of amorphous solids characterized by persistent homology
Hiraoka, Yasuaki; Nakamura, Takenobu; Hirata, Akihiko; Escolar, Emerson G.; Matsue, Kaname; Nishiura, Yasumasa
2016-01-01
This article proposes a topological method that extracts hierarchical structures of various amorphous solids. The method is based on the persistence diagram (PD), a mathematical tool for capturing shapes of multiscale data. The input to the PDs is given by an atomic configuration and the output is expressed as 2D histograms. Then, specific distributions such as curves and islands in the PDs identify meaningful shape characteristics of the atomic configuration. Although the method can be applied to a wide variety of disordered systems, it is applied here to silica glass, the Lennard-Jones system, and Cu-Zr metallic glass as standard examples of continuous random network and random packing structures. In silica glass, the method classified the atomic rings as short-range and medium-range orders and unveiled hierarchical ring structures among them. These detailed geometric characterizations clarified a real space origin of the first sharp diffraction peak and also indicated that PDs contain information on elastic response. Even in the Lennard-Jones system and Cu-Zr metallic glass, the hierarchical structures in the atomic configurations were derived in a similar way using PDs, although the glass structures and properties substantially differ from silica glass. These results suggest that the PDs provide a unified method that extracts greater depth of geometric information in amorphous solids than conventional methods. PMID:27298351
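The authors' own PD pipeline is not spelled out in this abstract, but the general computation can be sketched with an off-the-shelf persistent homology library. Below, a hedged example uses the ripser package on a random point cloud standing in for an atomic configuration; the box size, atom count, and choice of ripser are assumptions for illustration.

```python
import numpy as np
from ripser import ripser  # pip install ripser; a stand-in for the authors' PD tooling

# Toy "atomic configuration": 200 random atoms in a 10^3 box. Real inputs
# would be silica / Lennard-Jones / Cu-Zr coordinates from simulation.
rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 10.0, size=(200, 3))

diagrams = ripser(atoms, maxdim=2)["dgms"]   # H0, H1, H2 persistence diagrams
for dim, dgm in enumerate(diagrams):
    finite = dgm[np.isfinite(dgm[:, 1])]     # drop the infinite H0 bar
    if len(finite):
        lifetime = finite[:, 1] - finite[:, 0]
        print(f"H{dim}: {len(dgm)} pairs, max persistence {lifetime.max():.3f}")
```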
Lustenberger, Nadia A; Prodinger, Birgit; Dorjbal, Delgerjargal; Rubinelli, Sara; Schmitt, Klaus; Scheel-Sailer, Anke
2017-09-23
To illustrate how routinely written narrative admission and discharge reports of a rehabilitation program for eight youths with chronic neurological health conditions can be transformed to the International Classification of Functioning, Disability and Health (ICF). First, a qualitative content analysis was conducted by building meaningful units from text segments of the reports assigned to the five elements of the Rehab-Cycle®: goal; assessment; assignment; intervention; evaluation. Second, the meaningful units were linked to the ICF using the refined ICF Linking Rules. With the first step of the transformation, the emphasis of the narrative reports changed to a process-oriented interdisciplinary layout, revealing three thematic blocks of goals: mobility, self-care, and mental and social functions. The 95 unique linked ICF codes could be grouped into clinically meaningful goal-centered ICF codes. Between the two independent linkers, the agreement rate improved after the rules were complemented with additional agreements. The ICF Linking Rules can be used to compile standardized health information from narrative reports, provided the reports are structured first. The process requires time and expertise. To implement the ICF in common practice, the findings provide a starting point for reporting rehabilitation that builds upon existing practice and adheres to international standards. Implications for Rehabilitation: This study provides evidence that routinely collected health information from rehabilitation practice can be transformed to the International Classification of Functioning, Disability and Health by using the "ICF Linking Rules"; however, this requires time and expertise. The Rehab-Cycle®, including assessments, assignments, goal setting, interventions and goal evaluation, serves as a feasible framework for structuring this rehabilitation program and ensures that the complexity of local practice is appropriately reflected. The refined "ICF Linking Rules" lead to a standardized transformation process for narrative text and thus higher quality with increased transparency. As a next step, the resulting format of goal codes supplemented by goal-clarifying codes could be validated to strengthen the implementation of the International Classification of Functioning, Disability and Health in rehabilitation routine while respecting the variety of clinical practice.
45 CFR 170.204 - Functional standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
45 CFR 170.202 - Transport standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
45 CFR 170.204 - Functional standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
45 CFR 170.202 - Transport standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Public Welfare Department of Health and Human Services HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
45 CFR 170.202 - Transport standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
45 CFR 170.204 - Functional standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Public Welfare Department of Health and Human Services HEALTH INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Standards and Implementation Specifications for Health Information...
ERIC Educational Resources Information Center
Bailey, Anthony
2013-01-01
The nominal group technique (NGT) is a structured process to gather information from a group. The technique was first described in 1975 and has since become a widely-used standard to facilitate working groups. The NGT is effective for generating large numbers of creative new ideas and for group priority setting. This paper describes the process of…
XML syntax for clinical laboratory procedure manuals.
Saadawi, Gilan; Harrison, James H
2003-01-01
We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display, and effective searching across a large body of test information.
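A minimal sketch of the idea, assuming hypothetical element names (the published DTD's actual elements may differ): a DTD constrains procedure documents, and lxml validates an instance against it.

```python
from io import StringIO
from lxml import etree  # pip install lxml

# Hypothetical DTD in the spirit of the paper's procedure syntax; the
# element names below are assumptions, not the published schema.
dtd = etree.DTD(StringIO("""
<!ELEMENT procedure (title, principle, specimen, steps)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT principle (#PCDATA)>
<!ELEMENT specimen (#PCDATA)>
<!ELEMENT steps (step+)>
<!ELEMENT step (#PCDATA)>
"""))

doc = etree.fromstring(
    "<procedure>"
    "<title>Serum Glucose (Hexokinase)</title>"
    "<principle>Glucose is phosphorylated by hexokinase.</principle>"
    "<specimen>Serum or plasma; fasting specimen preferred.</specimen>"
    "<steps><step>Calibrate analyzer.</step><step>Run controls.</step></steps>"
    "</procedure>"
)
print(dtd.validate(doc))   # True: the document conforms to the DTD
```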
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-12-10
Open Energy Information (OpenEI) is an open source web platform—similar to the one used by Wikipedia—developed by the US Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to make the large amounts of energy-related data and information more easily searched, accessed, and used both by people and automated machine processes. Built utilizing the standards and practices of the Linked Open Data community, the OpenEI platform is much more robust and powerful than typical web sites and databases. As an open platform, all users can search, edit, add, and access data in OpenEI for free. The user community contributes the content and ensures its accuracy and relevance; as the community expands, so does the content's comprehensiveness and quality. The data are structured and tagged with descriptors to enable cross-linking among related data sets, advanced search functionality, and consistent, usable formatting. Data input protocols and quality standards help ensure the content is structured and described properly and derived from a credible source. Although DOE/NREL is developing OpenEI and seeding it with initial data, it is designed to become a true community model with millions of users, a large core of active contributors, and numerous sponsors.
Are electronic health records ready for genomic medicine?
Scheuner, Maren T; de Vries, Han; Kim, Benjamin; Meili, Robin C; Olmstead, Sarah H; Teleki, Stephanie
2009-07-01
The goal of this project was to assess genetic/genomic content in electronic health records. Semistructured interviews were conducted with key informants. Questions addressed documentation, organization, display, decision support and security of family history and genetic test information, and challenges and opportunities relating to integrating genetic/genomics content in electronic health records. There were 56 participants: 10 electronic health record specialists, 18 primary care clinicians, 16 medical geneticists, and 12 genetic counselors. Few clinicians felt their electronic record met their current genetic/genomic medicine needs. Barriers to integration were mostly related to problems with family history data collection, documentation, and organization. Lack of demand for genetics content and privacy concerns were also mentioned as challenges. Data elements and functionality requirements identified by clinicians include: pedigree drawing; clinical decision support for familial risk assessment and genetic testing indications; a patient portal for patient-entered data; and standards for data elements, terminology, structure, interoperability, and clinical decision support rules. Although most said that there is little impact of genetics/genomics on electronic records today, many stated genetics/genomics would be a driver of content in the next 5-10 years. Electronic health records have the potential to enable clinical integration of genetic/genomic medicine and improve delivery of personalized health care; however, structured and standardized data elements and functionality requirements are needed.
Kass, Nancy; Taylor, Holly; Ali, Joseph; Hallez, Kristina; Chaisson, Lelia
2014-01-01
Background: Informed consent is intended to ensure that individuals understand the purpose, risks, and benefits of research studies, and then can decide, voluntarily, whether to enroll. However, research suggests that consent procedures do not always lead to adequate participant understanding and may be longer and more complex than necessary. Studies also suggest some consent interventions, including enhanced consent forms and extended discussions with patients, increase understanding, yet methodologic challenges have been raised in studying consent in actual trial settings. This study aimed to examine the feasibility of testing two consent interventions in actual studies and also to measure the effectiveness of the interventions in improving understanding of trials. Methods: Participants enrolling in any of eight ongoing clinical trials ("collaborating studies") were, for the purposes of this study, sequentially assigned to one of three study arms involving different informed consent procedures (one control and two intervention). Control participants received the standard consent form and processes. Participants in the 1st intervention arm received a bulleted fact sheet providing simple summaries of all study components in addition to the standard consent form. Participants in the 2nd intervention arm received the bulleted fact sheet and standard consent materials and then also engaged with a member of the collaborating study staff in a feedback Q&A session. Following consent procedures, we administered closed- and open-ended questions to assess patient understanding, and we assessed literacy level. Descriptive statistics and Wilcoxon-Mann-Whitney and Kruskal-Wallis tests were generated to assess correlations; regression analysis determined predictors of patient understanding. Results: 144 participants enrolled. In regression analysis, participants receiving the 2nd intervention, which included a standard consent form, bulleted fact sheet, and structured question and answer session with a study staff member, had open-ended question scores that were 7.6 percentage points higher (p=.02) than participants in the control arm (standard consent only), although unadjusted comparisons did not reach statistical significance. Eleven clinical trial investigators agreed to participate and 8 trials provided sufficient data to be included, thereby demonstrating the feasibility of consent research in actual settings. Conclusions: Our study supports the hypothesis that patients receiving both bulleted fact sheets and a question and answer session have higher understanding compared to patients receiving the standard consent form and procedures alone. Fact sheets and short structured dialog are quick to administer and easy to replicate across studies and should be tested in larger samples for effectiveness. PMID:25475879
Effects of fundamentals acquisition and strategy switch on stock price dynamics
NASA Astrophysics Data System (ADS)
Wu, Songtao; He, Jianmin; Li, Shouwei
2018-02-01
An agent-based artificial stock market is developed to simulate trading behavior of investors. In the market, acquisition and employment of information about fundamentals and strategy switch are investigated to explain stock price dynamics. Investors could obtain the information from both market and neighbors resided on their social networks. Depending on information status and performances of different strategies, an informed investor may switch to the strategy of fundamentalist. This in turn affects the information acquisition process, since fundamentalists are more inclined to search and spread the information than chartists. Further investigation into price dynamics generated from three typical networks, i.e. regular lattice, small-world network and random graph, are conducted after general relation between network structures and price dynamics is revealed. In each network, integrated effects of different combinations of information efficiency and switch intensity are investigated. Results have shown that, along with increasing switch intensity, market and social information efficiency play different roles in the formation of price distortion, standard deviation and kurtosis of returns.
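The model is described only at a high level here, so the following is a deliberately stripped-down sketch of the mechanism: fundamentalist and chartist demand moves the price, and agents occasionally switch strategies. All coefficients, the switching rule, and the omission of the social network are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
n_agents, n_steps = 500, 1000
fundamental, price = 100.0, 100.0
is_fund = rng.random(n_agents) < 0.5            # True = fundamentalist
prices = [price]

for _ in range(n_steps):
    trend = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
    demand_f = 0.05 * (fundamental - price)     # expect reversion to value
    demand_c = 0.50 * trend                     # extrapolate recent trend
    excess = is_fund.sum() * demand_f + (~is_fund).sum() * demand_c
    price = max(price + excess / n_agents + rng.normal(0.0, 0.5), 1e-6)
    prices.append(price)
    # Crude switching: when mispricing is large, the fundamentalist strategy
    # looks more profitable, so switching agents adopt it (and vice versa).
    fund_better = abs(price - fundamental) > 2.0
    switching = rng.random(n_agents) < 0.01     # the "switch intensity"
    is_fund = np.where(switching, fund_better, is_fund)

returns = np.diff(np.log(prices))
print(f"std of returns: {returns.std():.4f}, excess kurtosis: {kurtosis(returns):.2f}")
```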
Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee
2012-01-01
Background and Purpose The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239
Terminology Services: Standard Terminologies to Control Health Vocabulary.
González Bernaldo de Quirós, Fernán; Otero, Carlos; Luna, Daniel
2018-04-22
Healthcare information systems should capture clinical data in a structured and preferably coded format. This is crucial for data exchange between health information systems, epidemiological analysis, quality and research, clinical decision support systems, and administrative functions, among others. Structured data entry is an obstacle to the usability of electronic health record (EHR) applications and their acceptance by physicians, who prefer to document patient EHRs using "free text". Natural language allows for rich expressiveness but is at the same time ambiguous; it has great dependence on context and uses jargon and acronyms. Although much progress has been made in knowledge and natural language processing techniques, the results are not yet satisfactory enough for the use of free text in all dimensions of clinical documentation. To address the trade-off between capturing data with free text and at the same time coding data for computer processing, numerous terminological systems for the systematic recording of clinical data have been developed. The purpose of terminology services is to represent facts that happen in the real world through database management in order to allow for semantic interoperability and computerized applications. These systems interrelate concepts of a particular domain and provide references to related terms with standard codes. In this way, standard terminologies allow the creation of a controlled medical vocabulary, making terminology services a fundamental component of health data management in the healthcare environment. The Hospital Italiano de Buenos Aires has been working on the development of its own terminology server. This work describes its experience in the field. Georg Thieme Verlag KG Stuttgart.
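A toy sketch of what a terminology service resolves at its core: several free-text entries (synonyms, jargon, acronyms) map to one controlled concept. The concept codes below are hypothetical placeholders, not real SNOMED CT or LOINC identifiers, and a real server would add hierarchy, versioning, and fuzzy matching.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Concept:
    code: str            # hypothetical placeholder code
    preferred_term: str

# Synonym index: multiple free-text forms resolve to one controlled concept.
INDEX = {
    "heart attack":        Concept("C-0001", "Myocardial infarction"),
    "mi":                  Concept("C-0001", "Myocardial infarction"),
    "high blood pressure": Concept("C-0002", "Hypertension"),
    "htn":                 Concept("C-0002", "Hypertension"),
}

def lookup(term: str) -> Optional[Concept]:
    """Normalize a free-text term and resolve it to its controlled concept."""
    return INDEX.get(term.strip().lower())

print(lookup("HTN"))           # Concept(code='C-0002', preferred_term='Hypertension')
print(lookup("Heart attack"))  # same concept as "mi" -> semantic interoperability
```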
Hillsdon, Kaye M; Kersten, Paula; Kirk, Hayden J S
2013-09-01
Objective: To explore individuals' experiences of receiving either standard care or comprehensive cardiac rehabilitation post minor stroke or transient ischaemic attack. Design: A qualitative study using semi-structured interviews, alongside a randomized controlled trial exploring the effectiveness of comprehensive cardiac rehabilitation compared with standard care. Interviews were transcribed verbatim and subjected to thematic analysis. Setting: Individuals' homes. Participants: People who have experienced a minor stroke or transient ischaemic attack and who were partaking in a secondary prevention randomized controlled trial (6-7 months post the event; 17 males, five females; mean age 67 years). Interventions: Not relevant. Main measures: Not relevant. Results: Four themes were identified: information delivery, comparing oneself with others, psychological impact, and attitudes and actions regarding risk factor reduction. Participants indicated a need for improved information delivery, specific to their own risk factors and lifestyle changes. Many experienced psychological impact as a result of their minor stroke. Participants were found to make two types of social comparison: the comparison of self to another affected by stroke, and the comparison of self to cardiac patients. Comprehensive cardiac rehabilitation was reported to have positive effects on people's motivation to exercise. Conclusions: Following a minor stroke, many individuals do not recall information given or risk factors specific to them. Downward comparison with individuals who have had a cardiovascular event led to some underplaying the significance of their minor stroke.
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing accommodates frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which has been shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target a Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
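In software terms, the query-processing kernel that the paper maps onto the FPGA is a CSR matrix-vector product. A small SciPy sketch, using made-up term weights:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy term-document index in CSR form. Rows = documents, columns = terms,
# values = term weights (e.g., tf-idf). All numbers are made up.
data    = np.array([0.5, 1.2, 0.3, 0.9, 0.7], dtype=np.float32)
indices = np.array([0, 2, 1, 2, 3])     # column (term) id for each value
indptr  = np.array([0, 2, 4, 5])        # row (document) boundaries
index = csr_matrix((data, indices, indptr), shape=(3, 4))

# A query is a sparse vector over terms; one matvec scores every document.
query = np.array([1.0, 0.0, 1.0, 0.0], dtype=np.float32)   # terms 0 and 2
scores = index @ query
print(scores)                            # [1.7, 0.9, 0.0]
print(np.argsort(scores)[::-1])          # best-matching documents first
```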
NASA Astrophysics Data System (ADS)
Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei
2016-06-01
Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is hard to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered one of the most promising ways to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community and mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken by looking down with airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than any single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between the spectral and structural features. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector by support vector machine (SVM) with a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results with three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
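Two distinctive ingredients of SSBFC are easy to sketch: the MeanStd spectral descriptor and an SVM with a histogram intersection kernel. The sketch below omits dense SIFT, codebook learning, and pooling, and runs on random toy scenes; it illustrates the two components under those assumptions, not the full classifier.

```python
import numpy as np
from sklearn.svm import SVC

def meanstd_descriptor(image: np.ndarray) -> np.ndarray:
    """First/second-order statistics per band for an (H, W, bands) image."""
    pixels = image.reshape(-1, image.shape[-1])
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def hik(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Histogram intersection kernel: K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=-1)

rng = np.random.default_rng(0)
scenes = rng.uniform(0, 1, size=(40, 16, 16, 4))   # toy 4-band image scenes
labels = rng.integers(0, 2, size=40)               # two toy scene classes
X = np.stack([meanstd_descriptor(s) for s in scenes])

clf = SVC(kernel="precomputed").fit(hik(X, X), labels)
print(clf.predict(hik(X[:5], X)))                  # kernel vs. training set
```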
Preliminary report on candidates for AGARD standard aeroelastic configurations for dynamic response
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.
1987-01-01
At the request of the Aeroelasticity Subcommittee of the AGARD Structures and Materials Panel, a survey of member countries has been conducted to seek candidates for a prospective set of standard configurations to be used for comparison of calculated and measured dynamic aeroelastic behavior with emphasis on the transonic speed range. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. Approximately two dozen people in the United States, and more than three dozen people in the other member countries, were contacted. This preliminary report presents the results of the survey and an analysis of those results along with recommendations for the initial set of standard configurations and for additional experimental work needed to fill significant gaps in the available information.
D'Agostino, Fabio; Vellone, Ercole; Tontini, Francesco; Zega, Maurizio; Alvaro, Rosaria
2012-01-01
The aim of a nursing data set is to provide useful information for assessing the level of care and the state of health of the population. Currently, both in Italy and in other countries, this data is incomplete due to the lack of structured nursing documentation, making it indispensable to develop a Nursing Minimum Data Set (NMDS) using standard nursing language to evaluate care, costs and health requirements. The aim of the project described here is to create a computer system using standard nursing terms, with dedicated software that will aid the decision-making process and provide the relative documentation. This will make it possible to monitor nursing activity and costs and their impact on patients' health; adequate training and involvement of nursing staff will play a fundamental role.
Melanins and melanogenesis: methods, standards, protocols.
d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke
2013-09-01
Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Guidelines for Datacenter Energy Information System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Reshma; Mahdavi, Rod; Mathew, Paul
2013-12-01
The purpose of this document is to provide structured guidance to data center owners, operators, and designers, to empower them with information on how to specify and procure data center energy information systems (EIS) for managing the energy utilization of their data centers. Data centers are typically energy-intensive facilities that can consume up to 100 times more energy per unit area than a standard office building (FEMP 2013). This guidance facilitates "data-driven decision making," which will be enabled by following the approach outlined in the guide. This will bring speed, clarity, and objectivity to any energy or asset management decisions because of the ability to monitor and track an energy management project's performance.
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
This is the third of five volumes on Information System Life-Cycle and Documentation Standards which present a well organized, easily used standard for providing technical information needed for developing information systems, components, and related processes. This volume states the Software Management and Assurance Program documentation standard for a product specification document and for data item descriptions. The framework can be applied to any NASA information system, software, hardware, operational procedures components, and related processes.
Information system life-cycle and documentation standards, volume 1
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
The Software Management and Assurance Program (SMAP) Information System Life-Cycle and Documentation Standards Document describes the Version 4 standard information system life-cycle in terms of processes, products, and reviews. The description of the products includes detailed documentation standards. The standards in this document set can be applied to the life-cycle, i.e., to each phase in the system's development, and to the documentation of all NASA information systems. This provides consistency across the agency as well as visibility into the completeness of the information recorded. An information system is software-intensive, but consists of any combination of software, hardware, and operational procedures required to process, store, or transmit data. This document defines a standard life-cycle model and content for associated documentation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... Information Collection: Federal Labor Standards Questionnaire(s); Complaint Intake Form AGENCY: Office of... Information Collection Title of Information Collection: Federal Labor Standards Questionnaire; Complaint... Questionnaires, will be used by HUD and agencies administering HUD programs to collect information from laborers...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-19
...; Information Collection; Cost Accounting Standards Administration AGENCY: Department of Defense (DOD), General... collection requirement concerning cost accounting standards administration. Public comments are particularly... Information Collection 9000- 0129, Cost Accounting Standards Administration by any of the following methods...
Boerner, Jana; Godenschwege, Tanja Angela
2010-09-01
The Drosophila standard brain has been a useful tool that provides information about position and size of different brain structures within a wild-type brain and allows the comparison of imaging data that were collected from individual preparations. Therefore the standard can be used to reveal and visualize differences of brain regions between wild-type and mutant brains and can provide spatial description of single neurons within the nervous system. Recently the standard brain was complemented by the generation of a ventral nerve cord (VNC) standard. Here the authors have registered the major components of a simple neuronal circuit, the Giant Fiber System (GFS), into this standard. The authors show that they can also virtually reconstruct the well-characterized synaptic contact of the Giant Fiber with its motorneuronal target when they register the individual neurons from different preparations into the VNC standard. In addition to the potential application for the standard thorax in neuronal circuit reconstruction, the authors show that it is a useful tool for in-depth analysis of mutant morphology of single neurons. The authors find quantitative and qualitative differences when they compared the Giant Fibers of two different neuroglian alleles, nrg(849) and nrg(G00305), using the averaged wild-type GFS in the standard VNC as a reference.
NASA Astrophysics Data System (ADS)
Booth, N. L.; Brodaric, B.; Lucido, J. M.; Kuo, I.; Boisvert, E.; Cunningham, W. L.
2011-12-01
The need for a national groundwater monitoring network within the United States is profound and has been recognized by organizations outside government as a major data gap for managing ground-water resources. Our country's communities, industries, agriculture, energy production and critical ecosystems rely on water being available in adequate quantity and suitable quality. To meet this need the Subcommittee on Ground Water, established by the Federal Advisory Committee on Water Information, created a National Ground Water Monitoring Network (NGWMN) envisioned as a voluntary, integrated system of data collection, management and reporting that will provide the data needed to address present and future ground-water management questions raised by Congress, Federal, State and Tribal agencies and the public. The NGWMN Data Portal is the means by which policy makers, academics and the public will be able to access ground water data through one seamless web-based application from disparate data sources. Data systems in the United States exist at many organizational and geographic levels and differing vocabulary and data structures have prevented data sharing and reuse. The data portal will facilitate the retrieval of and access to groundwater data on an as-needed basis from multiple, dispersed data repositories allowing the data to continue to be housed and managed by the data provider while being accessible for the purposes of the national monitoring network. This work leverages Open Geospatial Consortium (OGC) data exchange standards and information models. To advance these standards for supporting the exchange of ground water information, an OGC Interoperability Experiment was organized among international participants from government, academia and the private sector. The experiment focused on ground water data exchange across the U.S. / Canadian border. WaterML2.0, an evolving international standard for water observations, encodes ground water levels and is exchanged using the OGC Sensor Observation Service (SOS) standard. Ground Water Markup Language (GWML) encodes well log, lithology and construction information and is exchanged using the OGC Web Feature Service (WFS) standard. Within the NGWMN Data Portal, data exchange between distributed data provider repositories is achieved through the use of these web services and a central mediation hub, which performs both format (syntactic) and nomenclature (semantic) mediation, conforming heterogeneous inputs into common standards-based outputs. Through these common standards, interoperability between the U.S. NGWMN and Canada's Groundwater Information Network (GIN) is achieved, advancing a ground water virtual observatory across North America.
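For flavor, a GetObservation request to an SOS 2.0 endpoint of the kind the portal's mediation hub issues can be assembled from standard KVP parameters. The endpoint URL, well identifier, observed-property name, and response format below are placeholders, not real NGWMN service values.

```python
import urllib.parse

# Assemble an OGC SOS 2.0 GetObservation request (KVP binding).
# All identifiers below are hypothetical placeholders.
endpoint = "https://example.org/sos"                 # placeholder endpoint
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "featureOfInterest": "well:EXAMPLE-000000",      # placeholder well id
    "observedProperty": "GroundWaterLevel",          # placeholder property
    "responseFormat": "http://www.opengis.net/waterml/2.0",  # assumed format id
}
url = endpoint + "?" + urllib.parse.urlencode(params)
print(url)   # fetch with any HTTP client, then parse the WaterML 2.0 XML
```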
Axiope tools for data management and data sharing.
Goddard, Nigel H; Cannon, Robert C; Howell, Fred W
2003-01-01
Many areas of biological research generate large volumes of very diverse data. Managing this data can be a difficult and time-consuming process, particularly in an academic environment where there are very limited resources for IT support staff such as database administrators. The most economical and efficient solutions are those that enable scientists with minimal IT expertise to control and operate their own desktop systems. Axiope provides one such solution, Catalyzer, which acts as flexible cataloging system for creating structured records describing digital resources. The user is able specify both the content and structure of the information included in the catalog. Information and resources can be shared by a variety of means, including automatically generated sets of web pages. Federation and integration of this information, where needed, is handled by Axiope's Mercat server. Where there is a need for standardization or compatibility of the structures usedby different researchers this canbe achieved later by applying user-defined mappings in Mercat. In this way, large-scale data sharing can be achieved without imposing unnecessary constraints or interfering with the way in which individual scientists choose to record and catalog their work. We summarize the key technical issues involved in scientific data management and data sharing, describe the main features and functionality of Axiope Catalyzer and Axiope Mercat, and discuss future directions and requirements for an information infrastructure to support large-scale data sharing and scientific collaboration.
Impact of Operating Context on the Use of Structure in Air Traffic Controller Cognitive Processes
NASA Technical Reports Server (NTRS)
Davison, Hayley J.; Histon, Jonathan M.; Ragnarsdottir, Margret Dora; Major, Laura M.; Hansman, R. John
2004-01-01
This paper investigates the influence of structure on air traffic controllers cognitive processes in the TRACON, En Route, and Oceanic environments. Radar data and voice command analyses were conducted to support hypotheses generated through observations and interviews conducted at the various facilities. Three general types of structure-based abstractions (standard flows, groupings, and critical points) have been identified as being used in each context, though the details of their application varied in accordance with the constraints of the particular operational environment. Projection emerged as a key cognitive process aided by the structure-based abstractions, and there appears to be a significant difference between how time-based versus spatial-based projection is performed by controllers. It is recommended that consideration be given to the value provided by the structure-based abstractions to the controller as well as to maintain consistency between the type (time or spatial) of information support provided to the controller.
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Möhring, Simon; Budler, Jasmin; Weigand, Maximilian; Kemna, Andreas
2017-04-01
Rainfall-triggered landslides are a latent danger in almost any part of the world. Due to climate change, heavy rainfall events may occur more often, increasing the risk of landslides. With pore pressure as the mechanical trigger, knowledge of the water content distribution in the ground is essential for hazard analysis during monitoring of potentially dangerous rainfall events. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil physical relationships between bulk electrical resistivity and water content. However, often more dominant electrical contrasts due to lithological structures outplay these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change smoothly in space. This applies in many scenarios, for example during infiltration of water without a clear saturation front. Sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide-prone hillslopes, on the other hand, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach to improve water content estimation in landslide-prone hills by including a priori information about lithological layers. Here the standard smoothness constraint is reduced along layer boundaries identified using seismic data or other additional sources. This approach significantly improves water content estimates, because in landslide-prone hills a layer of rather high hydraulic conductivity is often followed by a hydraulic barrier such as clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer typically also produce a sharp contrast in electrical resistivity, assuming that the surface conductivity of the soil does not change to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and estimated water content distribution. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally constrained inversion approach.
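The structural constraint can be illustrated with a toy linear inversion: a first-difference smoothness regularizer whose weight is relaxed at one known layer boundary, letting the inverted model keep a sharp jump there. The forward operator and all values below are synthetic stand-ins under stated assumptions, not an ERT forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
true_model = np.where(np.arange(n) < 25, 1.0, 3.0)   # sharp boundary at cell 25
G = rng.normal(size=(60, n)) / np.sqrt(n)            # toy linear "forward" map
d = G @ true_model + rng.normal(0, 0.01, size=60)    # noisy synthetic data

def invert(boundary=None, lam=1.0):
    """Regularized least squares; optionally relax smoothing at one boundary."""
    W = np.eye(n - 1)                      # weights on first differences
    if boundary is not None:
        W[boundary - 1, boundary - 1] = 0.01   # allow a jump across the boundary
    D = np.diff(np.eye(n), axis=0)         # (n-1, n) first-difference operator
    A = G.T @ G + lam * D.T @ W.T @ W @ D
    return np.linalg.solve(A, G.T @ d)

smooth = invert()
constrained = invert(boundary=25)
print("jump recovered, smooth:     ", smooth[25] - smooth[24])
print("jump recovered, constrained:", constrained[25] - constrained[24])
```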
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
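A generic numerical sketch (not CDM-specific) of the two estimators compared: given per-observation score vectors and the observed information at the MLE, the observed-information covariance is A^{-1} and the sandwich is A^{-1} B A^{-1}, with B the empirical cross-product matrix. The toy inputs below are made up.

```python
import numpy as np

def covariance_estimates(scores: np.ndarray, observed_info: np.ndarray):
    """scores: (n_obs, n_params) per-observation score vectors at the MLE.
    observed_info: (n_params, n_params) negative Hessian of the log-likelihood."""
    A_inv = np.linalg.inv(observed_info)     # inverse observed information
    B = scores.T @ scores                    # empirical cross-product matrix
    sandwich = A_inv @ B @ A_inv             # robust to model misspecification
    return A_inv, sandwich

# Toy inputs standing in for quantities from a fitted CDM.
rng = np.random.default_rng(0)
scores = rng.normal(size=(500, 3)) * 0.1
observed_info = scores.T @ scores + np.eye(3) * 0.5    # toy SPD matrix

A_inv, sandwich = covariance_estimates(scores, observed_info)
print("SEs (observed info):", np.sqrt(np.diag(A_inv)))
print("SEs (sandwich):     ", np.sqrt(np.diag(sandwich)))
```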
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source systems; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
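The vocabulary-mapping step at the heart of such a transformation can be sketched as a join against a concept table; unmapped records are what the reported 90-99% mapping rates quantify. All codes and concept ids below are made-up placeholders, not real OMOP vocabulary entries.

```python
import pandas as pd

# Placeholder vocabulary table: source code -> standard concept id.
vocab = pd.DataFrame({
    "source_code": ["ICD9:250.00", "ICD9:401.9", "NDC:0000-0000"],
    "concept_id":  [1001, 1002, 2001],
})

# Placeholder source records to be transformed into the CDM.
source = pd.DataFrame({
    "patient_id":  [1, 1, 2, 3],
    "source_code": ["ICD9:250.00", "ICD9:401.9", "ICD9:999.X", "NDC:0000-0000"],
})

cdm = source.merge(vocab, on="source_code", how="left")
mapped = cdm["concept_id"].notna()       # unmapped rows = information loss
print(cdm)
print(f"mapping rate: {mapped.mean():.0%}")
```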
Soriano, Enrique; Plazzotta, Fernando; Campos, Fernando; Kaminker, Diego; Cancio, Alfredo; Aguilera Díaz, Jerónimo; Luna, Daniel; Seehaus, Alberto; Carcía Mónaco, Ricardo; de Quirós, Fernán González Bernaldo
2010-01-01
Every piece of healthcare information should be fully integrated and transparent within the electronic health record (EHR). The Hospital Italiano de Buenos Aires initiated the Multimedia Health Record project with the goal of achieving this integration while maintaining a holistic view of the current structure of the Hospital's systems, in which the axes remain the patient and the longitudinal history, beginning with the Computed Tomography section. The DICOM standard was implemented for communication and image storage, and a PACS was acquired. It was also necessary to adapt our generic reporting system to provide the functionality of a commercial RIS. The Computed Tomography (CT) scanners of our hospital were easily integrated into the DICOM network, and all the CT scans generated by our radiology service were stored in the PACS, reported using the Structured Reporting System (we installed diagnostic terminals equipped with 3 monitors), and displayed in the EHR at any point of HIBA's healthcare network.
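Reading a DICOM object as it arrives from a scanner is a short exercise with the pydicom library; a hedged sketch (the file path is a placeholder, and pydicom is our choice of tool, not necessarily the hospital's):

```python
import pydicom  # pip install pydicom

# Read one CT object of the kind stored in the PACS and shown in the EHR.
ds = pydicom.dcmread("ct_slice.dcm")       # placeholder path
print(ds.PatientID, ds.Modality, ds.StudyDate)   # standard DICOM attributes
print(ds.pixel_array.shape)                # image matrix for the viewer
```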
Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.
Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai
2016-03-01
Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. In dealing with those datasets, the standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
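The Tucker factorization underlying the model can be sketched with a plain truncated HOSVD in NumPy; the paper's contribution, tying one mode to cluster membership and optimizing it on the multinomial manifold, is omitted here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor per mode, then the core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T.copy()
    for mode, U in enumerate(factors):
        core = np.moveaxis(core, mode, 0)       # bring mode to front
        core = np.tensordot(U.T, core, axes=1)  # project onto factor columns
        core = np.moveaxis(core, 0, mode)       # restore axis order
    return core, factors

T = np.random.default_rng(0).normal(size=(8, 9, 10))
core, factors = hosvd(T, (3, 3, 3))
print(core.shape, [U.shape for U in factors])   # (3, 3, 3) [(8, 3), (9, 3), (10, 3)]
```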
Goldacre, Ben; Gray, Jonathan
2016-04-08
OpenTrials is a collaborative and open database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial: registry entries; links, abstracts, or texts of academic journal papers; portions of regulatory documents describing individual trials; structured data on methods and results extracted by systematic reviewers or other researchers; clinical study reports; and additional documents such as blank consent forms, blank case report forms, and protocols. The intention is to create an open, freely re-usable index of all such information and to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive up standards around open data in evidence-based medicine. The project has phase I funding. This will allow us to create a practical data schema and populate the database initially through web-scraping, basic record linkage techniques, crowd-sourced curation around selected drug areas, and import of existing sources of structured data and documents. It will also allow us to create user-friendly web interfaces onto the data and conduct user engagement workshops to optimise the database and interface designs. Where other projects have set out to manually and perfectly curate a narrow range of information on a smaller number of trials, we aim to use a broader range of techniques and attempt to match a very large quantity of information on all trials. We are currently seeking feedback and additional sources of structured data.
Booth, Andrew
2016-05-04
Qualitative systematic reviews or qualitative evidence syntheses (QES) are increasingly recognised as a way to enhance the value of systematic reviews (SRs) of clinical trials. They can explain the mechanisms by which interventions, evaluated within trials, might achieve their effect. They can investigate differences in effects between different population groups. They can identify which outcomes are most important to patients, carers, health professionals and other stakeholders. QES can explore the impact of acceptance, feasibility, meaningfulness and implementation-related factors within a real-world setting and thus contribute to the design and further refinement of future interventions. Producing valid, reliable and meaningful QES requires systematic identification of relevant qualitative evidence. Although the methodologies of QES, including methods for information retrieval, are well-documented, little empirical evidence exists to inform their conduct and reporting. This structured methodological overview examines papers on searching for qualitative research identified from the Cochrane Qualitative and Implementation Methods Group Methodology Register and from citation searches of 15 key papers. A single reviewer reviewed 1299 references. Papers reporting methodological guidance, use of innovative methodologies or empirical studies of retrieval methods were categorised under eight topical headings: overviews and methodological guidance, sampling, sources, structured questions, search procedures, search strategies and filters, supplementary strategies and standards. This structured overview presents a contemporaneous view of information retrieval for qualitative research and identifies a future research agenda. The review concludes that current practice in information retrieval for qualitative research rests on a weak empirical evidence base. A trend towards improved transparency of search methods and further evaluation of key search procedures offers the prospect of rapid development of search methods.
Torres, Craig; Jones, Rachael; Boelter, Fred; Poole, James; Dell, Linda; Harper, Paul
2014-01-01
Bayesian Decision Analysis (BDA) uses Bayesian statistics to integrate multiple types of exposure information and classify exposures within the exposure rating categorization scheme promoted in American Industrial Hygiene Association (AIHA) publications. Prior distributions for BDA may be developed from existing monitoring data, mathematical models, or professional judgment. Professional judgments may misclassify exposures. We suggest that a structured qualitative risk assessment (QLRA) method can provide consistency and transparency in professional judgments. In this analysis, we use a structured QLRA method to define prior distributions (priors) for BDA. We applied this approach at three semiconductor facilities in South Korea, and present an evaluation of the performance of structured QLRA for determination of priors, and an evaluation of occupational exposures using BDA. Specifically, the structured QLRA was applied to chemical agents in similar exposure groups (SEGs) to identify provisional risk ratings. Standard priors were developed for each risk rating before review of historical monitoring data. Newly collected monitoring data were used to update priors informed by QLRA or historical monitoring data, and to determine the posterior distribution. Exposure ratings were defined by the rating category with the highest posterior probability, i.e., the most likely rating. We found the most likely exposure rating in the QLRA-informed priors to be consistent with historical and newly collected monitoring data, and the posterior exposure ratings developed with QLRA-informed priors to be equal to or greater than those developed with data-informed priors in 94% of comparisons. Overall, exposures at these facilities are consistent with well-controlled work environments. That is, the 95th percentile of the exposure distribution is ≤50% of the occupational exposure limit (OEL) for all chemical-SEG combinations evaluated, and ≤10% of the limit for 94% of the chemical-SEG combinations evaluated.
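To make the updating step concrete, here is a minimal Python sketch of BDA for exposure rating, assuming lognormal exposures with a fixed geometric standard deviation. The category bounds follow the AIHA-style scheme referenced in the abstract (rating by the 95th percentile as a fraction of the OEL); the prior weights, GSD, and measurement values are illustrative stand-ins for the study's QLRA-informed priors and monitoring data, not values from the paper.

# Illustrative BDA sketch: discrete prior over AIHA-style exposure rating
# categories, updated with monitoring data under a lognormal model with
# fixed GSD. Priors, GSD, and data are hypothetical, not from the study.
import numpy as np
from scipy import stats

OEL = 1.0            # occupational exposure limit (normalized)
GSD = 2.5            # assumed geometric standard deviation
Z95 = stats.norm.ppf(0.95)

# Rating categories by X95/OEL: <1%, 1-10%, 10-50%, 50-100%, >100%
bounds = np.array([0.0, 0.01, 0.10, 0.50, 1.00, 10.0]) * OEL
qlra_prior = np.array([0.05, 0.15, 0.50, 0.25, 0.05])   # illustrative QLRA prior

def posterior_ratings(samples, prior):
    """Update category probabilities with monitoring data (lognormal model)."""
    log_x = np.log(samples)
    sigma = np.log(GSD)
    post = np.zeros(len(prior))
    for k in range(len(prior)):
        # Uniform grid over candidate 95th percentiles within the category.
        x95_grid = np.linspace(bounds[k] + 1e-6, bounds[k + 1], 200)
        gm_grid = x95_grid / np.exp(Z95 * sigma)          # GM implied by X95
        # Average data likelihood over the candidate geometric means.
        like = np.array([
            np.exp(stats.norm.logpdf(log_x, loc=np.log(gm), scale=sigma).sum())
            for gm in gm_grid
        ]).mean()
        post[k] = prior[k] * like
    return post / post.sum()

# Example: a handful of measurements well below the OEL.
data = np.array([0.03, 0.08, 0.05, 0.11])
post = posterior_ratings(data, qlra_prior)
print(post.round(3), "-> most likely exposure rating:", int(np.argmax(post)))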