Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)
NASA Astrophysics Data System (ADS)
Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai
We argue for the design of “Privacy-preserving Accountable Accuracy Management Systems (PAAMS)”. The design of such systems recognizes from the outset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the tradeoffs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements: first, a model to link unstructured information, such as that found in email, image and document repositories, with structured information, such as that in traditional databases; second, a model for accuracy management and entity disambiguation that proactively prevents, detects and traces errors in information bases; and third, a model to provide privacy-governed operation as accountability and accuracy are managed.
Context Oriented Information Integration
NASA Astrophysics Data System (ADS)
Mohania, Mukesh; Bhide, Manish; Roy, Prasan; Chakaravarthy, Venkatesan T.; Gupta, Himanshu
Faced with growing knowledge management needs, enterprises are increasingly realizing the importance of seamlessly integrating critical business information distributed across both structured and unstructured data sources. Researchers have studied this problem, but many obstacles still stand in the way of widespread practical adoption. One of the key problems is the absence of schema in unstructured text. In this paper we present a new paradigm for integrating information that overcomes this problem: Context Oriented Information Integration. The goal is to integrate unstructured data with the structured data present in the enterprise and use the extracted information to generate actionable insights. We present two techniques that enable context oriented information integration and show how they can be used to solve real-world problems.
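One way to picture context-oriented integration is to let the structured side supply the schema that raw text lacks: mentions in free text are linked back to rows in an enterprise table. The sketch below is illustrative only; the table contents, field names, and matching strategy are invented for this example, and real systems use far richer entity matching than exact name lookup.

```python
import re

# Hypothetical structured data: a customer table keyed by account ID.
customers = {
    "A-1001": {"name": "Acme Corp", "segment": "enterprise"},
    "A-1002": {"name": "Globex", "segment": "smb"},
}

def link_mentions(text, table):
    """Link free-text mentions to structured rows by matching customer names.

    The structured table supplies the 'schema' that the raw text lacks:
    each match ties an unstructured mention to a typed record.
    """
    links = []
    for account_id, row in table.items():
        if re.search(re.escape(row["name"]), text, re.IGNORECASE):
            links.append((account_id, row["segment"]))
    return links

# An unstructured complaint email is tied back to structured account records.
email = "Acme Corp reported repeated outages; Globex asked about renewal terms."
print(link_mentions(email, customers))
```

Once mentions are linked, downstream analytics can aggregate text-derived signals (complaints, sentiment) by segment, region, or any other structured attribute.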
Wooley, Dennis S; Kinner, Tracy J
2016-11-01
The purpose was to compare perceived self-management practices of adult patients with type 2 diabetes after completing an American Diabetes Association (ADA) certified diabetes self-management education (DSME) program versus unstructured, individualized, nurse practitioner-led DSME. Demographic questions and the Self-Care Inventory-Revised (SCI-R) were given to two convenience sample patient groups: a formal DSME program group and a group within a clinical setting who received informal, unstructured individual education during patient encounters. A t-test was performed between the formal ADA-certified education sample's and the informal sample's individual SCI-R scores, and a second t-test between the two samples' mean SCI-R scores. Neither the individual scores nor the mean scores showed a statistically significant difference between the samples. The study results suggest that no DSME setting or instructional approach is superior. Copyright © 2016 Elsevier Inc. All rights reserved.
Knowledge representation and management: transforming textual information into useful knowledge.
Rassinoux, A-M
2010-01-01
To summarize current outstanding research in the field of knowledge representation and management. Synopsis of the articles selected for the IMIA Yearbook 2010. Four interesting papers dealing with structured knowledge have been selected for the section on knowledge representation and management. Combining the newest techniques in computational linguistics and natural language processing with the latest methods in statistical data analysis, machine learning and text mining has proved effective for turning unstructured textual information into meaningful knowledge. Three of the four selected papers corroborate this approach and depict various experiments conducted to extract meaningful knowledge from unstructured free text, such as extracting cancer disease characteristics from pathology reports, extracting protein-protein interactions from biomedical papers, and extracting knowledge to support hypothesis generation in molecular biology from the Medline literature. Finally, the last paper addresses formally representing and structuring information within clinical terminologies in order to render such information easily available and shareable among the health informatics community. Delivering common, powerful tools able to automatically extract meaningful information from the huge amount of electronically stored unstructured free text is an essential step towards promoting sharing and reusability across applications, domains, and institutions, thus contributing to building capacity worldwide.
Challenges in Managing Information Extraction
ERIC Educational Resources Information Center
Shen, Warren H.
2009-01-01
This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…
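A toy version of one task named above, extracting street addresses from email text, can be written with a hand-crafted pattern. This is only a sketch of the IE problem statement: production extractors are typically learned models, and the pattern and sample text here are invented.

```python
import re

# Matches strings like "42 Elm St": a house number, one or more
# capitalized street-name words, and a common street suffix.
ADDRESS = re.compile(r"\b\d{1,5}\s+(?:[A-Z][a-z]+\s)+(?:St|Ave|Rd|Blvd)\b")

def extract_addresses(text):
    """Return all street-address-like substrings found in the text."""
    return ADDRESS.findall(text)

email = "Ship to 42 Elm St by Friday; the old office at 1300 Harbor Blvd is closed."
print(extract_addresses(email))
```

The gap between this sketch and a real IE system (handling abbreviations, apartment numbers, multilingual text, ambiguous spans) is exactly the difficulty the dissertation addresses.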
Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction
ERIC Educational Resources Information Center
Sun, Chong
2012-01-01
More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…
The Strategic Association between Enterprise Content Management and Decision Support
ERIC Educational Resources Information Center
Alalwan, Jaffar Ahmad
2012-01-01
To deal with the increasing information overload and with the structured and unstructured data complexity, many organizations have implemented enterprise content management (ECM) systems. Published research on ECM so far is very limited and reports on ECM implementations have been scarce until recently (Tyrvainen et al. 2006). However, the little…
36 CFR 1236.24 - What are the additional requirements for managing unstructured electronic records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements for managing unstructured electronic records? 1236.24 Section 1236.24 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT ELECTRONIC RECORDS MANAGEMENT Additional Requirements for Electronic Records § 1236.24 What are the additional requirements for managing...
36 CFR 1236.24 - What are the additional requirements for managing unstructured electronic records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements for managing unstructured electronic records? 1236.24 Section 1236.24 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT ELECTRONIC RECORDS MANAGEMENT Additional Requirements for Electronic Records § 1236.24 What are the additional requirements for managing...
36 CFR 1236.24 - What are the additional requirements for managing unstructured electronic records?
Code of Federal Regulations, 2011 CFR
2011-07-01
... requirements for managing unstructured electronic records? 1236.24 Section 1236.24 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT ELECTRONIC RECORDS MANAGEMENT Additional Requirements for Electronic Records § 1236.24 What are the additional requirements for managing...
36 CFR 1236.24 - What are the additional requirements for managing unstructured electronic records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements for managing unstructured electronic records? 1236.24 Section 1236.24 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT ELECTRONIC RECORDS MANAGEMENT Additional Requirements for Electronic Records § 1236.24 What are the additional requirements for managing...
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoecker, Nora Kathleen
2014-03-01
A Systems Analysis Group has existed at Sandia National Laboratories since at least the mid-1950s. Much of the group's work output (reports, briefing documents, and other materials) has been retained, along with large numbers of related documents. Over time the collection has grown to hundreds of thousands of unstructured documents in many formats, contained in one or more of several different shared drives or SharePoint sites, with perhaps five percent of the collection still existing only in print. This presents a challenge: how can the group effectively find, manage, and build on information contained somewhere within such a large set of unstructured documents? In response, a project was initiated to identify tools that would be able to meet this challenge. This report documents the results found and recommendations made as of August 2013.
Code of Federal Regulations, 2013 CFR
2013-07-01
... requirements for managing unstructured electronic records? 1236.24 Section 1236.24 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT ELECTRONIC RECORDS MANAGEMENT Additional Requirements for Electronic Records § 1236.24 What are the additional requirements for...
ERIC Educational Resources Information Center
Tirgari, Vesal
2010-01-01
The phenomenological study explored the lived experiences and perceptions of a purposive sample of 20 IT professionals (managers, engineers, administrators, and analysts) in the state of Virginia, Texas, and Washington DC. The focus of this research study was to learn the perceptions of IT professionals who are or once were in a decision-making…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parfouru, S.; De-Beler, N.
2012-07-01
In the context of a project that is designing innovative ICT-based solutions for the organizational concept of outage management, we focus on the informational process of the OCR (Outage Control Room) underlying the execution of outages. Informational processes are based on structured and unstructured documents that play a key role in the collaborative processes and management of the outage. We track structured and unstructured documents, electronic or not, from creation to sharing. Our analysis allows us to consider that the individual traces produced by a participant with a specific role can be multi-purpose and support sharing between participants without duplicating work. The ultimate goal is to be able to generate an outage historian, not focused solely on highly structured information, which could be useful for improving the continuity of information between participants. We study the implementation of this approach through web technologies and social media tools. We also investigate the issue of data access through interactive visualization timelines coupled with other modalities to assist users in navigating and exploring the proposed historian. (authors)
Integration of Text- and Data-Mining Technologies for Use in Banking Applications
NASA Astrophysics Data System (ADS)
Maslankowski, Jacek
Unstructured data, most of it in the form of text files, typically accounts for 85% of an organization's knowledge stores, but it is not always easy to find, access, analyze or use (Robb 2004). That is why solutions combining text and data mining, known as duo mining, are important: they improve management based on the knowledge an organization already owns. Data mining deals with structured data, usually drawn from data warehouses. Text mining, sometimes called web mining, looks for patterns in unstructured data such as memos, documents and the Web. Integrating text-based information with structured data enriches predictive modeling capabilities and provides new stores of insightful and valuable information for driving business and research initiatives forward.
Negotiating the Digital Library: Document Delivery.
ERIC Educational Resources Information Center
Jacobs, Neil; Morris, Anne
1999-01-01
The eLib-funded FIDDO (Focused Investigation of Document Delivery Options) project provides library managers and others with information to support policy decisions. Senior librarians were interviewed about the future of document delivery, and interviews were analyzed with the support of NUD*IST (Nonnumerical Unstructured Data by Indexing, Searching and…
Tree-oriented interactive processing with an application to theorem-proving, appendix E
NASA Technical Reports Server (NTRS)
Hammerslag, David; Kamin, Samuel N.; Campbell, Roy H.
1985-01-01
This paper describes the concept of unstructured structure editing and ted, an editor for unstructured trees. Ted is used to manipulate hierarchies of information in an unrestricted manner. The tool was implemented and applied to the problem of organizing formal proofs. As a proof management tool, it maintains the validity of a proof and its constituent lemmas independently of the methods used to validate the proof. It includes an adaptable interface which may be used to invoke theorem provers and other aids to proof construction. Using ted, a user may construct, maintain, and verify formal proofs using a variety of theorem provers, proof checkers, and formatters.
Electronic document management systems: an overview.
Kohn, Deborah
2002-08-01
For over a decade, most health care information technology (IT) professionals erroneously learned that document imaging, which is one of the many component technologies of an electronic document management system (EDMS), is the only technology of an EDMS. In addition, many health care IT professionals erroneously believed that EDMSs have either a limited role or no place in IT environments. As a result, most health care IT professionals do not understand documents and unstructured data and their value as structured data partners in most aspects of transaction and information processing systems.
Polnaszek, Brock; Gilmore-Bykovskyi, Andrea; Hovanes, Melissa; Roiland, Rachel; Ferguson, Patrick; Brown, Roger; Kind, Amy J H
2016-10-01
Unstructured data encountered during retrospective electronic medical record (EMR) abstraction has routinely been identified as challenging to reliably abstract, as these data are often recorded as free text, without limitations to format or structure. There is increased interest in reliably abstracting this type of data given its prominent role in care coordination and communication, yet limited methodological guidance exists. As standard abstraction approaches resulted in substandard data reliability for unstructured data elements collected as part of a multisite, retrospective EMR study of hospital discharge communication quality, our goal was to develop, apply and examine the utility of a phase-based approach to reliably abstract unstructured data. This approach is examined using the specific example of discharge communication for warfarin management. We adopted a "fit-for-use" framework to guide the development and evaluation of abstraction methods using a 4-step, phase-based approach including (1) team building; (2) identification of challenges; (3) adaptation of abstraction methods; and (4) systematic data quality monitoring. Unstructured data elements were the focus of this study, including elements communicating steps in warfarin management (eg, warfarin initiation) and medical follow-up (eg, timeframe for follow-up). After implementation of the phase-based approach, interrater reliability for all unstructured data elements demonstrated κ's of ≥0.89, an average increase of +0.25 for each unstructured data element. As compared with standard abstraction methodologies, this phase-based approach was more time intensive, but did markedly increase abstraction reliability for unstructured data elements within multisite EMR documentation.
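The interrater agreement this abstract reports is Cohen's κ, a chance-corrected agreement statistic. A minimal pure-Python computation, using invented rating data for two hypothetical chart abstractors, looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's own
    marginal label frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two abstractors coding "warfarin initiation documented?" on 10 charts
# (labels are invented for illustration).
a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))
```

Note how κ penalizes agreement that chance alone would produce: the raters above agree on 9 of 10 charts, yet κ is well below 0.9 because "yes" dominates both raters' labels.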
Polnaszek, Brock; Gilmore-Bykovskyi, Andrea; Hovanes, Melissa; Roiland, Rachel; Ferguson, Patrick; Brown, Roger; Kind, Amy JH
2014-01-01
Background Unstructured data encountered during retrospective electronic medical record (EMR) abstraction has routinely been identified as challenging to reliably abstract, as this data is often recorded as free text, without limitations to format or structure. There is increased interest in reliably abstracting this type of data given its prominent role in care coordination and communication, yet limited methodological guidance exists. Objective As standard abstraction approaches resulted in sub-standard data reliability for unstructured data elements collected as part of a multi-site, retrospective EMR study of hospital discharge communication quality, our goal was to develop, apply and examine the utility of a phase-based approach to reliably abstract unstructured data. This approach is examined using the specific example of discharge communication for warfarin management. Research Design We adopted a “fit-for-use” framework to guide the development and evaluation of abstraction methods using a four step, phase-based approach including (1) team building, (2) identification of challenges, (3) adaptation of abstraction methods, and (4) systematic data quality monitoring. Measures Unstructured data elements were the focus of this study, including elements communicating steps in warfarin management (e.g., warfarin initiation) and medical follow-up (e.g., timeframe for follow-up). Results After implementation of the phase-based approach, inter-rater reliability for all unstructured data elements demonstrated kappas of ≥ 0.89 -- an average increase of + 0.25 for each unstructured data element. Conclusions As compared to standard abstraction methodologies, this phase-based approach was more time intensive, but did markedly increase abstraction reliability for unstructured data elements within multi-site EMR documentation. PMID:27624585
ERIC Educational Resources Information Center
Zwerdling, Daniel
Beginning as an informal, unstructured information interchange among 100 union, worker, and management representatives from seventeen public and private sector organizations operationally involved in quality of work life activities, a 1977 conference evolved into the first annual meeting of the American Quality of Work Life Association.…
Security Aspects of Computer Supported Collaborative Work
1993-09-01
unstructured tasks at one end and prescriptive tasks at the other. Unstructured tasks are those requiring creative input from a number of users and...collaborative technology begun to mature, it has begun to outstrip prevailing management attitudes. One barrier to telecommuting is the perception that
EXTENSIBLE DATABASE FRAMEWORK FOR MANAGEMENT OF UNSTRUCTURED AND SEMI-STRUCTURED DOCUMENTS
NASA Technical Reports Server (NTRS)
Gawdiak, Yuri O. (Inventor); La, Tracy T. (Inventor); Lin, Shu-Chun Y. (Inventor); Maluf, David A. (Inventor); Tran, Khai Peter B. (Inventor)
2005-01-01
Method and system for querying a collection of unstructured or semi-structured documents to identify the presence of, and provide context and/or content for, keywords and/or keyphrases. The documents are analyzed and assigned a node structure, including an ordered sequence of mutually exclusive node segments or strings. Each node has an associated set of at least four, five or six attributes with node information and can represent a format marker or text, with the last node in any node segment usually being a text node. A keyword (or keyphrase) is specified, and the last node in each node segment is searched for a match with the keyword. When a match is found at a query node, or at a node determined with reference to a query node, the system displays the context and/or the content of the query node.
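The flavor of this node-based keyword search can be loosely illustrated with the standard library's XML parser: a document becomes a tree of nodes, and keyword matches against text nodes report both content (the matching text) and context (where in the tree it sits). This is only a sketch of the idea, not the patented NETMARK node-segment structure; the XML fragment and function are invented here.

```python
import xml.etree.ElementTree as ET

# A tiny XML fragment standing in for an ingested semi-structured document.
doc = ET.fromstring(
    "<report><section><title>Thermal Test</title>"
    "<p>The heat shield passed inspection.</p></section></report>"
)

def keyword_hits(root, keyword):
    """Return (tag, text) for every node whose text contains the keyword.

    The tag gives context for the match; the text gives its content.
    """
    hits = []
    for elem in root.iter():
        if elem.text and keyword.lower() in elem.text.lower():
            hits.append((elem.tag, elem.text))
    return hits

print(keyword_hits(doc, "heat shield"))
```

The patent's approach goes further, assigning each node an attribute set and physical address so the same lookup works efficiently at enterprise scale, but the content-plus-context result shape is the same.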
Knowledge-based vision for space station object motion detection, recognition, and tracking
NASA Technical Reports Server (NTRS)
Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III
1987-01-01
Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.
EMPHASIS™/Nevada UTDEM User Guide Version 2.1.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Pasik, Michael F.; Seidel, David B.
The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell’s equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest.
A data management infrastructure for bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Byun, Jaewook; Kim, Daeyoung; Sohn, Hoon; Bae, In Hwan; Law, Kincho H.
2015-04-01
This paper discusses a data management infrastructure framework for bridge monitoring applications. As sensor technologies mature and become economically affordable, their deployment for bridge monitoring will continue to grow. Data management becomes a critical issue not only for storing the sensor data but also for integrating with the bridge model to support other functions, such as management, maintenance and inspection. The focus of this study is on the effective data management of bridge information and sensor data, which is crucial to structural health monitoring and life cycle management of bridge structures. We review the state-of-the-art of bridge information modeling and sensor data management, and propose a data management framework for bridge monitoring based on NoSQL database technologies that have been shown useful in handling high-volume, time-series data and in flexibly dealing with unstructured data schemas. Specifically, Apache Cassandra and MongoDB are deployed for the prototype implementation of the framework. This paper describes the database design for an XML-based Bridge Information Modeling (BrIM) schema, and the representation of sensor data using Sensor Model Language (SensorML). The proposed prototype data management framework is validated using data collected from the Yeongjong Bridge in Incheon, Korea.
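The appeal of a document store for monitoring data is that each sensor type can carry its own fields without a fixed relational schema. The sketch below shows one plausible document shape for a single reading, serialized as JSON the way it might be handed to MongoDB or a similar store; every field name here is invented for illustration, not taken from the paper's BrIM or SensorML design.

```python
import json
from datetime import datetime, timezone

# Hypothetical document for one accelerometer reading. A strain gauge or
# GPS sensor could use a different "values" shape in the same collection,
# which is the schema flexibility the NoSQL approach buys.
reading = {
    "bridge_id": "yeongjong",
    "sensor": {"id": "acc-017", "type": "accelerometer", "unit": "m/s^2"},
    "timestamp": datetime(2015, 4, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    "values": [0.012, -0.003, 0.018],  # x, y, z channels
}

# Round-trip through JSON, as a document database driver would.
record = json.dumps(reading)
restored = json.loads(record)
print(restored["sensor"]["type"], len(restored["values"]))
```

In the paper's actual framework, time-series storage is handled by Cassandra and document storage by MongoDB; the point of the sketch is only the self-describing record shape.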
Zarei, Javad; Sadoughi, Farahnaz
2016-01-01
In recent years, hospitals in Iran - similar to those in other countries - have experienced growing use of computerized health information systems (CHISs), which play a significant role in the operations of hospitals. But the major challenge of CHIS use is information security. This study attempts to evaluate CHIS information security risk management at hospitals of Iran. This applied study is a descriptive and cross-sectional research that was conducted in 2015. The data were collected from 551 hospitals of Iran. Based on literature review, experts' opinion, and observations at five hospitals, an intensive questionnaire was designed to assess security risk management for CHISs at the concerned hospitals, which was then sent to all hospitals in Iran by the Ministry of Health. Sixty-nine percent of the studied hospitals pursue information security policies and procedures in conformity with Iran Hospitals Accreditation Standards. At some hospitals, risk identification, risk evaluation, and risk estimation, as well as risk treatment, are unstructured, without any specified approach or methodology. There is no significant structured approach to risk management at the studied hospitals. Information security risk management is not practiced at Iran's hospitals, nor is it embedded in their information security policies. This problem could create many challenges for their CHIS security in the future. Therefore, Iran's Ministry of Health should develop practical policies to improve information security risk management in the hospitals of Iran.
Some Consideration On Knowledge Management Implication On Organization's Competitiveness
NASA Astrophysics Data System (ADS)
Draghici, Anca; Ciortan, Marius Areta; Florea, Claudia
2015-07-01
The research described in this paper has been focused on two objectives: to debate knowledge management's active role in organizations' competitive advantage, and to describe information technology's capabilities in leveraging knowledge workers' competencies. For the purposes of this article, competitive advantage is perceived as a strength that provides a market advantage relative to a competitor. Competitive advantage is often related to the core competencies of the organisation, which are frequently based on implicit know-how or tacit knowledge. This intangible, unstructured knowledge is difficult to manage; consequently, management has ignored it when designing business strategy. However, the increased competitive pressures of the post-industrial global economy and the exponential advances in computing power have increased management's interest in knowledge as a sustainable source of competitive advantage.
An information extraction framework for cohort identification using electronic health records.
Liu, Hongfang; Bielinski, Suzette J; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B; Jonnalagadda, Siddhartha R; Ravikumar, K E; Wu, Stephen T; Kullo, Iftikhar J; Chute, Christopher G
2013-01-01
Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high-performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report a knowledge-driven IE framework for cohort identification using EHRs, developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework.
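The knowledge-engineering idea here is that a subject matter expert externalizes a resource (for example, a term lexicon) and a generic engine applies it. A heavily simplified sketch of dictionary-driven cohort identification follows; the lexicon, note text, and patient IDs are all invented, and real UIMA pipelines add tokenization, negation handling, and concept normalization that this sketch omits.

```python
# Externalized knowledge resource: terms an expert associates with a
# hypothetical peripheral-arterial-disease cohort.
PAD_LEXICON = {"peripheral arterial disease", "claudication", "pad"}

def in_cohort(note):
    """Flag a clinical note whose text matches any lexicon term."""
    text = note.lower()
    return any(term in text for term in PAD_LEXICON)

# Invented notes for two hypothetical patients.
notes = {
    "pt-1": "History of claudication, improving with exercise therapy.",
    "pt-2": "Annual wellness visit; no acute complaints.",
}
cohort = [pid for pid, note in notes.items() if in_cohort(note)]
print(cohort)
```

Because the lexicon lives outside the engine, experts can refine cohort definitions without touching pipeline code, which is the maintainability argument the abstract makes.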
Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn
2009-01-01
The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.
A common type system for clinical natural language processing
2013-01-01
Background One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions We have created a type system that targets deep semantics, thereby allowing for NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types. PMID:23286462
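What a "deep semantics" target type might look like can be hedged into a small sketch. The class, attribute names, and concept index below are toy stand-ins for illustration only, not the actual CEM or cTAKES type definitions:

```python
# Illustrative sketch: NLP output is normalized into one shared target type,
# so text-derived mentions and structured data meet in the same representation.
from dataclasses import dataclass, field

@dataclass
class ClinicalElement:
    code: str                 # e.g. a SNOMED CT concept id
    polarity: int = 1         # 1 = asserted, -1 = negated
    attributes: dict = field(default_factory=dict)

def mention_to_element(mention_text, concept_index):
    """Map a raw text mention onto the shared type system."""
    negated = mention_text.lower().startswith("no ")
    term = mention_text[3:] if negated else mention_text
    code = concept_index.get(term.lower())
    if code is None:
        return None
    return ClinicalElement(code=code, polarity=-1 if negated else 1)

INDEX = {"chest pain": "29857009"}   # toy concept index
elem = mention_to_element("no chest pain", INDEX)
```

Because negation is resolved into the element itself rather than left as surface text, a consumer querying the shared type never needs to re-run NLP.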
A common type system for clinical natural language processing.
Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G
2013-01-03
2001-01-01
This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture - i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look, and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be. PMID:11772549
Eysenbach, G
2001-01-01
Turning Search into Knowledge Management.
ERIC Educational Resources Information Center
Kaufman, David
2002-01-01
Discussion of knowledge management for electronic data focuses on creating a high quality similarity ranking algorithm. Topics include similarity ranking and unstructured data management; searching, categorization, and summarization of documents; query evaluation; considering sentences in addition to keywords; and vector models. (LRW)
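The vector-model similarity ranking this record mentions can be illustrated minimally: documents become term-frequency vectors and are ranked by cosine similarity against the query vector. This is a generic sketch, not the product's actual algorithm:

```python
# Minimal vector-space similarity ranking: term-frequency vectors + cosine.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in docs]
    return [d for s, d in sorted(scored, reverse=True)]

docs = ["grid generation software",
        "knowledge management search",
        "search ranking for documents"]
best = rank("document search ranking", docs)[0]
```

Real systems add weighting (e.g. tf-idf), stemming, and sentence-level evidence, as the abstract notes, but the ranking core is this dot-product-over-norms comparison.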
Zarei, Javad; Sadoughi, Farahnaz
2016-01-01
Background In recent years, hospitals in Iran – similar to those in other countries – have experienced growing use of computerized health information systems (CHISs), which play a significant role in the operations of hospitals. But, the major challenge of CHIS use is information security. This study attempts to evaluate CHIS information security risk management at hospitals of Iran. Materials and methods This applied study is a descriptive and cross-sectional research that has been conducted in 2015. The data were collected from 551 hospitals of Iran. Based on literature review, experts’ opinion, and observations at five hospitals, our intensive questionnaire was designed to assess security risk management for CHISs at the concerned hospitals, which was then sent to all hospitals in Iran by the Ministry of Health. Results Sixty-nine percent of the studied hospitals pursue information security policies and procedures in conformity with Iran Hospitals Accreditation Standards. At some hospitals, risk identification, risk evaluation, and risk estimation, as well as risk treatment, are unstructured without any specified approach or methodology. There is no significant structured approach to risk management at the studied hospitals. Conclusion Information security risk management is not followed by Iran’s hospitals and their information security policies. This problem can cause a large number of challenges for their CHIS security in future. Therefore, Iran’s Ministry of Health should develop practical policies to improve information security risk management in the hospitals of Iran. PMID:27313481
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WEBDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
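The "keyword searches across both context and content" idea can be sketched as a toy: decompose an XML document into rows so SQL can constrain the element path (context) and the text (content) at once. The schema below is a made-up illustration, not NETMARK's actual Oracle design:

```python
# Toy sketch of context+content search: shred XML into (path, content) rows,
# then query with SQL predicates on both columns.
import sqlite3
import xml.etree.ElementTree as ET

def load(doc_xml, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS node (path TEXT, content TEXT)")
    def walk(elem, path):
        p = f"{path}/{elem.tag}"
        if elem.text and elem.text.strip():
            conn.execute("INSERT INTO node VALUES (?, ?)", (p, elem.text.strip()))
        for child in elem:
            walk(child, p)
    walk(ET.fromstring(doc_xml), "")

def search(conn, context_kw, content_kw):
    cur = conn.execute(
        "SELECT path, content FROM node WHERE path LIKE ? AND content LIKE ?",
        (f"%{context_kw}%", f"%{content_kw}%"))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
load("<report><summary>grid software</summary><author>Maluf</author></report>", conn)
hits = search(conn, "summary", "grid")
```

Constraining the path is what keeps a hit on "grid" inside a summary distinct from the same word inside, say, an author biography.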
An Information Extraction Framework for Cohort Identification Using Electronic Health Records
Liu, Hongfang; Bielinski, Suzette J.; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B.; Jonnalagadda, Siddhartha R.; Ravikumar, K.E.; Wu, Stephen T.; Kullo, Iftikhar J.; Chute, Christopher G
Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high-performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report a knowledge-driven IE framework for cohort identification using EHRs, developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework. PMID:24303255
VisualUrText: A Text Analytics Tool for Unstructured Textual Data
NASA Astrophysics Data System (ADS)
Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.
2018-05-01
The growing amount of unstructured text on the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will be in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, which are non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and visualizing the cleaned text data in multiple forms such as a Document Term Matrix (DTM), Frequency Graph, Network Analysis Graph, Word Cloud and Dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
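The Document Term Matrix named in this abstract is simple enough to sketch directly: rows are documents, columns are terms, cells are term counts. A minimal construction (whitespace tokenization only, no stemming or stop-word removal, unlike a real tool):

```python
# Build a Document Term Matrix (DTM) from a list of raw documents.
from collections import Counter

def build_dtm(docs):
    counts = [Counter(d.lower().split()) for d in docs]
    terms = sorted(set(t for c in counts for t in c))     # column order
    matrix = [[c[t] for t in terms] for c in counts]      # one row per doc
    return terms, matrix

terms, dtm = build_dtm(["text mining finds patterns",
                        "text analytics of text"])
```

The frequency graphs and word clouds such tools render are just different views over the column sums of this matrix.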
Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.
Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin
2017-05-18
Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents since keyword searches applied on complete unstructured documents result in many false positive retrieval results. We are concentrating on the processing of pathology reports as an example for unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables an improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR allows saving time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for a better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval to particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
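The keyword-based section splitting described above can be hedged into a short sketch: scan the report for known header keywords and slice the text into named sections so later queries can be section-sensitive. The header names below are toy examples, not the paper's actual keyword list:

```python
# Toy keyword-based section boundary detection for semi-structured reports.
HEADERS = ["Macroscopy", "Microscopy", "Diagnosis"]

def split_sections(report):
    sections, current = {}, None
    for line in report.splitlines():
        header = line.rstrip(": ").strip()
        if header in HEADERS:
            current = header            # a new section starts here
            sections[current] = []
        elif current is not None:
            sections[current].append(line.strip())
    return {h: " ".join(body).strip() for h, body in sections.items()}

report = "Macroscopy:\nSpecimen 2 cm.\nDiagnosis:\nNo malignancy."
sections = split_sections(report)
```

A section-sensitive query then searches only `sections["Diagnosis"]`, which is exactly what suppresses the false positives a whole-document keyword search produces.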
Monitoring and Identifying in Real time Critical Patients Events.
Chavez Mora, Emma
2014-01-01
Nowadays pervasive health care monitoring environments, as well as business activity monitoring environments, gather information from a variety of data sources. However, this introduces new challenges because of the use of body sensors, wireless sensors, and nontraditional operational and transactional sources, which make the health data more difficult to monitor. Decision making in this environment is typically complex and unstructured, as clinical work is essentially interpretative, multitasking, collaborative, distributed and reactive. Thus, the health care arena requires real-time data management in areas such as patient monitoring, detection of adverse events and adaptive responses to operational failures. This research presents a new architecture that enables real-time patient data management through the use of intelligent data sources.
Personalized Guideline-Based Treatment Recommendations Using Natural Language Processing Techniques.
Becker, Matthias; Böckmann, Britta
2017-01-01
Clinical guidelines and clinical pathways are accepted and proven instruments for quality assurance and process optimization. Today, electronic representation of clinical guidelines exists as unstructured text, but is not well-integrated with patient-specific information from electronic health records. Consequently, generic content of the clinical guidelines is accessible, but it is not possible to visualize the position of the patient on the clinical pathway, decision support cannot be provided by personalized guidelines for the next treatment step. The Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) provides common reference terminology as well as the semantic link for combining the pathways and the patient-specific information. This paper proposes a model-based approach to support the development of guideline-compliant pathways combined with patient-specific structured and unstructured information using SNOMED CT. To identify SNOMED CT concepts, a software was developed to extract SNOMED CT codes out of structured and unstructured German data to map these with clinical pathways annotated in accordance with the systematized nomenclature.
Finding geospatial pattern of unstructured data by clustering routes
NASA Astrophysics Data System (ADS)
Boustani, M.; Mattmann, C. A.; Ramirez, P.; Burke, W.
2016-12-01
Today the majority of data generated has a geospatial context to it, either in attribute form as a latitude or longitude, or as the name of a location, or cross-referenceable using other means such as an external gazetteer or location service. Our research is interested in exploiting geospatial location and context in unstructured data such as that found on the web in HTML pages, images, videos, documents, and other areas, and in structured information repositories found on intranets, in scientific environments, and elsewhere. We are working together on the DARPA MEMEX project to exploit open source software tools such as the Lucene Geo Gazetteer, Apache Tika, Apache Lucene, and Apache OpenNLP to automatically extract, and make meaning out of, geospatial information. In particular, we are interested in unstructured descriptors, e.g., a phone number or a named entity, and the ability to automatically learn geospatial paths related to these descriptors. For example, a particular phone number may represent an entity that travels on a monthly basis, according to patterns that are sometimes easily identifiable and sometimes more difficult to track. We will present a set of automatic techniques to extract descriptors and then geospatially infer their paths across unstructured data.
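The descriptor-extraction step the abstract describes can be sketched with a regex for phone-number descriptors and a tiny in-memory gazetteer for place names. Both are deliberately minimal stand-ins for what Apache Tika and the Lucene Geo Gazetteer do in the real pipeline:

```python
# Sketch: extract phone-number descriptors and gazetteer-resolvable places
# from unstructured text, the raw material for learning geospatial paths.
import re

GAZETTEER = {"pasadena": (34.15, -118.14), "houston": (29.76, -95.37)}
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def extract(text):
    phones = PHONE.findall(text)
    places = [(w, GAZETTEER[w]) for w in re.findall(r"[a-z]+", text.lower())
              if w in GAZETTEER]
    return phones, places

phones, places = extract("Call 626-555-0144 from Pasadena next week.")
```

Repeating this over many documents and time-ordering the resolved coordinates per descriptor is what yields the "path" of an entity such as a recurring phone number.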
Kimia, Amir A; Savova, Guergana; Landschaft, Assaf; Harper, Marvin B
2015-07-01
Electronically stored clinical documents may contain both structured data and unstructured data. The use of structured clinical data varies by facility, but clinicians are familiar with coded data such as International Classification of Diseases, Ninth Revision, Systematized Nomenclature of Medicine-Clinical Terms codes, and commonly other data including patient chief complaints or laboratory results. Most electronic health records have much more clinical information stored as unstructured data, for example, clinical narrative such as history of present illness, procedure notes, and clinical decision making are stored as unstructured data. Despite the importance of this information, electronic capture or retrieval of unstructured clinical data has been challenging. The field of natural language processing (NLP) is undergoing rapid development, and existing tools can be successfully used for quality improvement, research, healthcare coding, and even billing compliance. In this brief review, we provide examples of successful uses of NLP using emergency medicine physician visit notes for various projects and the challenges of retrieving specific data and finally present practical methods that can run on a standard personal computer as well as high-end state-of-the-art funded processes run by leading NLP informatics researchers.
Ochoa, Silvia; Talavera, Julia; Paciello, Julio
2015-01-01
The identification of epidemiological risk areas is one of the major problems in public health. Information management strategies are needed to facilitate prevention and control of disease in the affected areas. This paper presents a model to optimize geographical data collection of suspected or confirmed disease occurrences using the Unstructured Supplementary Service Data (USSD) mobile technology, considering its wide adoption even in developing countries such as Paraguay. A Geographic Information System (GIS) is proposed for visualizing potential epidemiological risk areas in real time, that aims to support decision making and to implement prevention or contingency programs for public health.
Applying the Collective Causal Mapping Methodology to Operations Management Curriculum Development
ERIC Educational Resources Information Center
Hays, Julie M.; Bouzdine-Chameeva, Tatiana; Goldstein, Susan Meyer; Hill, Arthur V.; Scavarda, Annibal José
2007-01-01
Although the field of operations management has come a long way since its beginnings in scientific management, the field still appears somewhat amorphous and unstructured to many. Introductory operations management textbooks usually include a number of largely disjointed topics, which leave many students (and their instructors) without a coherent…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Einstein, Daniel R.; Kuprat, Andrew P.; Jiao, Xiangmin
2013-01-01
Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: 1) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; 2) the mapping of serial cryo-section histology data to an unstructured mouse brain grid; and 3) the mapping of CT-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case.
The BioIntelligence Framework: a new computational platform for biomedical knowledge computing.
Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles; Mousses, Spyro
2013-01-01
Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
NASA Astrophysics Data System (ADS)
Greene, Patrick; Nourgaliev, Robert; Schofield, Sam
2015-11-01
A new sharp high-order interface tracking method for multi-material flow problems on unstructured meshes is presented. The method combines the marker-tracking algorithm with a discontinuous Galerkin (DG) level set method to implicitly track interfaces. DG projection is used to provide a mapping from the Lagrangian marker field to the Eulerian level set field. For the level set re-distancing, we developed a novel marching method that takes advantage of the unique features of the DG representation of the level set. The method efficiently marches outward from the zero level set with values in the new cells being computed solely from cell neighbors. Results are presented for a number of different interface geometries including ones with sharp corners and multiple hierarchical level sets. The method can robustly handle the level set discontinuities without explicit utilization of solution limiters. Results show that the expected high order (3rd and higher) of convergence for the DG representation of the level set is obtained for smooth solutions on unstructured meshes. High-order re-distancing on irregular meshes is a must for applications where the interfacial curvature is important for underlying physics, such as surface tension, wetting and detonation shock dynamics. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Information management release number LLNL-ABS-675636.
Midwives' experiences of managing women in labour in the Limpopo Province of South Africa.
Maputle, S M; Hiss, D C
2010-09-01
The objective of this study was to explore and describe the experiences of midwives managing women during labour at a tertiary care hospital in the Limpopo Province. An exploratory, descriptive, contextual and inductive design was applied to this qualitative research study. Purposive sampling was used to select midwives who were working in the childbirth unit and had managed women during labour. A sample of 12 midwives participated in this study. Data were collected by means of unstructured individual interviews and analysed through an open coding method by the researchers and the independent co-coder. Categories identified were lack of mutual participation and responsibility sharing, dependency and lack of decision-making, lack of information-sharing, empowering autonomy and informed choices opportunities, lack of open communication and listening, non-accommodative midwifery actions, and lack of human and material infrastructure. To ensure the validity of the results, criteria to measure trustworthiness were utilized. This study has implications for woman-centered care by midwives managing women in labour and provides appropriate guidelines that should be integrated into the Batho-Pele Principles.
Progress Toward Overset-Grid Moving Body Capability for USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Pandyna, Mohagna J.; Frink, Neal T.; Noack, Ralph W.
2005-01-01
A static and dynamic Chimera overset-grid capability is added to an established NASA tetrahedral unstructured parallel Navier-Stokes flow solver, USM3D. Modifications to the solver primarily consist of a few strategic calls to the Donor interpolation Receptor Transaction library (DiRTlib) to facilitate communication of solution information between various grids. The assembly of multiple overlapping grids into a single-zone composite grid is performed by the Structured, Unstructured and Generalized Grid AssembleR (SUGGAR) code. Several test cases are presented to verify the implementation, assess overset-grid solution accuracy and convergence relative to single-grid solutions, and demonstrate the prescribed relative grid motion capability.
A UIMA wrapper for the NCBO annotator.
Roeder, Christophe; Jonquet, Clement; Shah, Nigam H; Baumgartner, William A; Verspoor, Karin; Hunter, Lawrence
2010-07-15
The Unstructured Information Management Architecture (UIMA) framework and web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator, an ontology-based annotation service, to make it available as a component in UIMA workflows. This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java for support on Mac OS X, Linux and MS Windows.
The National Grid Project: A system overview
NASA Technical Reports Server (NTRS)
Gaither, Adam; Gaither, Kelly; Jean, Brian; Remotigue, Michael; Whitmire, John; Soni, Bharat; Thompson, Joe; Dannenhoffer, John; Weatherill, Nigel
1995-01-01
The National Grid Project (NGP) is a comprehensive numerical grid generation software system that is being developed at the National Science Foundation (NSF) Engineering Research Center (ERC) for Computational Field Simulation (CFS) at Mississippi State University (MSU). NGP is supported by a coalition of U.S. industries and federal laboratories. The objective of the NGP is to significantly decrease the amount of time it takes to generate a numerical grid for complex geometries and to increase the quality of these grids to enable computational field simulations for applications in industry. A geometric configuration can be discretized into grids (or meshes) that have two fundamental forms: structured and unstructured. Structured grids are formed by intersecting curvilinear coordinate lines and are composed of quadrilateral (2D) and hexahedral (3D) logically rectangular cells. The connectivity of a structured grid provides for trivial identification of neighboring points by incrementing coordinate indices. Unstructured grids are composed of cells of any shape (commonly triangles, quadrilaterals, tetrahedra and hexahedra), but do not have trivial identification of neighbors by incrementing an index. For unstructured grids, a set of points and an associated connectivity table is generated to define unstructured cell shapes and neighboring points. Hybrid grids are a combination of structured grids and unstructured grids. Chimera (overset) grids are intersecting or overlapping structured grids. The NGP system currently provides a user interface that integrates both 2D and 3D structured and unstructured grid generation, a solid modeling topology data management system, an internal Computer Aided Design (CAD) system based on Non-Uniform Rational B-Splines (NURBS), a journaling language, and a grid/solution visualization system.
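The structured/unstructured distinction the NGP overview draws can be made concrete in a few lines: a structured grid finds neighbors by incrementing coordinate indices, while an unstructured grid needs an explicit connectivity table. A hedged sketch (toy data, not NGP's actual data structures):

```python
# Structured grid: neighbor lookup is just index arithmetic.
def structured_neighbors(i, j, ni, nj):
    """Edge neighbors of cell (i, j) on an ni-by-nj logically rectangular grid."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < ni and 0 <= b < nj]

# Unstructured grid: cells are point triples (triangles here); neighbors must
# be found by searching a connectivity table for a shared edge.
def unstructured_neighbors(cell, cells):
    edges = lambda c: {frozenset(e) for e in zip(c, c[1:] + c[:1])}
    return [c for c in cells if c != cell and edges(c) & edges(cell)]

tris = [(0, 1, 2), (1, 3, 2), (3, 4, 2)]
nbrs = unstructured_neighbors((1, 3, 2), tris)
```

The cost difference is the whole trade-off: structured neighbor access is O(1) index arithmetic, while unstructured access requires the stored connectivity (real codes precompute adjacency rather than scanning, as this toy does).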
Vest, Joshua R; Grannis, Shaun J; Haut, Dawn P; Halverson, Paul K; Menachemi, Nir
2017-11-01
Increasingly, health care providers are adopting population health management approaches that address the social determinants of health (SDH). However, effectively identifying patients needing services that address a SDH in primary care settings is challenging. The purpose of the current study is to explore how various data sources can identify adult primary care patients that are in need of services that address SDH. A cross-sectional study described patients in need of SDH services offered by a safety-net hospital's federally qualified health center clinics. SDH services of social work, behavioral health, nutrition counseling, respiratory therapy, financial planning, medical-legal partnership assistance, patient navigation, and pharmacist consultation were offered on a co-located basis and were identified using structured billing and scheduling data, and unstructured electronic health record data. We report the prevalence of the eight different SDH service needs and the patient characteristics associated with service need. Moreover, characteristics of patients with SDH services need documented in structured data sources were compared with those documented by unstructured data sources. More than half (53%) of patients needed SDH services. Those in need of such services tended to be female, older, more medically complex, and higher utilizers of services. Structured and unstructured data sources exhibited poor agreement on patient SDH services need. Patients with SDH services need documented by unstructured data tended to be more complex. The need for SDH services among a safety-net population is high. Identifying patients in need of such services requires multiple data sources with structured and unstructured data.
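The "poor agreement" finding in this abstract is typically quantified with a chance-corrected statistic such as Cohen's kappa between the two binary indicators of service need (structured-data vs unstructured-data documentation). The data below are invented purely for illustration:

```python
# Cohen's kappa between two binary raters (here: two data sources flagging
# the same patients for SDH service need).
def cohen_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                # marginal "yes" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)           # agreement expected by chance
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

structured   = [1, 0, 1, 0, 0, 1, 0, 0]   # toy flags from billing/scheduling data
unstructured = [1, 1, 0, 0, 0, 0, 1, 0]   # toy flags from EHR narrative text
kappa = cohen_kappa(structured, unstructured)
```

A kappa near zero (or negative, as in this toy case) despite 50% raw agreement is exactly the pattern behind the study's conclusion that neither source alone suffices.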
NASA Astrophysics Data System (ADS)
Nývlt, Vladimír; Prušková, Kristýna
2017-10-01
BIM today is much more than 3D drafting, and project participants are demanding more from it; those demands are the topic of both this paper and further research. Knowledge of objects, their behaviour, and other characteristics has a high impact on the whole building life cycle. Other structured and unstructured knowledge is rightfully added (e.g. historically based experience, needs and requirements of users and investors, and needs for project and object revisions), and all of these attributes must be grasped in a system for the collection, management, and time control of knowledge. A further important finding lies in the necessity of understanding how to manage knowledge needs in diverse and variable ways as BIM maturity levels advance, as defined by Bew and Richards (2008). All decisions made would always rely on good, timely, and correct data. Using BIM models for Building Information Management can support all decisions through data gathering, sharing, and use across all disciplines and all life-cycle steps; in particular, it significantly improves the possibilities and quality of life-cycle costing. Experience and knowledge stored in BIM data models, describing user requirements and best practices derived from other projects and/or research outputs, will help us understand sustainability in its complexity and wholeness.
A Force-Sensing System on Legs for Biomimetic Hexapod Robots Interacting with Unstructured Terrain
Wu, Rui; Li, Changle; Zang, Xizhe; Zhang, Xuehe; Jin, Hongzhe; Zhao, Jie
2017-01-01
The tiger beetle can maintain its stability by controlling the interaction force between its legs and an unstructured terrain while it runs. The biomimetic hexapod robot mimics a tiger beetle, and a comprehensive force sensing system combined with suitable algorithms can provide force information that helps the robot understand the unstructured terrain it interacts with. This study introduces a leg force sensing system for a hexapod robot that is identical for all six legs. First, the layout and configuration of the sensing system are designed according to the structure and sizes of the legs. Second, the joint torque sensors, the 3-DOF foot-end force sensor, and the force information processing module are designed, and the force sensor performance parameters are tested by simulations and experiments. Moreover, the force sensing system is implemented within the robot control architecture. Finally, the experimental evaluation of the leg force sensor system on the hexapod robot is discussed and its performance is verified. PMID:28654003
Runge-Kutta discontinuous Galerkin method using a new type of WENO limiters on unstructured meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Zhong, Xinghui; Shu, Chi-Wang; Qiu, Jianxian
2013-09-01
In this paper we generalize a new type of limiters based on the weighted essentially non-oscillatory (WENO) finite volume methodology for the Runge-Kutta discontinuous Galerkin (RKDG) methods solving nonlinear hyperbolic conservation laws, which were recently developed in [32] for structured meshes, to two-dimensional unstructured triangular meshes. The key idea of such limiters is to use the entire polynomials of the DG solutions from the troubled cell and its immediate neighboring cells, and then apply the classical WENO procedure to form a convex combination of these polynomials based on smoothness indicators and nonlinear weights, with suitable adjustments to guarantee conservation. The main advantage of this new limiter is its simplicity in implementation, especially for the unstructured meshes considered in this paper, as only information from immediate neighbors is needed and the usage of complicated geometric information of the meshes is largely avoided. Numerical results for both scalar equations and Euler systems of compressible gas dynamics are provided to illustrate the good performance of this procedure.
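The core of any WENO-type limiter is the nonlinear-weight step: candidate polynomials are blended by weights driven by smoothness indicators, so the smoothest candidate dominates near discontinuities. The sketch below uses the classical WENO-JS weight form with illustrative constants; it is not the paper's exact scheme or its conservation adjustments.

```python
# Sketch of the nonlinear-weight step in WENO-type limiters (classical
# WENO-JS form). Linear weights, epsilon, and the power are illustrative
# choices, not the specific parameters of the RKDG limiter in the paper.

def weno_weights(smoothness, linear_weights, eps=1e-6, power=2):
    """Nonlinear WENO weights computed from smoothness indicators."""
    alphas = [d / (eps + beta) ** power
              for d, beta in zip(linear_weights, smoothness)]
    total = sum(alphas)
    return [a / total for a in alphas]

def limited_value(candidate_values, smoothness, linear_weights):
    """Convex combination of candidate reconstructions at a point."""
    w = weno_weights(smoothness, linear_weights)
    return sum(wi * v for wi, v in zip(w, candidate_values))

# A smooth candidate (tiny indicator) dominates an oscillatory one (large one).
w = weno_weights([1e-8, 10.0], [0.5, 0.5])
print(w)  # first weight close to 1
```

Because the weights always sum to one, the blended polynomial is a convex combination, which is what lets the limiter retain accuracy in smooth regions while suppressing oscillations in troubled cells.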
Out-of-Core Streamline Visualization on Large Unstructured Meshes
NASA Technical Reports Server (NTRS)
Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu
1997-01-01
It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
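The memory-management idea above — keep only a few disk-resident blocks in core and fetch on demand — can be sketched with a small LRU cache. The block layout, ids, and eviction policy here are illustrative assumptions, not the paper's octree scheme or its tailored policy.

```python
# Sketch: demand-driven loading of mesh blocks during streamline tracing.
# The LRU policy and block ids are illustrative assumptions standing in
# for the paper's octree partitioning and tailored memory policy.
from collections import OrderedDict

class BlockCache:
    """Keep at most `capacity` mesh blocks in memory; evict least recently used."""
    def __init__(self, capacity, load_block):
        self.capacity = capacity
        self.load_block = load_block      # e.g. reads one octree-leaf file from disk
        self.cache = OrderedDict()
        self.disk_reads = 0

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.disk_reads += 1                   # cache miss: go to disk
        data = self.load_block(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data

# Simulated trace: a particle revisits nearby blocks, so few disk reads occur.
cache = BlockCache(capacity=2, load_block=lambda b: f"cells of block {b}")
for b in [0, 1, 0, 1, 0, 2, 2, 1]:
    cache.get(b)
print(cache.disk_reads)  # fewer disk reads than the 8 accesses
```

Streamline tracing has strong spatial locality (consecutive integration steps stay in the same block), which is why such a small in-core footprint can still deliver interactive performance.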
Extended bounds limiter for high-order finite-volume schemes on unstructured meshes
NASA Astrophysics Data System (ADS)
Tsoutsanis, Panagiotis
2018-06-01
This paper explores the impact of the definition of the bounds of the limiter proposed by Michalak and Ollivier-Gooch in [56] (2009), for higher-order Monotone-Upstream Central Scheme for Conservation Laws (MUSCL) numerical schemes on unstructured meshes in the finite-volume (FV) framework. A new modification of the limiter is proposed where the bounds are redefined by utilising all the spatial information provided by all the elements in the reconstruction stencil. Numerical results obtained on smooth and discontinuous test problems of the Euler equations on unstructured meshes, highlight that the newly proposed extended bounds limiter exhibits superior performance in terms of accuracy and mesh sensitivity compared to the cell-based or vertex-based bounds implementations.
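The effect of redefining the limiter bounds can be seen in a 1D scalar toy version of Barth-Jespersen-style limiting. This sketch is illustrative only: the paper works with full finite-volume stencils on unstructured meshes, and these values are invented.

```python
# Sketch: slope limiting where the admissible bounds come either from
# immediate neighbors or from the whole reconstruction stencil.
# 1D scalar toy; the actual scheme operates on unstructured FV stencils.

def limiter_factor(u_cell, u_face_values, u_min, u_max):
    """Largest phi in [0, 1] keeping all face reconstructions within [u_min, u_max]."""
    phi = 1.0
    for u_face in u_face_values:
        d = u_face - u_cell
        if d > 0:
            phi = min(phi, (u_max - u_cell) / d)
        elif d < 0:
            phi = min(phi, (u_min - u_cell) / d)
    return max(0.0, phi)

u_cell = 1.0
faces = [1.8, 0.4]                 # unlimited reconstruction at the two faces
near = (0.5, 1.5)                  # bounds from immediate neighbors only
wide = (0.0, 2.0)                  # extended bounds over the full stencil
print(limiter_factor(u_cell, faces, *near))  # clips the reconstruction harder
print(limiter_factor(u_cell, faces, *wide))  # less dissipative: phi = 1.0
```

Wider bounds derived from the full stencil leave smooth extrema untouched more often, which is consistent with the reduced dissipation and mesh sensitivity reported above.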
Estimating abundance of mountain lions from unstructured spatial sampling
Robin E. Russell; J. Andrew Royle; Richard Desimone; Michael K. Schwartz; Victoria L. Edwards; Kristy P. Pilgrim; Kevin S. McKelvey
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management...
EMPHASIS(TM)/Nevada UTDEM User Guide Version 2.1.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Pasik, Michael F.; Pointon, Timothy D.
The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. Acknowledgement: The authors would like to thank all of those individuals who have helped to bring EMPHASIS/Nevada to the point it is today, including Bill Bohnhoff, Rich Drake, and all of the NEVADA code team.
A Framework for Enhancing Real-time Social Media Data to Improve Disaster Management Process
NASA Astrophysics Data System (ADS)
Attique Shah, Syed; Zafer Şeker, Dursun; Demirel, Hande
2018-05-01
Social media datasets are playing a vital role in providing information that can support decision making in nearly all domains of technology. This is because social media is a quick and economical approach for collecting data from the public through methods like crowdsourcing. Existing research has already shown that in the case of any disaster (natural or man-made), the information extracted from social media sites is critical to disaster management systems for response and reconstruction. This study comprises two components: the first proposes a framework that provides updated and filtered real-time input data for the disaster management system through social media, and the second consists of a web user API designed for a structured and defined real-time data input process. This study contributes to the discipline of design science for the information systems domain. The aim is to propose a framework that can filter and organize data from unstructured social media sources through recognized methods, and to bring the retrieved data to the same level as data taken through the structured and predefined mechanism of a web API. Both components are designed so that they can collaborate and produce updated information for a disaster management system to carry out accurate and effective operations.
Wide Area Information Servers: An Executive Information System for Unstructured Files.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes the Wide Area Information Servers (WAIS) system, an integrated information retrieval system for corporate end users. Discussion covers general characteristics of the system, search techniques, protocol development, user interfaces, servers, selective dissemination of information, nontextual data, access to other servers, and description…
A UIMA wrapper for the NCBO annotator
Roeder, Christophe; Jonquet, Clement; Shah, Nigam H.; Baumgartner, William A.; Verspoor, Karin; Hunter, Lawrence
2010-01-01
Summary: The Unstructured Information Management Architecture (UIMA) framework and web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator—an ontology-based annotation service—to make it available as a component in UIMA workflows. Availability: This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java for support on Mac OS X, Linux and MS Windows. Contact: chris.roeder@ucdenver.edu PMID:20505005
The BioIntelligence Framework: a new computational platform for biomedical knowledge computing
Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles
2013-01-01
Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information. PMID:22859646
A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good comparisons between the present results and experimental measurements and earlier structured-grid results are obtained.
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
A Case Study of Knowledge Management in the "Back Office" of Two English Football Clubs
ERIC Educational Resources Information Center
Doloriert, Clair; Whitworth, Kieran
2011-01-01
Purpose: This study aims to explore knowledge management (KM) practice in the "back office" of two English football clubs. Design/methodology/approach: The paper takes the form of a comparative case study of two medium-sized businesses using multi-method data including unstructured interviews, structured questionnaires and document…
Information persistence using XML database technology
NASA Astrophysics Data System (ADS)
Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.
2005-05-01
The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way.
This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and improved retrieval techniques.
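A minimal sketch of the persist-and-query pattern the paper discusses, using Python's stdlib ElementTree as a stand-in for the XML-enabled and native-XML database products being compared; the record fields are invented.

```python
# Minimal sketch of XML-based information persistence and retrieval.
# Python's stdlib ElementTree stands in for the relational, XML-enabled,
# and native-XML DBMS products compared in the paper; fields are invented.
import xml.etree.ElementTree as ET

# Persist: serialize structured records into an XML document.
root = ET.Element("repository")
for rec in [{"id": "1", "type": "report", "region": "north"},
            {"id": "2", "type": "image", "region": "south"}]:
    ET.SubElement(root, "item", attrib=rec)
xml_bytes = ET.tostring(root)

# Retrieve: parse and query with an XPath-style expression.
doc = ET.fromstring(xml_bytes)
reports = doc.findall(".//item[@type='report']")
print([e.get("id") for e in reports])  # ['1']
```

Native-XML databases index such documents so that path queries like the one above avoid a full parse, which is one of the performance trade-offs among the three technology categories.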
Tool Use Within NASA Software Quality Assurance
NASA Technical Reports Server (NTRS)
Shigeta, Denise; Port, Dan; Nikora, Allen P.; Wilf, Joel
2013-01-01
As space mission software systems become larger and more complex, it is increasingly important for the software assurance effort to have the ability to effectively assess both the artifacts produced during software system development and the development process itself. Conceptually, assurance is a straightforward idea - it is the result of activities carried out by an organization independent of the software developers to better inform project management of potential technical and programmatic risks, and thus increase management's confidence in the decisions they ultimately make. In practice, effective assurance for large, complex systems often entails assessing large, complex software artifacts (e.g., requirements specifications, architectural descriptions) as well as substantial amounts of unstructured information (e.g., anomaly reports resulting from testing activities during development). In such an environment, assurance engineers can benefit greatly from appropriate tool support. In order to do so, an assurance organization will need accurate and timely information on the tool support available for various types of assurance activities. In this paper, we investigate the current use of tool support for assurance organizations within NASA, and describe on-going work at JPL for providing assurance organizations with the information about tools they need to use them effectively.
Visualizing unstructured patient data for assessing diagnostic and therapeutic history.
Deng, Yihan; Denecke, Kerstin
2014-01-01
Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, we evaluate an approach to visualize information extracted from clinical documents by means of tag clouds. Tag clouds are generated using a bag-of-words approach and by exploiting part-of-speech tags. For a real-world data set comprising radiological reports, pathological reports and surgical operation reports, tag clouds are generated and a questionnaire-based study is conducted as evaluation. Feedback from the physicians shows that tag cloud visualization is an effective and rapid approach to representing relevant parts of unstructured patient data. To handle the different medical narratives, we summarize several possible improvements based on the user feedback and evaluation results.
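The bag-of-words step can be sketched briefly. The crude stopword filter below stands in for the part-of-speech tagging the authors use, and the sample report text is invented.

```python
# Sketch: tag-cloud weights from a bag-of-words over clinical-style text.
# The stopword filter is a crude stand-in for the paper's part-of-speech
# tagging; the sample report sentence is invented.
from collections import Counter
import math

STOPWORDS = {"the", "of", "and", "a", "in", "with", "no", "is", "was"}

def tag_cloud_weights(text, max_tags=5):
    """Return (word, relative_font_size) pairs for the most frequent content words."""
    words = [w.strip(".,;").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    top = counts.most_common(max_tags)
    if not top:
        return []
    biggest = top[0][1]
    # log scaling keeps very frequent terms from drowning out the rest
    return [(w, 1.0 + math.log(c) / math.log(biggest + 1)) for w, c in top]

report = ("CT of the thorax. Nodule in right upper lobe. "
          "Nodule unchanged. No pleural effusion.")
for word, size in tag_cloud_weights(report):
    print(f"{word}: {size:.2f}")
```

Repeated clinical findings ("nodule") surface with larger font weights, which is the effect that makes tag clouds a rapid overview of a document's diagnostic content.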
Secure and Privacy-Preserving Distributed Information Brokering
ERIC Educational Resources Information Center
Li, Fengjun
2010-01-01
As enormous structured, semi-structured and unstructured data are collected and archived by organizations in many realms ranging from business to health networks to government agencies, the needs for efficient yet secure inter-organization information sharing naturally arise. Unlike early information sharing approaches that only involve a small…
Data Storing Proposal from Heterogeneous Systems into a Specialized Repository
NASA Astrophysics Data System (ADS)
Václavová, Andrea; Tanuška, Pavol; Jánošík, Ján
2016-12-01
The aim of this paper is to analyze and propose an appropriate system for processing and simultaneously storing a vast volume of structured and unstructured data. The paper consists of three parts. The first part addresses the issue of structured and unstructured data. The second part provides a detailed analysis of data repositories and a subsequent evaluation indicating which system would be optimal for the given type and volume of data. The third part focuses on using the gathered information to transfer data to the proposed repository.
Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.
2009-01-01
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830
Automated extraction of family history information from clinical notes.
Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S; Winden, Tamara J; Carter, Elizabeth W; Melton, Genevieve B
2014-01-01
Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication ("indicator phrases"), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications.
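The F-scores reported above are the harmonic mean of precision and recall over extracted annotations. The counts in this sketch are invented for illustration, not taken from the paper's evaluation.

```python
# How F-scores like those reported above are computed: the harmonic mean
# of precision and recall. The counts are invented for illustration,
# not taken from the paper's evaluation.

def f_score(tp, fp, fn):
    """F1 from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correct family-member mentions, 10 spurious, 5 missed
print(round(100 * f_score(80, 10, 5), 1))  # prints 91.4
```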
Unstructured Socializing with Peers and Delinquent Behavior: A Genetically Informed Analysis.
Meldrum, Ryan C; Barnes, J C
2017-09-01
A large body of research finds that unstructured socializing with peers is positively associated with delinquency during adolescence. Yet, existing research has not ruled out the potential for confounding due to genetic factors and factors that can be traced to environments shared between siblings. To fill this void, the current study examines whether the association between unstructured socializing with peers and delinquent behavior remains when accounting for genetic factors, shared environmental influences, and a variety of non-shared environmental covariates. We do so by using data from the twin subsample of the National Longitudinal Study of Adolescent to Adult Health (n = 1200 at wave 1 and 1103 at wave 2; 51% male; mean age at wave 1 = 15.63). Results from both cross-sectional and lagged models indicate the association between unstructured socializing with peers and delinquent behavior remains when controlling for both genetic and environmental influences. Supplementary analyses examining the association under different specifications offer additional, albeit qualified, evidence supportive of this finding. The study concludes with a discussion highlighting the importance of limiting free time with friends in the absence of authority figures as a strategy for reducing delinquency during adolescence.
NASA Astrophysics Data System (ADS)
Davenport, Jack H.
2016-05-01
Intelligence analysts demand rapid information fusion capabilities to develop and maintain accurate situational awareness and understanding of dynamic enemy threats in asymmetric military operations. The ability to extract relationships between people, groups, and locations from a variety of text datasets is critical to proactive decision making. The derived network of entities must be automatically created and presented to analysts to assist in decision making. DECISIVE ANALYTICS Corporation (DAC) provides capabilities to automatically extract entities, relationships between entities, semantic concepts about entities, and network models of entities from text and multi-source datasets. DAC's Natural Language Processing (NLP) Entity Analytics model entities as complex systems of attributes and interrelationships which are extracted from unstructured text via NLP algorithms. The extracted entities are automatically disambiguated via machine learning algorithms, and resolution recommendations are presented to the analyst for validation; the analyst's expertise is leveraged in this hybrid human/computer collaborative model. Military capability is enhanced by these NLP Entity Analytics because analysts can now create/update an entity profile with intelligence automatically extracted from unstructured text, thereby fusing entity knowledge from structured and unstructured data sources. Operational and sustainment costs are reduced since analysts do not have to manually tag and resolve entities.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
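An edge-based pointer system of the kind mentioned above stores, for every edge, its endpoint nodes and the cells on either side, so that flux-style loops run over edges rather than cells. The layout below is an illustrative sketch, not the scheme's actual data structure.

```python
# Sketch of an edge-based pointer system for a triangular mesh: every edge
# records its two endpoint nodes and the cells on either side.
# Illustrative layout only, not the paper's actual structure.

def build_edge_structure(triangles):
    """Return list of (node_a, node_b, left_cell, right_cell); right_cell=-1 on boundary."""
    edges = {}
    for c, (v0, v1, v2) in enumerate(triangles):
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            key = (min(a, b), max(a, b))
            if key in edges:
                edges[key][1] = c          # second cell sharing this edge
            else:
                edges[key] = [c, -1]      # boundary until a partner appears
    return [(a, b, cells[0], cells[1]) for (a, b), cells in edges.items()]

mesh = [(0, 1, 2), (1, 3, 2)]              # two triangles sharing edge (1, 2)
for edge in build_edge_structure(mesh):
    print(edge)
```

Because the table is rebuilt from the cell list alone, it survives the adaptive remeshing step: each regenerated mesh simply gets a fresh edge structure.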
Hybrid Grid Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.
1996-01-01
During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block, and unstructured based procedures. Structured grids tend to be computationally efficient, and have high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and it is difficult to resolve viscous layers. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
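The edge-based pointer system described in the abstract can be sketched in a few lines. The sketch below is illustrative only (not the authors' code): for each mesh edge it records the pair of endpoint nodes and the triangles sharing that edge, which is the connectivity an edge-based flux loop needs.

```python
from collections import defaultdict

def build_edge_structure(triangles):
    """Build an edge-based pointer system for a triangular mesh.

    `triangles` is a list of (a, b, c) node-index triples. Each entry of
    the result is (n1, n2, cells): the edge's two endpoint nodes and the
    (one or two) triangles sharing it. Interior edges have two cells,
    boundary edges one.
    """
    edge_cells = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for n1, n2 in ((a, b), (b, c), (c, a)):
            # Sort the node pair so both triangles hash to the same edge.
            edge_cells[tuple(sorted((n1, n2)))].append(t)
    return [(n1, n2, cells) for (n1, n2), cells in sorted(edge_cells.items())]

# Two triangles sharing edge (1, 2): five edges total, one interior.
edges = build_edge_structure([(0, 1, 2), (1, 3, 2)])
```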
Survey of Knowledge Representation and Reasoning Systems
2009-07-01
processing large volumes of unstructured information such as natural language documents, email, audio, images and video [Ferrucci et al. 2006]. Using this ... information we hope to obtain improved estimation and prediction, data-mining, social network analysis, and semantic search and visualisation. Knowledge
Use of Unstructured Event-Based Reports for Global Infectious Disease Surveillance
Blench, Michael; Tolentino, Herman; Freifeld, Clark C.; Mandl, Kenneth D.; Mawudeku, Abla; Eysenbach, Gunther; Brownstein, John S.
2009-01-01
Free or low-cost sources of unstructured information, such as Internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. To improve public health surveillance and, ultimately, interventions, we examined 3 primary systems that process event-based outbreak information: Global Public Health Intelligence Network, HealthMap, and EpiSPIDER. Despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. Future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. Such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting. PMID:19402953
Jonnagaddala, Jitendra; Liaw, Siaw-Teng; Ray, Pradeep; Kumar, Manish; Dai, Hong-Jie; Hsu, Chien-Yeh
2015-01-01
Heart disease is the leading cause of death worldwide. Therefore, assessing the risk of its occurrence is a crucial step in predicting serious cardiac events. Identifying heart disease risk factors and tracking their progression is a preliminary step in heart disease risk assessment. A large number of studies have reported the use of risk factor data collected prospectively. Electronic health record systems are a rich source of the required risk factor data. Unfortunately, most of the valuable information on risk factors is buried in the form of unstructured clinical notes in electronic health records. In this study, we present an information extraction system that extracts information on heart disease risk factors from unstructured clinical notes using a hybrid approach. The hybrid approach employs both machine learning and rule-based clinical text mining techniques. The developed system achieved an overall microaveraged F-score of 0.8302.
OC-2-KB: A software pipeline to build an evidence-based obesity and cancer knowledge base.
Lossio-Ventura, Juan Antonio; Hogan, William; Modave, François; Guo, Yi; He, Zhe; Hicks, Amanda; Bian, Jiang
2017-11-01
Obesity has been linked to several types of cancer. Access to adequate health information activates people's participation in managing their own health, which ultimately improves their health outcomes. Nevertheless, the existing online information about the relationship between obesity and cancer is heterogeneous and poorly organized. A formal knowledge representation can help better organize and deliver quality health information. Currently, there are several efforts in the biomedical domain to convert unstructured data to structured data and store them in Semantic Web knowledge bases (KB). In this demo paper, we present OC-2-KB (Obesity and Cancer to Knowledge Base), a system tailored to guide automatic KB construction for managing obesity and cancer knowledge from free-text scientific literature (i.e., PubMed abstracts) in a systematic way. OC-2-KB has two important modules, which perform the acquisition of entities and the extraction and classification of relationships among these entities. We tested the OC-2-KB system on a data set of 23 manually annotated obesity and cancer PubMed abstracts and created a preliminary KB with 765 triples. We conducted a preliminary evaluation on this sample of triples and report our evaluation results.
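As a rough illustration of the kind of subject-predicate-object store a system like OC-2-KB populates from extracted entities and relations, consider the minimal in-memory triple store below. The class and method names are hypothetical, not OC-2-KB's actual API.

```python
class TripleKB:
    """Minimal in-memory subject-predicate-object store: a toy sketch
    of a Semantic Web style knowledge base populated from extracted
    entities and classified relations."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        """Insert one (subject, predicate, object) triple."""
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the given fields; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
```

For example, a relation extracted from an abstract becomes one `add` call, and downstream queries filter by predicate.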
NASA Astrophysics Data System (ADS)
Wong, Jaime G.; Rosi, Giuseppe A.; Rouhi, Amirreza; Rival, David E.
2017-10-01
Particle tracking velocimetry (PTV) produces high-quality temporal information that is often neglected when computing spatial gradients. A method is presented here to utilize this temporal information in order to improve the estimation of spatial gradients for spatially unstructured Lagrangian data sets. Starting with an initial guess, this method penalizes any gradient estimate where the substantial derivative of vorticity along a pathline is not equal to the local vortex stretching/tilting. Furthermore, given an initial guess, this method can proceed on an individual pathline without any further reference to neighbouring pathlines. The equivalence of the substantial derivative and vortex stretching/tilting is based on the vorticity transport equation, where viscous diffusion is neglected. By minimizing the residual of the vorticity-transport equation, the proposed method is first tested to reduce error and noise on a synthetic Taylor-Green vortex field dissipating in time. Furthermore, when the proposed method is applied to high-density experimental data collected with 'Shake-the-Box' PTV, noise within the spatial gradients is significantly reduced. In the particular test case investigated here of an accelerating circular plate captured during a single run, the method acts to delineate the shear layer and vortex core, as well as resolve the Kelvin-Helmholtz instabilities, which were previously unidentifiable without the use of ensemble averaging. The proposed method shows promise for improving PTV measurements that require robust spatial gradients while retaining the unstructured Lagrangian perspective.
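The penalized quantity can be illustrated with a minimal residual of the inviscid vorticity-transport equation along a single pathline. This sketch assumes sampled pathline data and a simple finite-difference substantial derivative; it is not the authors' actual implementation.

```python
import numpy as np

def vorticity_transport_residual(t, omega, stretching):
    """Residual of the inviscid vorticity-transport equation along one
    pathline: r = D(omega)/Dt - (omega . grad)u.

    `t` are pathline sample times, `omega` a vorticity component at those
    times, and `stretching` the corresponding vortex stretching/tilting
    term. A gradient estimate is penalized where r deviates from zero.
    """
    return np.gradient(omega, t, edge_order=2) - stretching
```

As a sanity check, a field whose vorticity grows as omega(t) = exp(k t) under stretching k*omega should give a residual near zero.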
Smart Extraction and Analysis System for Clinical Research.
Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung
2017-05-01
With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issues of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for descriptive and predictive analysis to compile survival statistics and predict the probability of survival. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS was evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system is 99% for semistructured text and 97% for unstructured text. Furthermore, automated extraction of unstructured information has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients were found in a terminal or dead state at the highest clinical stage of disease (level IV). Similarly, there is an approximately 36% probability of a patient being alive if at least one lifestyle risk factor was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy that human resources spend unnecessarily on manual tasks.
Towards an automated intelligence product generation capability
NASA Astrophysics Data System (ADS)
Smith, Alison M.; Hawes, Timothy W.; Nolan, James J.
2015-05-01
Creating intelligence information products is a time-consuming and difficult process for analysts faced with identifying key pieces of information relevant to a complex set of information requirements. Complicating matters, these key pieces of information exist in multiple modalities scattered across data stores, buried in huge volumes of data. This results in the current predicament in which analysts find themselves: information retrieval and management consumes huge amounts of time that could be better spent performing analysis. The persistent growth in data accumulation rates will only increase the amount of time spent on these tasks without a significant advance in automated solutions for information product generation. We present a product generation tool, Automated PrOduct Generation and Enrichment (APOGEE), which aims to automate the information product creation process in order to shift the bulk of the analysts' effort from data discovery and management to analysis. APOGEE discovers relevant text, imagery, video, and audio for inclusion in information products using semantic and statistical models of unstructured content. APOGEE's mixed-initiative interface, supported by highly responsive backend mechanisms, allows analysts to dynamically control the product generation process, ensuring a maximally relevant result. The combination of these capabilities significantly reduces the time it takes analysts to produce information products while helping to increase overall coverage. Through evaluation with a domain expert, APOGEE has been shown to have the potential to cut product generation time by a factor of 20. The result is a flexible end-to-end system that can be rapidly deployed in new operational settings.
Mehrabi, Saeed; Krishnan, Anand; Roch, Alexandra M; Schmidt, Heidi; Li, DingCheng; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, Max; Palakal, Mathew; Liu, Hongfang
2015-01-01
In this study we have developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6%, and 82.6%, respectively. Negation detection resulted in a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customizing the algorithm on the new dataset improves its performance.
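A toy stand-in for the rule-based stages (keyword relation discovery plus trigger-word negation detection) might look like the following. The patterns and function name are illustrative only, not the UIMA annotators used in the study.

```python
import re

# Hypothetical trigger lists, far smaller than a real clinical rule set.
FAMILY = r"(mother|father|brother|sister|aunt|uncle|grandmother|grandfather)"
NEGATION = r"\b(no|denies|negative for|without)\b"

def family_history_pancreatic_cancer(sentence):
    """Return (mention_found, negated) for one sentence.

    A mention requires both a family-member keyword and the phrase
    'pancreatic cancer'; a mention is flagged negated if a negation
    trigger also appears in the sentence.
    """
    s = sentence.lower()
    mention = bool(re.search(FAMILY, s) and re.search(r"pancreatic\s+cancer", s))
    negated = bool(mention and re.search(NEGATION, s))
    return mention, negated
```

Real systems scope negation triggers to a window around the concept rather than the whole sentence; this sketch ignores that refinement.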
EMPHASIS/Nevada UTDEM user guide. Version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Seidel, David Bruce; Pasik, Michael Francis
The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes. The geometries and the meshes thereof are limited only by the patience of the user in meshing and by the available computing resources for the solution. UTDEM solves Maxwell's equations using finite-element method (FEM) techniques on tetrahedral elements using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and need to be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) being accelerated by the electromagnetic forces upon the particle (Lorentz force). The motion of these particles results in a current, which is a source for the fields in Maxwell's equations.
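The macroparticle update described above (acceleration by the Lorentz force) is commonly implemented with a Boris-type velocity push; the sketch below is a generic non-relativistic example of that scheme, not UTDEM PIC's actual integrator.

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """One Boris step for a (macro)particle velocity under the Lorentz
    force q(E + v x B): a half electric kick, an exact-magnitude
    magnetic rotation, then a second half electric kick."""
    v_minus = v + 0.5 * q_over_m * dt * E          # first half electric kick
    t = 0.5 * q_over_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
    return v_plus + 0.5 * q_over_m * dt * E        # second half electric kick
```

The magnetic rotation preserves speed exactly, which is the property that makes the Boris scheme a standard choice for PIC codes.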
The contextual issues associated with sexual harassment experiences reported by registered nurses.
Madison, Jeanne; Minichiello, Victor
The study aimed to explore contextual conditions in Australian health care workplaces that make sex-based and sexual harassment (SB&SH) a relatively common experience for registered nurses (RNs). Unstructured, in-depth interviews were conducted with a convenience sample of Australian RNs. The informants were 16 RNs (15 female and one male), working in health care, who were students enrolled in advanced tertiary preparation in nursing, counselling, and health care management at an Australian university. Experiences described by the interview informants identified four conditions present in their workplaces when they experienced SB&SH. Informants noted: 1) the silence that surrounds harassment; 2) that they could not expect support from their peers and professional colleagues; 3) that education regarding SB&SH did not exist in their workplaces; and 4) that traditional stereotypes associated with RNs were closely linked to the experience of harassment in the workplace. Inadequate coverage of workplace issues related to SB&SH in undergraduate and postgraduate educational programs was also identified.
Roadmap to a Comprehensive Clinical Data Warehouse for Precision Medicine Applications in Oncology
Foran, David J; Chen, Wenjin; Chu, Huiqi; Sadimin, Evita; Loh, Doreen; Riedlinger, Gregory; Goodell, Lauri A; Ganesan, Shridar; Hirshfield, Kim; Rodriguez, Lorna; DiPaola, Robert S
2017-01-01
Leading institutions throughout the country have established Precision Medicine programs to support personalized treatment of patients. A cornerstone for these programs is the establishment of enterprise-wide Clinical Data Warehouses. Working shoulder-to-shoulder, a team of physicians, systems biologists, engineers, and scientists at Rutgers Cancer Institute of New Jersey have designed, developed, and implemented the Warehouse with information originating from data sources, including Electronic Medical Records, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology and Pathology archives, and Next Generation Sequencing services. Innovative solutions were implemented to detect and extract unstructured clinical information that was embedded in paper/text documents, including synoptic pathology reports. Supporting important precision medicine use cases, the growing Warehouse enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information of patient tumors individually or as part of large cohorts to identify changes and patterns that may influence treatment decisions and potential outcomes. PMID:28469389
Chute, Christopher G; Pathak, Jyotishman; Savova, Guergana K; Bailey, Kent R; Schor, Marshall I; Hart, Lacey A; Beebe, Calvin E; Huff, Stanley M
2011-01-01
SHARPn is a collaboration among 16 academic and industry partners committed to the production and distribution of high-quality software artifacts that support the secondary use of EMR data. Areas of emphasis are data normalization, natural language processing, high-throughput phenotyping, and data quality metrics. Our work avails the industrial scalability afforded by the Unstructured Information Management Architecture (UIMA) from IBM Watson Research labs, the same framework which underpins the Watson Jeopardy demonstration. This descriptive paper outlines our present work and achievements, and presages our trajectory for the remainder of the funding period. The project is one of the four Strategic Health IT Advanced Research Projects (SHARP) projects funded by the Office of the National Coordinator in 2010. PMID:22195076
Machine learning for Big Data analytics in plants.
Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng
2014-12-01
Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences.
Another Fine MeSH: Clinical Medicine Meets Information Science.
ERIC Educational Resources Information Center
O'Rourke, Alan; Booth, Andrew; Ford, Nigel
1999-01-01
Discusses evidence-based medicine (EBM) and the need for systematic use of databases like MEDLINE with more sophisticated search strategies to optimize the retrieval of relevant papers. Describes an empirical study of hospital libraries that examined requests for information and search strategies using both structured and unstructured forms.…
NASA Technical Reports Server (NTRS)
Fernandez, Becerra
2003-01-01
Expert Seeker is a computer program of the knowledge-management-system (KMS) type that falls within the category of expertise-locator systems. The main goal of the KMS implemented by Expert Seeker is to organize and distribute knowledge of who the domain experts are within and outside a given institution, company, or other organization. The intent in developing this KMS was to enable the reuse of organizational knowledge and provide a methodology for querying existing information (including structured, semistructured, and unstructured information) in a way that could help identify organizational experts. More specifically, Expert Seeker was developed to make it possible, by use of an intranet, to do any or all of the following: Assist an employee in identifying who has the skills needed for specific projects and determine whether the experts so identified are available. Assist managers in identifying employees who may need training opportunities. Assist managers in determining what expertise is lost when employees retire or otherwise leave. Facilitate the development of new ways of identifying opportunities for innovation and minimizing duplicated effort. Assist employees in achieving competitive advantages through the application of knowledge-management concepts and related systems. Assist external organizations in requesting speakers for specific engagements or determining from whom they might be able to request help via electronic mail. Help foster an environment of collaboration for rapid development in today's environment, in which it is increasingly necessary to assemble teams of experts from government, universities, research laboratories, and industry to quickly solve problems anytime, anywhere. Make experts more visible.
Provide a central repository of information about employees, including information that, heretofore, has typically not been captured by human-resources systems (e.g., information about past projects, patents, or hobbies). Unify myriad collections of data into a Web-enabled repository that can easily be searched for relevant data.
Wave Resource Characterization Using an Unstructured Grid Modeling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping
This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the data observed in 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package for predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).
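One of the six IEC wave resource parameters, omnidirectional wave power density, can be sketched from a 1D frequency spectrum. The sketch below assumes deep-water group velocity and a simple trapezoid quadrature; it is illustrative only, not the SWAN/WWIII implementation.

```python
import numpy as np

def wave_power_density(freqs, spectrum, rho=1025.0, g=9.81):
    """Omnidirectional wave power density J = rho * g * int(cg(f) S(f) df),
    in W per metre of wave crest, using the deep-water group velocity
    cg = g / (4 * pi * f).

    `freqs` are spectral frequencies in Hz, `spectrum` the variance
    density S(f) in m^2/Hz. Trapezoid rule integrates over frequency.
    """
    cg = g / (4.0 * np.pi * freqs)
    y = cg * spectrum
    return rho * g * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs))
```

For a flat unit spectrum the integral reduces to rho * g^2 / (4 pi) * ln(f_max / f_min), which provides a convenient analytic check.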
The upwind control volume scheme for unstructured triangular grids
NASA Technical Reports Server (NTRS)
Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.
1989-01-01
A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.
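Upwind differencing in its simplest form can be illustrated on 1D linear advection. This is a toy analogue of the idea, not the paper's unstructured control-volume scheme.

```python
import numpy as np

def upwind_advect(u, a, dt, dx, steps):
    """First-order upwind update for u_t + a * u_x = 0 on a periodic
    grid. Differencing is biased toward the direction the wave comes
    from, which keeps discontinuities sharp and oscillation-free
    (stable for CFL number |a * dt / dx| <= 1)."""
    c = a * dt / dx
    for _ in range(steps):
        if a >= 0:
            u = u - c * (u - np.roll(u, 1))    # backward difference
        else:
            u = u - c * (np.roll(u, -1) - u)   # forward difference
    return u
```

At CFL number exactly 1 the scheme translates the profile one cell per step with no smearing, a useful sanity check.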
Towards a Lifecycle Information Framework and Technology in Manufacturing.
Hedberg, Thomas; Feeney, Allison Barnard; Helu, Moneer; Camelio, Jaime A
2017-06-01
Industry has been chasing the dream of integrating and linking data across the product lifecycle and enterprises for decades. However, industry has been challenged by the fact that the context in which data is used varies based on the function / role in the product lifecycle that is interacting with the data. Holistically, the data across the product lifecycle must be considered an unstructured data-set because multiple data repositories and domain-specific schema exist in each phase of the lifecycle. This paper explores a concept called the Lifecycle Information Framework and Technology (LIFT). LIFT is a conceptual framework for lifecycle information management and the integration of emerging and existing technologies, which together form the basis of a research agenda for dynamic information modeling in support of digital-data curation and reuse in manufacturing. This paper provides a discussion of the existing technologies and activities that the LIFT concept leverages. Also, the paper describes the motivation for applying such work to the domain of manufacturing. Then, the LIFT concept is discussed in detail, while underlying technologies are further examined and a use case is detailed. Lastly, potential impacts are explored.
Seagrasses can colonize unstructured mudflats either through clonal growth or seed germination and survival. Zostera japonica is an introduced seagrass in North America that has rapidly colonized mudflats along the Pacific Coast, leading to active management of the species. Gro...
Social networks to biological networks: systems biology of Mycobacterium tuberculosis.
Vashisht, Rohit; Bhardwaj, Anshu; Osdd Consortium; Brahmachari, Samir K
2013-07-01
Contextualizing relevant information to construct a network that represents a given biological process presents a fundamental challenge in the network science of biology. The quality of the network for the organism of interest depends critically on the extent of functional annotation of its genome. Most automated annotation pipelines do not account for the unstructured information present in volumes of literature, and hence a large fraction of the genome remains poorly annotated. If used, however, this information could substantially enhance the functional annotation of a genome, aiding the development of a more comprehensive network. Mining unstructured information buried in volumes of literature often requires extensive manual intervention and thus becomes a bottleneck for most automated pipelines. In this review, we discuss the potential of scientific social networking as a solution for systematic manual mining of data. Focusing on Mycobacterium tuberculosis as a case study, we discuss our open, innovative approach to the functional annotation of its genome. Furthermore, we highlight the strength of such collated structured data in the context of drug target prediction based on systems-level analysis of the pathogen.
NASA Astrophysics Data System (ADS)
Brisc, Felicia; Vater, Stefan; Behrens, Joern
2016-04-01
We present the UGRID Reader, a visualization software component that implements the UGRID Conventions in ParaView. It currently supports the reading and visualization of 2D unstructured triangular, quadrilateral and mixed triangle/quadrilateral meshes, with data defined per cell or per vertex. The Climate and Forecast Metadata Conventions (CF Conventions) have long been the standard framework for climate data written in NetCDF format. While they allow unstructured data to be stored simply as data defined at a series of points, they do not currently address the topology of the underlying unstructured mesh. However, it is often necessary to have additional mesh topology information: is it a one-dimensional network, a 2D triangular mesh or a flexible mixed triangle/quadrilateral mesh, a 2D mesh with vertical layers, or a fully unstructured 3D mesh? The UGRID Conventions proposed by the UGRID Interoperability group attempt to fill this void by extending the CF Conventions with topology specifications. As the UGRID Conventions are increasingly popular with an important subset of the CF community, they warrant the development of a customized tool for the visualization and exploration of UGRID-conforming data. The implementation of the UGRID Reader follows the ParaView plugin architecture. This approach allowed us to tap into the powerful reading and rendering capabilities of ParaView, while keeping the reader easy to install. We aim at parallelism to be able to process large data sets. Furthermore, our current application of the reader is the visualization of higher-order simulation output, which demands a special representation of the data within a cell.
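Per the UGRID Conventions, a reader locates the dummy mesh topology variable by its `cf_role` attribute and reads connectivity variable names from its other attributes. The sketch below operates on a plain dict standing in for a NetCDF file's variables; the helper name is hypothetical, not part of the UGRID Reader.

```python
def find_ugrid_mesh(variables):
    """Locate the UGRID topology variable (cf_role = 'mesh_topology')
    and pull out the attributes a reader needs to assemble the mesh.

    `variables` maps variable name -> dict of NetCDF attributes, a toy
    stand-in for an open NetCDF dataset. Returns None if the file
    carries no UGRID topology.
    """
    for name, attrs in variables.items():
        if attrs.get("cf_role") == "mesh_topology":
            return {
                "mesh": name,
                "dimension": attrs.get("topology_dimension"),
                "node_coordinates": attrs.get("node_coordinates"),
                "face_node_connectivity": attrs.get("face_node_connectivity"),
            }
    return None
```

A 2D UGRID file typically carries a topology variable like `Mesh2` with `topology_dimension = 2` and a `face_node_connectivity` array naming the cell-to-node table.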
Prospects and expectations for unstructured methods
NASA Technical Reports Server (NTRS)
Baker, Timothy J.
1995-01-01
The last decade has witnessed a vigorous and sustained research effort on unstructured methods for computational fluid dynamics. Unstructured mesh generators and flow solvers have evolved to the point where they are now in use for design purposes throughout the aerospace industry. In this paper we survey the various mesh types, structured as well as unstructured, and examine their relative strengths and weaknesses. We argue that unstructured methodology does offer the best prospect for the next generation of computational fluid dynamics algorithms.
NASA Astrophysics Data System (ADS)
de la Llave Plata, M.; Couaillier, V.; Le Pape, M.-C.; Marmignon, C.; Gazaix, M.
2013-03-01
This paper reports recent work on the extension of the multiblock structured solver elsA to deal with hybrid grids. The new hybrid-grid solver, called elsA-H (elsA-Hybrid), is based on a new unstructured-grid module built within the original elsA CFD (computational fluid dynamics) system. The implementation benefits from the flexibility of the object-oriented design. The aim of elsA-H is to take advantage of the full potential of structured solvers and unstructured mesh generation by allowing any type of grid to be used within the same simulation process. The main challenge lies in the numerical treatment of the hybrid-grid interfaces where blocks of different types meet. In particular, one must pay attention to the transfer of information across these boundaries, so that the accuracy of the numerical scheme is preserved and flux conservation is guaranteed. In this paper, the numerical approach that achieves this is presented. A comparison between the hybrid and structured-grid methods is also carried out by considering a fully hexahedral multiblock mesh in which a few blocks have been converted to unstructured form. The performance of elsA-H for the simulation of internal flows is demonstrated on a number of turbomachinery configurations.
Generation of unstructured grids and Euler solutions for complex geometries
NASA Technical Reports Server (NTRS)
Loehner, Rainald; Parikh, Paresh; Salas, Manuel D.
1989-01-01
Algorithms are described for the generation and adaptation of unstructured grids in two and three dimensions, as well as Euler solvers for unstructured grids. The main purpose is to demonstrate how unstructured grids may be employed advantageously for the economic simulation of both geometrically as well as physically complex flow fields.
Erraguntla, Madhav; Zapletal, Josef; Lawley, Mark
2017-12-01
The impact of infectious disease on human populations is a function of many factors including environmental conditions, vector dynamics, transmission mechanics, social and cultural behaviors, and public policy. A comprehensive framework for disease management must fully connect the complete disease lifecycle, including emergence from reservoir populations, zoonotic vector transmission, and impact on human societies. The Framework for Infectious Disease Analysis is a software environment and conceptual architecture for data integration, situational awareness, visualization, prediction, and intervention assessment. Framework for Infectious Disease Analysis automatically collects biosurveillance data using natural language processing, integrates structured and unstructured data from multiple sources, applies advanced machine learning, and uses multi-modeling for analyzing disease dynamics and testing interventions in complex, heterogeneous populations. In the illustrative case studies, natural language processing from social media, news feeds, and websites was used for information extraction, biosurveillance, and situation awareness. Classification machine learning algorithms (support vector machines, random forests, and boosting) were used for disease predictions.
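The classification step described above can be illustrated with a small, dependency-free sketch. The study used support vector machines, random forests, and boosting; the nearest-centroid bag-of-words classifier below is a deliberately simplified stand-in for that pipeline, and the training snippets and labels are hypothetical:

```python
from collections import Counter
import math

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

# Toy labeled biosurveillance snippets (invented examples).
train = [
    ("outbreak of fever and vomiting reported in village", "disease"),
    ("hospital admits patients with severe diarrhea", "disease"),
    ("local festival draws large crowds downtown", "other"),
    ("city council votes on new road budget", "other"),
]

# Build one centroid (aggregate word-count vector) per class.
centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(tokenize(text))

def classify(text):
    words = Counter(tokenize(text))
    def score(label):
        c = centroids[label]
        dot = sum(words[w] * c[w] for w in words)
        norm = math.sqrt(sum(v * v for v in c.values())) or 1.0
        return dot / norm
    return max(centroids, key=score)

print(classify("clinic reports patients with fever"))  # disease
```

A real deployment would replace the centroid scoring with a trained SVM, random forest, or boosted ensemble over far richer features, as in the paper.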
Natural language processing: an introduction.
Nadkarni, Prakash M; Ohno-Machado, Lucila; Chapman, Wendy W
2011-01-01
To provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design. This tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art. We describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.
Integrating UIMA annotators in a web-based text processing framework.
Chen, Xiang; Arnold, Corey W
2013-01-01
The Unstructured Information Management Architecture (UIMA) [1] framework is a growing platform for natural language processing (NLP) applications. However, such applications may be difficult for non-technical users to deploy. This project presents a web-based framework that wraps UIMA-based annotator systems into a graphical user interface for researchers and clinicians, and a web service for developers. An annotator that extracts data elements from lung cancer radiology reports is presented to illustrate the use of the system. Annotation results from the web system can be exported to multiple formats for users to utilize in other aspects of their research and workflow. This project demonstrates the benefits of a lay-user interface for complex NLP applications. Efforts such as this can lead to increased interest and support for NLP work in the clinical domain.
UNSTRUCTURED INDIVIDUAL VARIATION AND DEMOGRAPHIC STOCHASTICITY. (R829088)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
McEwan, Reed; Melton, Genevieve B; Knoll, Benjamin C; Wang, Yan; Hultman, Gretchen; Dale, Justin L; Meyer, Tim; Pakhomov, Serguei V
2016-01-01
Many design considerations must be addressed in order to provide researchers with full text and semantic search of unstructured healthcare data such as clinical notes and reports. Institutions looking at providing this functionality must also address the big data aspects of their unstructured corpora. Because these systems are complex and demand a non-trivial investment, there is an incentive to make the system capable of servicing future needs as well, further complicating the design. We present architectural best practices as lessons learned in the design and implementation of NLP-PIER (Patient Information Extraction for Research), a scalable, extensible, and secure system for processing, indexing, and searching clinical notes at the University of Minnesota.
A Crime Analysis Decision Support System for Crime Report Classification and Visualization
ERIC Educational Resources Information Center
Ku, Chih-Hao
2012-01-01
Today's Internet-based crime reporting systems make timely and anonymous crime reporting possible. However, these reports also result in a rapidly growing set of unstructured text files. Complicating the problem is that the information has not been filtered or guided in a detective-led interview resulting in much irrelevant information. To…
A Holistic, Similarity-Based Approach for Personalized Ranking in Web Databases
ERIC Educational Resources Information Center
Telang, Aditya
2011-01-01
With the advent of the Web, the notion of "information retrieval" has acquired a completely new connotation and currently encompasses several disciplines ranging from traditional forms of text and data retrieval in unstructured and structured repositories to retrieval of static and dynamic information from the contents of the surface and deep Web.…
Technology support of the handover: promoting observability, flexibility and efficiency.
Patterson, Emily S
2012-12-01
Efforts to standardise data elements and increase the comprehensiveness of information included in patient handovers have produced a growing interest in augmenting the verbal exchange of information with written communications conducted through health information technology (HIT). The aim of this perspective is to offer recommendations to optimise technology support of handovers, based on a review of the relevant scientific literature. Review of the literature on human factors and the study of communication produced three recommendations. The first entails making 'shared knowledge' relevant to the handover and subsequent clinical management available to intended and unintended recipients. The second is to create a flexible narrative structure (unstructured text fields) for human-human communications facilitated by technology. The third recommendation is to avoid reliance on real-time data entry during busy periods. Implementing these recommendations is anticipated to increase the observability (the ability to readily determine current status), flexibility, and efficiency of HIT-supported patient handovers. Anticipated benefits of technology-supported handovers include reducing reliance on human memory, increasing the efficiency and structure of the verbal exchange, avoiding readbacks of numeric data, and aiding clinical management following the handover. In cases when verbal handovers are delayed, do not occur, or involve members of the health care team without first-hand access to critical information, making 'common ground' observable for all recipients, creating a flexible narrative structure for communication, and avoiding reliance on real-time data entry during the busiest times have implications for HIT design and day-to-day data entry and management operations. Benefits include increased observability, flexibility, and efficiency of HIT-supported patient handovers.
An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication
NASA Astrophysics Data System (ADS)
Li, Deng; Liu, Hui; Vasilakos, Athanasios
The term Autonomic Communication (AC) refers to self-managing systems capable of supporting self-configuration, self-healing, and self-optimization. However, information reflection and collection, lack of centralized control, and non-cooperation are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protection) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, the door has opened to migrating research techniques from P2P systems. P2P is best understood through a set of key characteristics it shares with AC: decentralized organization, a self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed, more specifically than structured systems, for the heterogeneous Internet environment, where node persistence and availability are not guaranteed. Motivated by the challenges in AC and based on a comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like, P2P overlays, the ESR is an entirely new three-dimensional structure that improves routing efficiency, while information exchange takes place among immediate neighbors using local information, making the system scalable and fault-tolerant. Furthermore, rather than a complex game-theoretic or incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure that guarantees the continuity of each node's record, in order to discourage negative behavior in an autonomous environment such as AC.
A survey of food safety training in small food manufacturers.
Worsfold, Denise
2005-08-01
A survey of food safety training was conducted in small food manufacturing firms in South Wales. Structured interviews with managers were used to collect information on the extent and level of food hygiene and HACCP training and the manager's perceptions of and attitude towards training. All the businesses surveyed had undertaken some hygiene training. Hygiene induction programmes were often unstructured and generally unrecorded. Low-risk production workers were usually trained on the job whilst high-care production staff were trained in hygiene to Level 1. Part-time and temporary staff received less training than full-timers. Regular refresher training was undertaken by less than half of the sample. None of the businesses made use of National Vocational Qualification (NVQ) qualifications. Over half of the managers/senior staff had undertaken higher levels of hygiene training and half had attended a HACCP course. Managers trained the workforce to operate the HACCP system. Formal training-related activities were generally only found in the larger businesses. Few of the manufacturers had made use of training consultants. Managers held positive attitudes towards training but most regarded it as operating expense rather than an investment. Resource poverty, in terms of time and money was perceived to be a major inhibiting factor to continual, systematic training.
Kreimeyer, Kory; Foster, Matthew; Pandey, Abhishek; Arya, Nina; Halford, Gwendolyn; Jones, Sandra F; Forshee, Richard; Walderhaug, Mark; Botsis, Taxiarchis
2017-09-01
We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP. Copyright © 2017 Elsevier Inc. All rights reserved.
Forrester, Joseph D; Pillai, Satish K; Beer, Karlyn D; Neatherlin, John; Massaquoi, Moses; Nyenswah, Tolbert G; Montgomery, Joel M; De Cock, Kevin
2014-10-10
Ebola virus disease (Ebola) is a multisystem disease caused by a virus of the genus Ebolavirus. In late March 2014, Ebola cases were described in Liberia, with epicenters in Lofa County and later in Montserrado County. While information about case burden and health care infrastructure was available for the two epicenters, little information was available about remote counties in southeastern Liberia. Over 9 days, August 6-14, 2014, Ebola case burden, health care infrastructure, and emergency preparedness were assessed in collaboration with the Liberian Ministry of Health and Social Welfare in four counties in southeastern Liberia: Grand Gedeh, Grand Kru, River Gee, and Maryland. Data were collected by health care facility visits to three of the four county referral hospitals and by unstructured interviews with county and district health officials, hospital administrators, physicians, nurses, physician assistants, and health educators in all four counties. Local burial practices were discussed with county officials, but no direct observation of burial practices was conducted. Basic information about Ebola surveillance and epidemiology, case investigation, contact tracing, case management, and infection control was provided to local officials.
Cooperation in marine affairs: Evidence from the Gulf of Thailand
NASA Astrophysics Data System (ADS)
Harakunarak, Ampai
1998-12-01
This study argues that the evolving process of interstate cooperation based upon power interests could operate regardless of the formal or expressed will of governments to make binding agreements. The study developed a comprehensive model for interstate cooperation in which state interests were part of conditions for the mutual problem-solving process. The model was then tested against three marine issues in the Gulf of Thailand: offshore hydrocarbon development associated with maritime boundary delimitation, marine fisheries, and marine pollution. Literature review, newspapers and periodicals, on-line databases, and unstructured interviews were primary sources of data. The analysis found that consultation accounted for cooperative interaction among the realist Gulf of Thailand states. Evidence from three tested marine issues suggested that frequent contact and subsequent effects of a greater exchange of knowledge and information can at least stabilize relationships between the states. Informality and non-binding nature of the interactive process offered the states the needed flexibility in designing and implementing effective marine management in the Gulf of Thailand. The study discussed a modified realist framework and some implications for a future study of informal regimes and compliance with interstate agreements.
Mehrabi, Saeed; Krishnan, Anand; Roch, Alexandra M; Schmidt, Heidi; Li, DingCheng; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, Max; Palakal, Mathew; Liu, Hongfang
2018-01-01
In this study we developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6%, and 82.6% respectively. Negation detection resulted in a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customization of the algorithm on the new dataset improves its performance. PMID:26262122
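The negation-detection step described above can be sketched with NegEx-style trigger rules. The trigger list and character window below are illustrative assumptions, not the authors' actual UIMA rule set:

```python
import re

# A few NegEx-style negation triggers (illustrative subset only).
NEGATION_TRIGGERS = [r"\bno\b", r"\bdenies\b", r"\bwithout\b", r"\bnegative for\b"]
TRIGGER_RE = re.compile("|".join(NEGATION_TRIGGERS), re.IGNORECASE)

def is_negated(sentence, concept):
    """Return True if `concept` appears within a short window
    after a negation trigger in `sentence`."""
    m = TRIGGER_RE.search(sentence)
    if not m:
        return False
    window = sentence[m.end():m.end() + 60]  # crude 60-char scope
    return concept.lower() in window.lower()

print(is_negated("Patient denies family history of pancreatic cancer",
                 "family history of pancreatic cancer"))  # True
print(is_negated("Strong family history of pancreatic cancer",
                 "family history"))  # False
```

Production systems refine the scope with termination terms and section context rather than a fixed character window.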
A conservative MHD scheme on unstructured Lagrangian grids for Z-pinch hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Wu, Fuyuan; Ramis, Rafael; Li, Zhenghong
2018-03-01
A new algorithm to model resistive magnetohydrodynamics (MHD) in Z-pinches has been developed. Two-dimensional axisymmetric geometry with azimuthal magnetic field Bθ is considered. Discretization is carried out using unstructured meshes made up of arbitrarily connected polygons. The algorithm is fully conservative for mass, momentum, and energy. Matter energy and magnetic energy are managed separately. The diffusion of the magnetic field is solved using a derivative of the Symmetric-Semi-Implicit scheme, Livne et al. (1985) [23], where unconditional stability is obtained without the need to solve large sparse systems of equations. This MHD package has been integrated into the radiation-hydrodynamics code MULTI-2D, Ramis et al. (2009) [20], which includes hydrodynamics, laser energy deposition, heat conduction, and radiation transport. This setup allows the simulation of Z-pinch configurations relevant to Inertial Confinement Fusion.
Appelhans, Bradley M; Li, Hong
2016-08-01
This study tested associations of organized sports participation and unstructured active play with overall moderate and vigorous physical activity (MVPA) in low-income children and examined factors associated with participation frequency. Research staff visited 88 low-income Chicago households with children ages 6-13 years. MVPA was assessed through 7-day accelerometry. Researchers documented the home availability of physical activity equipment. Caregivers reported on child participation in organized sports and unstructured active play, family support for physical activity, perceived neighborhood safety, and access to neighborhood physical activity venues. Despite similar participation in organized sports and unstructured active play, boys accumulated more MVPA than girls. MVPA was predicted by an interaction between gender and unstructured active play. Boys accumulated 23-45 additional minutes of weekday MVPA and 53-62 additional minutes of weekend MVPA through unstructured active play, with no such associations in girls. Higher reported neighborhood safety and family support for physical activity were associated with engagement in unstructured active play for both genders, and with participation in organized sports for girls. Physical activity interventions for low-income, urban children should emphasize unstructured active play, particularly in boys. Fostering family support for physical activity and safe play environments may be critical intervention components.
Big Data in the Information Age: Exploring the Intellectual Foundation of Communication Theory
ERIC Educational Resources Information Center
Borkovich, Debra J.; Noah, Philip D.
2014-01-01
Big Data are structured, semi-structured, unstructured, and raw data that are revolutionizing how we think about and use information in the 21st century. Big Data represents a paradigm shift from our prior use of traditional data assets over the past 30+ years, such as numeric and textual data, to generating and accessing petabytes and beyond of…
Implicit schemes and parallel computing in unstructured grid CFD
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
The development of implicit schemes for obtaining steady-state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. The development of explicit and implicit schemes to compute unsteady flows on unstructured grids is then discussed, followed by the issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.
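The grid-partitioning step mentioned above can be illustrated with a toy greedy scheme that grows each partition breadth-first over the mesh dual graph. Production partitioners use far more sophisticated multilevel or spectral methods; the six-cell adjacency below is an invented example:

```python
from collections import deque

def bfs_partition(adjacency, num_parts):
    """Greedy BFS partitioning of an unstructured-mesh dual graph:
    grow each partition breadth-first until it holds its share of
    cells. A simplified stand-in for production partitioners."""
    n = len(adjacency)
    target = -(-n // num_parts)  # ceil(n / num_parts)
    part = [-1] * n
    current = 0
    for seed in range(n):
        if part[seed] != -1:
            continue
        queue = deque([seed])
        while queue and part.count(current) < target:
            cell = queue.popleft()
            if part[cell] != -1:
                continue
            part[cell] = current
            queue.extend(nb for nb in adjacency[cell] if part[nb] == -1)
        if part.count(current) >= target:
            current = min(current + 1, num_parts - 1)
    return part

# Six cells in a strip (0-1-2-3-4-5), split across 2 processors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bfs_partition(adj, 2))  # [0, 0, 0, 1, 1, 1]
```

Real partitioners also minimize the edge cut between partitions, which governs inter-processor communication volume in the flow solver.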
Transforming Hierarchical Relationships in Student Conduct Administration
ERIC Educational Resources Information Center
Jacobson, Kelly A.
2013-01-01
Conflict transformation theory provided a philosophical lens for this critical cultural, constructivist study, wherein four student conduct administrators who engage in leveling hierarchical relationships with students in conduct processes shared ways they make meaning of their professional practice. Through informal, unstructured interviews, a…
Browning, Matthew H.E.M.; Marion, Jeffrey L.; Gregoire, Timothy G.
2013-01-01
Parks are developing nature play areas to improve children's health and “connect” them with nature. However, these play areas are often located in protected natural areas where managers must balance recreation with associated environmental impacts. In this exploratory study, we sought to describe these impacts. We also investigated which ages, gender, and play group sizes most frequently caused impact and where impacts most frequently occur. We measured the lineal and aerial extent and severity of impacts at three play areas in the eastern United States. Methods included soil and vegetation loss calculations, qualitative searches and tree and shrub damage classifications. Additionally, we observed 12 h of play at five play areas. Results showed that measurable negative impacts were caused during 33% of the time children play. On average, 76% of groundcover vegetation was lost at recreation sites and 100% was lost at informal trails. In addition, approximately half of all trees and shrubs at sites were damaged. Meanwhile, soil exposure was 25% greater on sites and trails than at controls. Boys and small group sizes more frequently caused impact, and informal recreation sites were most commonly used for play. No statistically significant correlations were found between age or location and impact frequency. Managers interested in developing nature play areas should be aware of, but not deterred by these impacts. The societal benefits of unstructured play in nature may outweigh the environmental costs. Recommended management strategies include selecting impact-resistant sites, improving site resistance, promoting low impact practices, and managing adaptively.
De-identification of unstructured paper-based health records for privacy-preserving secondary use.
Fenz, Stefan; Heurix, Johannes; Neubauer, Thomas; Rella, Antonio
2014-07-01
Whenever personal data is processed, privacy is a serious issue. Especially in the document-centric e-health area, the patients' privacy must be preserved in order to prevent any negative repercussions for the patient. Clinical research, for example, demands structured health records to carry out efficient clinical trials, whereas legislation (e.g. HIPAA) regulates that only de-identified health records may be used for research. However, unstructured and often paper-based data dominates information technology, especially in the healthcare sector. Existing approaches are geared towards data in English-language documents only and have not been designed to handle the recognition of erroneous personal data which is the result of the OCR-based digitization of paper-based health records.
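A minimal sketch of the rule-based de-identification discussed above, assuming simple regex patterns for a few identifier types. Real systems must cover all HIPAA identifier classes and tolerate OCR noise in scanned paper records; the note text below is invented:

```python
import re

# Mask dates, title+name patterns, and SSNs (illustrative subset).
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b(?:Dr|Mr|Mrs|Ms)\.\s+[A-Z][a-z]+"), "[NAME]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def deidentify(text):
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Seen by Dr. Huber on 03/14/2013; SSN 123-45-6789 on file."
print(deidentify(note))
# Seen by [NAME] on [DATE]; SSN [SSN] on file.
```

OCR-damaged identifiers (e.g. a digit misread as a letter) defeat exact patterns like these, which is precisely the gap the abstract points out.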
Dua, Anahita; Sudan, Ranjan; Desai, Sapan S
2014-01-01
The American Board of Surgery In-Training Examination (ABSITE) is a predictor of resident performance on the general surgery-qualifying examination and plays a role in obtaining competitive fellowships. A learning management system (LMS) permits the delivery of a structured curriculum that appeals to the modern resident owing to the ease of accessibility and all-in-one organization. This study hypothesizes that trainees using a structured surgeon-directed LMS will achieve improved ABSITE scores compared with those using an unstructured approach to the examination. A multidisciplinary print and digital review course with practice questions, review textbooks, weekly reading assignments, and slide and audio reviews integrated within an online LMS was made available to postgraduate year (PGY)-3 and PGY-4 residents in 2008 and 2009. Surveys were emailed requesting ABSITE scores to compare outcomes in those trainees that used the course with those who used an unstructured approach. Statistical analysis was conducted via descriptive statistics and Pearson chi-square with p < 0.05 deemed statistically significant. Surveys were mailed to 508 trainees. There was an 80% (408) response rate. Residents who used structured approaches in both the years achieved the highest scores, followed by those who adopted a structured approach in PGY-4. The residents using an unstructured approach in both the years showed no significant improvement. Residents who used a structured LMS performed significantly better than their counterparts who used an unstructured approach. A properly constructed online education curriculum has the potential to improve ABSITE scores. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A survey of simultaneous localization and mapping on unstructured lunar complex environment
NASA Astrophysics Data System (ADS)
Wang, Yiqiao; Zhang, Wei; An, Pei
2017-10-01
Simultaneous localization and mapping (SLAM) technology is the key to realizing a lunar rover's intelligent perception and autonomous navigation. It embodies the autonomous ability of a mobile robot and has attracted considerable attention from researchers over the past thirty years. Visual sensors are valuable to SLAM research because they provide a wealth of information. Visual SLAM uses only images as external information to estimate the location of the robot and construct the environment map. Nowadays, SLAM technology still faces problems when applied in large-scale, unstructured, and complex environments. Based on the latest developments in the field of visual SLAM, this paper investigates and summarizes SLAM technology for the unstructured, complex environment of the lunar surface. In particular, we focus on summarizing and comparing feature detection and matching with SIFT, SURF, and ORB, discussing their respective advantages and disadvantages. We analyze the three main methods: SLAM based on the Extended Kalman Filter, SLAM based on the Particle Filter, and SLAM based on graph optimization (EKF-SLAM, PF-SLAM, and Graph-based SLAM). Finally, this article summarizes and discusses the key scientific and technical difficulties that visual SLAM faces in the lunar context. At the same time, we explore frontier issues such as multi-sensor fusion SLAM and multi-robot cooperative SLAM technology. We also anticipate the development trends of lunar rover SLAM technology and put forward some ideas for further research.
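Matching binary descriptors such as ORB typically reduces to Hamming-distance comparison with a ratio test. The sketch below uses toy 16-bit descriptors (real ORB descriptors are 256-bit) and is not tied to any specific SLAM system in the survey:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors (ints)."""
    return bin(a ^ b).count("1")

def match_features(desc_a, desc_b, ratio=0.8):
    """Brute-force matching with a Lowe-style ratio test: accept a
    match only if the best distance is clearly smaller than the
    second best, which rejects ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

frame1 = [0b1010101010101010, 0b1111000011110000]
frame2 = [0b1010101010101011, 0b0000111100001111, 0b1111000011110001]
print(match_features(frame1, frame2))  # [(0, 0), (1, 2)]
```

In a visual-SLAM front end, these matches would feed pose estimation (e.g. via RANSAC on the essential matrix) before the EKF, particle-filter, or graph back end.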
A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data
NASA Astrophysics Data System (ADS)
Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing
2017-10-01
Electric power dispatching is the center of the whole power system. Over a long period of operation, the power dispatching center has accumulated a large amount of data. These data are currently stored in different power-sector professional systems, forming many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a panoramic power grid data center, effectively meeting the storage needs of power dispatching big data, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can serve as a solid foundation for subsequent in-depth analysis of power dispatching big data.
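The hybrid relational-plus-document idea can be sketched with standard-library tools: SQLite holds structured telemetry, while a simple document table of JSON blobs stands in for the NoSQL store. All table and field names below are illustrative, not from the paper:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
# Structured measurements in a relational table.
db.execute("CREATE TABLE telemetry (line_id TEXT, load_mw REAL, ts TEXT)")
# Unstructured dispatch logs as JSON documents (NoSQL stand-in).
db.execute("CREATE TABLE documents (doc_id TEXT PRIMARY KEY, body TEXT)")

db.execute("INSERT INTO telemetry VALUES ('L42', 118.5, '2017-10-01T12:00')")
db.execute("INSERT INTO documents VALUES (?, ?)",
           ("evt-001", json.dumps({"operator": "shift-3",
                                   "note": "breaker reclosed after fault"})))

# Structured query and document lookup through one facade.
load, = db.execute("SELECT load_mw FROM telemetry WHERE line_id='L42'").fetchone()
doc = json.loads(db.execute(
    "SELECT body FROM documents WHERE doc_id='evt-001'").fetchone()[0])
print(load, doc["note"])  # 118.5 breaker reclosed after fault
```

A production architecture would place a true distributed NoSQL store behind the same facade to handle massive real-time data volumes.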
Towards a Lifecycle Information Framework and Technology in Manufacturing
Hedberg, Thomas; Feeney, Allison Barnard; Helu, Moneer; Camelio, Jaime A.
2016-01-01
Industry has been chasing the dream of integrating and linking data across the product lifecycle and enterprises for decades. However, industry has been challenged by the fact that the context in which data is used varies based on the function / role in the product lifecycle that is interacting with the data. Holistically, the data across the product lifecycle must be considered an unstructured data-set because multiple data repositories and domain-specific schema exist in each phase of the lifecycle. This paper explores a concept called the Lifecycle Information Framework and Technology (LIFT). LIFT is a conceptual framework for lifecycle information management and the integration of emerging and existing technologies, which together form the basis of a research agenda for dynamic information modeling in support of digital-data curation and reuse in manufacturing. This paper provides a discussion of the existing technologies and activities that the LIFT concept leverages. Also, the paper describes the motivation for applying such work to the domain of manufacturing. Then, the LIFT concept is discussed in detail, while underlying technologies are further examined and a use case is detailed. Lastly, potential impacts are explored. PMID:28265224
Big data: the management revolution.
McAfee, Andrew; Brynjolfsson, Erik
2012-10-01
Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.
Searching Across the International Space Station Databases
NASA Technical Reports Server (NTRS)
Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana
2007-01-01
Data access in the enterprise generally requires us to combine data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and more complicated than necessary. This paper proposes a context search over multiple domains that uses context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML that is used widely at the National Aeronautics and Space Administration (NASA) and in industry.
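A context-sensitive query over schemaless XML, in the spirit of the NX-Search/NETMARK description above, might look like the following sketch. The document contents, tag names, and the `context_search` helper are illustrative assumptions, not NETMARK's actual API.

```python
import xml.etree.ElementTree as ET

# two tiny schemaless XML documents standing in for an enterprise corpus
DOCS = {
    "report-1": "<doc><title>Arrival times</title>"
                "<body>estimated arrival data</body></doc>",
    "report-2": "<doc><title>Crew notes</title>"
                "<body>unstructured log entries</body></doc>",
}

def index(docs):
    # flatten each XML document into (doc_id, element-tag, text) triples,
    # with no schema assumed beyond well-formedness
    rows = []
    for doc_id, xml in docs.items():
        for elem in ET.fromstring(xml).iter():
            if elem.text and elem.text.strip():
                rows.append((doc_id, elem.tag, elem.text.strip().lower()))
    return rows

def context_search(rows, term, context=None):
    # a "context" here is simply a tag name that restricts where the
    # term may match; omitting it searches all element text
    return sorted({doc for doc, tag, text in rows
                   if term in text and (context is None or tag == context)})

rows = index(DOCS)
```

The idea mirrors the abstract: the element tags supply lightweight context without requiring an agreed schema across sources.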
The Feasibility of Adaptive Unstructured Computations On Petaflops Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Heber, Gerd; Gao, Guang; Saini, Subhash (Technical Monitor)
1999-01-01
This viewgraph presentation covers the advantages of mesh adaptation, unstructured grids, and dynamic load balancing. It illustrates parallel adaptive communications, and explains PLUM (Parallel dynamic load balancing for adaptive unstructured meshes), and PSAW (Proper Self Avoiding Walks).
Best Practices for Unstructured Grid Shock-Fitting
NASA Technical Reports Server (NTRS)
McCloud, Peter L.
2017-01-01
Unstructured grid solvers have well-known issues predicting surface heat fluxes when strong shocks are present. Various efforts have been made to address the underlying numerical issues that cause the erroneous predictions. The present work addresses some of the shortcomings of unstructured grid solvers, not by addressing the numerics, but by applying structured grid best practices to unstructured grids. A methodology for robust shock detection and shock-fitting is outlined and applied to production-relevant cases. Results achieved by using the Loci-CHEM Computational Fluid Dynamics solver are provided.
Adaption of unstructured meshes using node movement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, J.G.; McRae, V.D.S.
1996-12-31
The adaption algorithm of Benson and McRae is modified for application to unstructured grids: the weight function generation is adapted accordingly, and node movement is limited to prevent crossover. A NACA 0012 airfoil is used as a test case to evaluate the modified algorithm on unstructured grids, with results compared to those obtained by Warren. An adaptive mesh solution for the Sudhoo and Hall four-element airfoil is included as a demonstration case.
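The abstract's two ingredients, a solution-gradient weight function and movement limiting to prevent crossover, can be sketched in one dimension. The weight `1 + |du/dx|`, the relaxation factor, and the clipping fraction below are generic illustrative choices, not the Benson and McRae coefficients.

```python
def adapt_nodes(x, u, relax=0.5, max_frac=0.4):
    """One pass of weight-driven node movement on a 1-D mesh.
    Nodes drift toward regions of high solution gradient; each move
    is clipped to a fraction of the local spacing so that neighbouring
    nodes can never cross over."""
    n = len(x)
    # weight on each interval: 1 + |du/dx| (a common adaption weight)
    w = [1.0 + abs((u[i + 1] - u[i]) / (x[i + 1] - x[i]))
         for i in range(n - 1)]
    new_x = list(x)
    for i in range(1, n - 1):
        # spring analogy: the heavier interval pulls the node toward it
        target = (w[i - 1] * x[i - 1] + w[i] * x[i + 1]) / (w[i - 1] + w[i])
        move = relax * (target - x[i])
        lo = max_frac * (x[i] - x[i - 1])   # crossover limits
        hi = max_frac * (x[i + 1] - x[i])
        new_x[i] = x[i] + min(max(move, -lo), hi)
    return new_x

x = [0.0, 0.25, 0.5, 0.75, 1.0]
u = [0.0, 0.0, 0.0, 1.0, 1.0]   # sharp feature between x=0.5 and x=0.75
x2 = adapt_nodes(x, u)
```

After one pass the nodes bracketing the sharp feature move closer together while the mesh stays monotone, which is precisely what the movement limiter guarantees.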
Domain-independent information extraction in unstructured text
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irwin, N.H.
Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal, as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.
Towards organizing health knowledge on community-based health services.
Akbari, Mohammad; Hu, Xia; Nie, Liqiang; Chua, Tat-Seng
2016-12-01
Online community-based health services accumulate a huge amount of unstructured health question answering (QA) records at a continuously increasing pace. The ability to organize these health QA records has been found to be effective for data access. Existing approaches for organizing information are often not applicable to the health domain because of its nature: complex relations among entities, a large vocabulary gap, and heterogeneous users. To tackle these challenges, we propose a top-down organization scheme, which can automatically assign the unstructured health-related records into a hierarchy with prior domain knowledge. Besides automatic hierarchy prototype generation, it also enables each data instance to be associated with multiple leaf nodes and profiles each node with terminologies. Based on this scheme, we design a hierarchy-based health information retrieval system. Experiments on a real-world dataset demonstrate the effectiveness of our scheme in organizing health QA into a topic hierarchy and retrieving health QA records from the topic hierarchy.
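A toy version of such top-down assignment, with a hand-made two-level hierarchy and terminology profiles per node, could look like this. The hierarchy, term lists, and `assign` function are invented for illustration; note that a record may land in multiple leaves, as the scheme requires.

```python
# hypothetical two-level topic hierarchy with terminology profiles
HIERARCHY = {
    "cardiology": {"hypertension": {"blood pressure", "hypertension"},
                   "arrhythmia": {"palpitations", "arrhythmia"}},
    "nutrition":  {"diet": {"diet", "calories"},
                   "vitamins": {"vitamin", "supplement"}},
}

def assign(record):
    """Top-down assignment of an unstructured QA record: walk the
    hierarchy and collect every leaf whose terminology profile matches;
    one record may be associated with several leaves."""
    text = record.lower()
    leaves = []
    for branch, children in HIERARCHY.items():
        for leaf, terms in children.items():
            if any(t in text for t in terms):
                leaves.append((branch, leaf))
    return sorted(leaves)

q = "does my diet affect blood pressure"
```

A real system would learn the node profiles from prior domain knowledge rather than hard-code them, but the multi-leaf behaviour is the same.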
Romagnoli, Katrina M; Nelson, Scott D; Hines, Lisa; Empey, Philip; Boyce, Richard D; Hochheiser, Harry
2017-02-22
Drug information compendia and drug-drug interaction information databases are critical resources for clinicians and pharmacists working to avoid adverse events due to exposure to potential drug-drug interactions (PDDIs). Our goal is to develop information models, annotated data, and search tools that will facilitate the interpretation of PDDI information. To better understand the information needs and work practices of specialists who search and synthesize PDDI evidence for drug information resources, we conducted an inquiry that combined a thematic analysis of published literature with unstructured interviews. Starting from an initial set of relevant articles, we developed search terms and conducted a literature search. Two reviewers conducted a thematic analysis of included articles. Unstructured interviews with drug information experts were conducted and similarly coded. Information needs, work processes, and indicators of potential strengths and weaknesses of information systems were identified. Review of 92 papers and 10 interviews identified 56 categories of information needs related to the interpretation of PDDI information, including drug and interaction information; study design; evidence including clinical details, quality and content of reports, and consequences; and potential recommendations. We also identified strengths and weaknesses of PDDI information systems. We identified the kinds of information that might be most effective for summarizing PDDIs. The drug information experts we interviewed had differing goals, suggesting a need for detailed information models and flexible presentations. Several information needs not discussed in previous work were identified, including temporal overlaps in drug administration, biological plausibility of interactions, and assessment of the quality and content of reports. Richly structured depictions of PDDI information may help drug information experts more effectively interpret data and develop recommendations. Effective information models and system designs will be needed to maximize the utility of this information.
Craig M. Thompson; J. Andrew Royle; James D. Garner
2012-01-01
Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the...
Ithuralde, Raúl Esteban; Roitberg, Adrián Enrique; Turjanski, Adrián Gustavo
2016-07-20
Intrinsically disordered proteins (IDPs) are a set of proteins that lack a definite secondary structure in solution. IDPs can acquire tertiary structure when bound to their partners; therefore, the recognition process must also involve protein folding. The nature of the transition state (TS), structured or unstructured, determines the binding mechanism. The characterization of the TS has become a major challenge for experimental techniques and molecular simulation approaches, since diffusion, recognition, and binding are coupled to folding. In this work we present atomistic molecular dynamics (MD) simulations that sample the free energy surface of the coupled folding and binding of the transcription factor c-myb to the cotranscription factor CREB binding protein (CBP). This process has been studied recently and has become a model for studying IDPs. Despite the plethora of available information, we still do not know how c-myb binds to CBP. We performed a set of atomistic biased MD simulations totaling 15.6 μs. Our results show that c-myb folds very fast upon binding to CBP, with no unique pathway for binding. The process can proceed through either structured or unstructured TSs with similar probabilities. This finding reconciles previously seemingly contradictory experimental results. We also performed Go-type coarse-grained MD of several structured and unstructured models, which indicates that coupled folding and binding follows a native contact mechanism. To the best of our knowledge, this is the first atomistic MD simulation that samples the free energy surface of the coupled folding and binding processes of IDPs.
NASA Technical Reports Server (NTRS)
Kleb, W. L.
1994-01-01
Steady flow over the leading portion of a multicomponent airfoil section is studied using computational fluid dynamics (CFD) employing an unstructured grid. To simplify the problem, only the inviscid terms are retained from the Reynolds-averaged Navier-Stokes equations - leaving the Euler equations. The algorithm is derived using the finite-volume approach, incorporating explicit time-marching of the unsteady Euler equations to a time-asymptotic, steady-state solution. The inviscid fluxes are obtained through either of two approximate Riemann solvers: Roe's flux difference splitting or van Leer's flux vector splitting. Results are presented which contrast the solutions given by the two flux functions as a function of Mach number and grid resolution. Additional information is presented concerning code verification techniques, flow recirculation regions, convergence histories, and computational resources.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1992-01-01
One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids, on the other hand, offer an alternative for modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids pose new challenges in CFD. The purpose of this note is to present recent developments in unstructured grid generation and flow solution technology.
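The irregular connectivity mentioned above is what distinguishes unstructured from structured meshes in code: it has to be stored explicitly rather than implied by array indices. A minimal sketch for a triangular mesh, using hypothetical edge-to-cell and neighbour maps rather than any production data structure:

```python
from collections import defaultdict

# a tiny unstructured triangular mesh: two triangles sharing edge (1, 2)
triangles = [(0, 1, 2), (1, 3, 2)]

def edge_to_cells(tris):
    """Map each undirected edge to the cells that contain it; on a
    structured grid this relation would be implicit in (i, j) indexing."""
    e2c = defaultdict(list)
    for c, (a, b, d) in enumerate(tris):
        for u, v in ((a, b), (b, d), (d, a)):
            e2c[tuple(sorted((u, v)))].append(c)
    return e2c

def neighbours(tris):
    # two cells are neighbours when they share an edge; this is the
    # adjacency a finite-volume flux loop iterates over
    e2c = edge_to_cells(tris)
    nbr = defaultdict(set)
    for cells in e2c.values():
        for c in cells:
            nbr[c].update(k for k in cells if k != c)
    return nbr

e2c = edge_to_cells(triangles)
nbr = neighbours(triangles)
```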
Ongoing Activities to Facilitate Access to Supplementary Materials for Cartographic Education.
ERIC Educational Resources Information Center
Anderson, Paul S.
A wealth of unpublished or unstructured educational materials for all aspects of cartographic instruction are widely dispersed and unnecessarily difficult to obtain. The Cartography Assistance Brochures Project of the Cartography Specialty Group of the Association of American Geographers (AAG), the North American Cartographic Information Society,…
The Effects of Unstructured Group Discussion on Ethical Judgment
ERIC Educational Resources Information Center
Richards, Clinton H.; Alder, G. Stoney
2014-01-01
The authors examine the effects of shared information and group discussion on ethical judgment when no structure is imposed on the discussion to encourage ethical considerations. Discussants were asked to identify arguments for and against a variety of business behaviors with ethical implications. A group moderator solicited and recorded arguments…
White Teachers' Reactions to the Racial Treatment of Middle-School Black Boys
ERIC Educational Resources Information Center
Battle, Stefan
2017-01-01
This qualitative exploratory study, informed by grounded theory, used questionnaires and unstructured interviews based on fictionalized vignettes to examine urban, public, middle-school White teachers' attitudes about middle-school Black boys, questioning whether and how such attitudes might influence classroom interactions. Twenty-four…
Labyrinth, An Abstract Model for Hypermedia Applications. Description of its Static Components.
ERIC Educational Resources Information Center
Diaz, Paloma; Aedo, Ignacio; Panetsos, Fivos
1997-01-01
The model for hypermedia applications called Labyrinth allows: (1) the design of platform-independent hypermedia applications; (2) the categorization, generalization and abstraction of sparse unstructured heterogeneous information in multiple and interconnected levels; (3) the creation of personal views in multiuser hyperdocuments for both groups…
Get It Together: Integrating Data with XML.
ERIC Educational Resources Information Center
Miller, Ron
2003-01-01
Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…
NASA Astrophysics Data System (ADS)
Stefanski, Douglas Lawrence
A finite volume method for solving the Reynolds-averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN, a multi-block structured grid code, as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, further development of the standalone unstructured solver toward large-eddy simulation (LES) applications is also carried out. Dual time-stepping using a Crank-Nicolson formulation is added to recover time-accuracy, and modeling of sub-grid scale effects is incorporated to provide higher-fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.
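The Ducros-type switch described above can be sketched as a scalar sensor that compares dilatation to vorticity: it approaches 1 in compression-dominated (shock) regions and 0 where vorticity dominates. The exact functional form and the blending threshold below are common textbook choices, not necessarily those of the thesis.

```python
def ducros_sensor(div_u, vort_mag, eps=1e-12):
    """Ducros-type shock sensor: (div u)^2 / ((div u)^2 + |omega|^2 + eps).
    Near 1 at shocks (strong compression), near 0 in vortical regions."""
    d2 = div_u * div_u
    return d2 / (d2 + vort_mag * vort_mag + eps)

def blended_flux(f_upwind, f_central, phi, threshold=0.5):
    # use the dissipative monotonicity-preserving flux only where the
    # sensor fires; elsewhere keep the low-dissipation central flux
    return f_upwind if phi > threshold else f_central

shock_phi = ducros_sensor(div_u=-50.0, vort_mag=1.0)   # strong compression
turb_phi = ducros_sensor(div_u=-0.1, vort_mag=20.0)    # vortical region
```

In a real solver the blend is usually continuous rather than a hard threshold, but the selection logic is the same.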
Current Searching Methodology and Retrieval Issues: An Assessment
2008-03-01
…searching that are used by search engines are discussed. They are: full text searching, i.e., the searching of unstructured data, and metadata searching… also found among search engines; however, it is the popularity of full text searching that has changed the road map to information access. … On the other hand, information seekers' willingness, or lack thereof, to learn the multiple search engines' capabilities may diminish their search results…
Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning.
Feng, Yuntian; Zhang, Hongjun; Hao, Wenning; Chen, Gang
2017-01-01
We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function per step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method can represent the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-learning algorithm to obtain the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and gets a 2.4% increase in recall score. PMID:28894463
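The two-step decision process can be caricatured with tabular Q-learning on a toy example: step 1 decides whether an entity pair is valid, step 2 assigns a relation, and the per-step reward couples the two decisions. The action names, reward table, and hyperparameters below are invented stand-ins for the paper's learned LSTM states.

```python
import random

random.seed(0)

# toy two-step decision process: step 0 accepts/rejects an entity pair,
# step 1 assigns a relation label
ACTIONS = {0: ["accept-pair", "reject-pair"],
           1: ["works-for", "located-in", "none"]}
# hypothetical rewards for a sentence whose true relation is "works-for"
REWARD = {("accept-pair", "works-for"): 1.0,
          ("accept-pair", "located-in"): -1.0,
          ("accept-pair", "none"): -0.5}

def q_learning(episodes=500, alpha=0.2, eps=0.1):
    Q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS[s]}
    for _ in range(episodes):
        a0 = (random.choice(ACTIONS[0]) if random.random() < eps
              else max(ACTIONS[0], key=lambda a: Q[(0, a)]))
        if a0 == "reject-pair":
            Q[(0, a0)] += alpha * (0.0 - Q[(0, a0)])  # episode ends, r = 0
            continue
        a1 = (random.choice(ACTIONS[1]) if random.random() < eps
              else max(ACTIONS[1], key=lambda a: Q[(1, a)]))
        r = REWARD[(a0, a1)]
        Q[(1, a1)] += alpha * (r - Q[(1, a1)])        # terminal update
        # bootstrap step-0 value from the best step-1 action
        Q[(0, a0)] += alpha * (max(Q[(1, a)] for a in ACTIONS[1]) - Q[(0, a0)])
    return Q

Q = q_learning()
policy = {s: max(ACTIONS[s], key=lambda a: Q[(s, a)]) for s in ACTIONS}
```

The learned policy accepts the pair and picks the rewarded relation, which is the feedback-passing behaviour the abstract describes, just without the neural state representations.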
Managing Content in a Matter of Minutes
NASA Technical Reports Server (NTRS)
2004-01-01
NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.
Onstad, David; Crain, Philip; Crespo, Andre; Hutchison, William; Buntin, David; Porter, Pat; Catchot, Angus; Cook, Don; Pilcher, Clint; Flexner, Lindsey; Higgins, Laura
2016-01-01
We created a deterministic, frequency-based model of the evolution of resistance by corn earworm, Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae), to insecticidal traits expressed in crops planted in the heterogeneous landscapes of the southern United States. The model accounts for four generations of selection by insecticidal traits each year. We used the model results to investigate the influence of three factors on insect resistance management (IRM): 1) how does adding a third insecticidal trait to both corn and cotton affect durability of the products, 2) how does unstructured corn refuge influence IRM, and 3) how do block refuges (50% compliance) and blended refuges compare with regard to IRM? When Bt cotton expresses the same number of insecticidal traits, Bt corn with three insecticidal traits provides longer durability than Bt corn with two pyramided traits. Blended refuge provides similar durability for corn products compared with the same level of required block refuge when the rate of refuge compliance by farmers is 50%. Results for Mississippi and Texas are similar, but durabilities for corn traits are surprisingly lower in Georgia, where unstructured corn refuge is the highest of the three states, but refuge for Bt cotton is the lowest of the three states. Thus, unstructured corn refuge can be valuable for IRM but its influence is determined by selection for resistance by Bt cotton. PMID:26637533
Efficient Hierarchical Quorums in Unstructured Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Henry, Kevin; Swanson, Colleen; Xie, Qi; Daudjee, Khuzaima
Managing updates in a peer-to-peer (P2P) network can be a challenging task, especially in the unstructured setting. If one peer reads or updates a data item, then it is desirable to read the most recent version or to have the update visible to all other peers. In practice, this should be accomplished by coordinating and writing to only a small number of peers. We propose two approaches, inspired by hierarchical quorums, to solve this problem in unstructured P2P networks. Our first proposal provides uniform load balancing, while the second sacrifices full load balancing for larger average quorum intersection, and hence greater tolerance to network churn. We demonstrate that applying a random logical tree structure to peers on a per-data-item basis allows us to achieve near-optimal quorum size, thus minimizing the number of peers that must be coordinated to perform a read or write operation. Unlike previous approaches, our random hierarchical quorums are always guaranteed to overlap in at least one peer when all peers are reachable and, as demonstrated through performance studies, prove more resilient to changing network conditions, maximizing quorum intersection better than previous approaches with a similar quorum size. Furthermore, our two quorum approaches are interchangeable within the same network, providing adaptivity by allowing one to be swapped for the other as network conditions change.
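The intersection guarantee behind hierarchical quorums can be checked exhaustively on a tiny 9-peer, two-level ternary tree: a quorum takes a majority (2 of 3) of the groups and a majority (2 of 3) of the peers inside each chosen group, so by pigeonhole any two quorums share a group and, within it, at least one peer. The fixed layout below is only illustrative; the paper's construction assigns a random logical tree per data item.

```python
import itertools

def quorums(n_leaves=9):
    """All hierarchical quorums over a 2-level ternary tree of 9 peers:
    each quorum is 2-of-3 groups, each contributing 2 of its 3 peers,
    so a quorum has 4 peers instead of a flat majority of 5."""
    groups = [list(range(i, i + 3)) for i in range(0, n_leaves, 3)]
    qs = []
    for picked in itertools.combinations(groups, 2):
        for choice in itertools.product(
                *(itertools.combinations(g, 2) for g in picked)):
            qs.append(frozenset(itertools.chain.from_iterable(choice)))
    return qs

qs = quorums()
sizes = {len(q) for q in qs}
# exhaustive check: every pair of quorums shares at least one peer
all_intersect = all(a & b for a, b in itertools.combinations(qs, 2))
```

The quorum size 4 for 9 peers illustrates the sub-majority scaling (roughly n^0.63) that makes hierarchical quorums attractive when coordination is expensive.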
Information Extraction from Unstructured Text for the Biodefense Knowledge Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samatova, N F; Park, B; Krishnamurthy, R
2005-04-29
The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. Also, we are enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.
Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong
2015-07-01
This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.
Extracting and standardizing medication information in clinical text - the MedEx-UIMA system.
Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C; Xu, Hua
2014-01-01
Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to a standard representation. In this study, we introduced the open-source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which map an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translate drug frequency information to the ISO standard. We processed 826 documents with both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing their results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5%, respectively, for encoding drug names to corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1%, respectively, for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to the ISO standard. The open-source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/.
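A heavily simplified sketch of the extract-then-normalize pipeline: regex extraction of drug/dose/frequency triples followed by lookup-table normalization. The `GENERIC` and `ISO_FREQ` tables are tiny hypothetical stand-ins for RxNorm and an ISO 8601-style frequency encoding; MedEx's real pipeline is far richer than this.

```python
import re

# hypothetical lookup tables; a real system would query RxNorm
GENERIC = {"tylenol": "acetaminophen", "advil": "ibuprofen"}
ISO_FREQ = {"bid": "R2/P1D", "tid": "R3/P1D", "qd": "R1/P1D"}

PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+\s?mg)\s+(?P<freq>bid|tid|qd)",
    re.IGNORECASE)

def extract(text):
    """Extract drug/dose/frequency triples from narrative text and
    normalize each field to a standard representation."""
    out = []
    for m in PATTERN.finditer(text):
        name = m.group("drug").lower()
        out.append({
            "ingredient": GENERIC.get(name, name),   # brand -> generic
            "dose": m.group("dose").replace(" ", ""),
            "frequency": ISO_FREQ[m.group("freq").lower()],
        })
    return out

meds = extract("Start Tylenol 500 mg bid and Advil 200mg tid with food.")
```

The point is the separation the abstract emphasizes: entity identification alone is not enough; the encoding step that maps surface strings to standard concepts is where most of the clinical value lies.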
An assessment of unstructured grid technology for timely CFD analysis
NASA Technical Reports Server (NTRS)
Kinard, Tom A.; Schabowski, Deanne M.
1995-01-01
An assessment of two unstructured methods is presented in this paper. A tetrahedral unstructured method, USM3D, developed at NASA Langley Research Center, is compared to a Cartesian unstructured method, SPLITFLOW, developed at Lockheed Fort Worth Company. USM3D is an upwind finite volume solver that accepts grids generated primarily by the Vgrid grid generator. SPLITFLOW combines an unstructured grid generator with an implicit flow solver in one package. Both methods are exercised on three test cases: a wing, a wing body, and a fully expanded nozzle. The results for the first two runs are included here and compared to the structured grid method TEAM and to available test data. For each test case, the setup procedure is described, including any difficulties that were encountered. Detailed descriptions of the solvers are not included in this paper.
Blouin, Danielle; Day, Andrew G.; Pavlov, Andrey
2011-01-01
Background Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. Methods In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Results Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. Conclusions A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. 
Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains established as essential to succeed in this residency program. PMID:23205201
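As an editorial aside to the reliability figures reported above, here is a minimal sketch of Cronbach's alpha, a standard internal-consistency (inter-item reliability) coefficient of the kind this study compares. The rating data are invented for illustration; the formula itself is standard.

```python
# Cronbach's alpha: rows = candidates, columns = interview items/criteria.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented ratings: 5 candidates scored on 3 items.
ratings = [
    [4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.913
```

High alpha, as with the unstructured panels here, can simply mean the items measure a single dimension; a deliberately multidimensional tool, like the structured interview above, will score lower.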
Multigrid techniques for unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various approaches and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods and the combination of multigrid techniques with adaptive meshing strategies, are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.
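As an editorial illustration of the basic multigrid principle this overview teaches, here is a minimal two-grid correction cycle. The 1-D model problem, Gauss-Seidel smoother, and transfer operators are assumptions of this sketch, not taken from the paper; unstructured-mesh multigrid replaces these geometric transfers with agglomerated or algebraically constructed coarse levels.

```python
# One two-grid cycle for -u'' = f on a uniform 9-point 1-D mesh:
# smooth, restrict the residual, solve the coarse error equation,
# prolong the correction, smooth again.
def gauss_seidel(u, f, h, sweeps):
    for _ in range(sweeps):                     # smoother: damps oscillatory error
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h * h * f[i])
    return u

def residual(u, f, h):
    n = len(u)
    return [0.0] + [f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
                    for i in range(1, n - 1)] + [0.0]

def two_grid(u, f, h):
    u = gauss_seidel(u, f, h, 3)                # pre-smooth
    r = residual(u, f, h)
    m = len(r) // 2                             # full-weighting restriction
    rc = [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                  for i in range(1, m)] + [0.0]
    ec = gauss_seidel([0.0] * (m + 1), rc, 2*h, 50)   # near-exact coarse solve
    for i in range(1, len(u) - 1):              # linear prolongation + correction
        u[i] += ec[i//2] if i % 2 == 0 else 0.5 * (ec[i//2] + ec[i//2 + 1])
    return gauss_seidel(u, f, h, 3)             # post-smooth

u, f = [0.0] * 9, [1.0] * 9
for _ in range(20):                             # cycles on -u'' = 1, u(0)=u(1)=0
    u = two_grid(u, f, 1.0 / 8)
```

The coarse grid removes the smooth error components that the smoother alone reduces only slowly; this complementarity is what gives multigrid its mesh-independent convergence.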
Unstructured Euler flow solutions using hexahedral cell refinement
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1991-01-01
An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using the TIGER (Topologically Independent Grid, Euler Refinement) code as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate to each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases the computational efficiency because the refinement is not required to extend throughout the entire flow field.
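As an editorial illustration of the four-stage Runge-Kutta update mentioned above, here is a sketch on a scalar model equation. The stage coefficients are the standard Jameson-style choice and are an assumption of this sketch, not taken from this paper, which uses a modified version.

```python
import math

# Four-stage low-storage Runge-Kutta pseudo-time update for du/dt = R(u).
# Each stage restarts from the same initial state u0; only the latest
# residual evaluation is kept, which is what makes the scheme low-storage.
def rk4stage(u, dt, R, alphas=(0.25, 1/3, 0.5, 1.0)):
    u0 = u
    for a in alphas:
        u = u0 + a * dt * R(u)
    return u

# Model problem du/dt = -u: one step should reproduce exp(-dt) to 4th order.
u = rk4stage(1.0, 0.1, lambda v: -v)
```

For a linear residual these coefficients reproduce the exponential to fourth order; in an Euler solver, R(u) would be the spatial flux residual of each cell.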
ERIC Educational Resources Information Center
Kwak, Yoonyoung; Lu, Ting; Christ, Sharon L.
2017-01-01
Background: Many adolescents are referred to Child Protective Services for possible maltreatment every year, but not much is known about their organized and unstructured activity participation. Objective: The purposes of this study are to provide a description of organized and unstructured activity participation for adolescents who were possible…
Lerna, Anna; Esposito, Dalila; Conson, Massimiliano; Russo, Luigi; Massagli, Angelo
2012-01-01
The Picture Exchange Communication System (PECS) is a common treatment choice for non-verbal children with autism. However, little empirical evidence is available on the usefulness of PECS in treating social-communication impairments in autism. To test the effects of PECS on social-communicative skills in children with autism, concurrently taking into account standardized psychometric data, standardized functional assessment of adaptive behaviour, and information on social-communicative variables coded in an unstructured setting. Eighteen preschool children (mean age = 38.78 months) were assigned to two intervention approaches, i.e. PECS and Conventional Language Therapy (CLT). Both PECS (Phases I-IV) and CLT were delivered three times per week, in 30-min sessions, for 6 months. Outcome measures were the following: Autism Diagnostic Observation Schedule (ADOS) domain scores for Communication and Reciprocal Social Interaction; Language and Personal-Social subscales of the Griffiths' Mental Developmental Scales (GMDS); Communication and Social Abilities domains of the Vineland Adaptive Behavior Scales (VABS); and several social-communicative variables coded in an unstructured setting. Results demonstrated that the two groups did not differ at Time 1 (pre-treatment assessment), whereas at Time 2 (post-test) the PECS group showed a significant improvement with respect to the CLT group on the VABS social domain score and on almost all the social-communicative abilities coded in the unstructured setting (i.e. joint attention, request, initiation, cooperative play, but not eye contact). These findings showed that PECS intervention (Phases I-IV) can improve social-communicative skills in children with autism. This improvement is especially evident in standardized measures of adaptive behaviour and measures derived from the observation of children in an unstructured setting. © 2012 Royal College of Speech and Language Therapists.
Cube Kohonen self-organizing map (CKSOM) model with new equations in organizing unstructured data.
Lim, Seng Poh; Haron, Habibollah
2013-09-01
Surface reconstruction from 3-D data is used to represent the surface of an object and perform important tasks. The type of data used is important and can be described as either structured or unstructured. For unstructured data, there is no connectivity information between data points. As a result, incorrect shapes will be obtained during the imaging process. Therefore, the data should be reorganized by finding the correct topology so that the correct shape can be obtained. Previous studies have shown that the Kohonen self-organizing map (KSOM) can be used to solve data-organizing problems. However, 2-D Kohonen maps are limited because they are unable to cover the whole surface of closed 3-D surface data. Furthermore, the neurons inside the 3-D KSOM structure should be removed in order to create a correct wireframe model, because only the outside neurons are used to represent the surface of an object. The aim of this paper is to use KSOM to organize unstructured data for closed surfaces. KSOM is tested here on medical image data, since it has mostly been applied to engineering data. Enhancements are added to the model by introducing a class number and an index vector, and new equations are created. Various grid sizes and maximum iterations are tested in the experiments. Based on the results, the number of redundancies is found to be directly proportional to the grid size. When the maximum number of iterations is increased, the surface of the image becomes smoother. An area formula is used and manual calculations are performed to validate the results. This model is implemented, and images are created, using Dev C++ and GNUPlot.
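As an editorial illustration of the classical Kohonen update rule this paper builds on, here is a minimal 1-D SOM organizing 2-D points. The map topology, decay schedules, and data are assumptions of this sketch, not the paper's cube (CKSOM) model or its new equations.

```python
import math, random

# Classical KSOM: find the best-matching unit (BMU), then pull the BMU and
# its map-neighbors toward the input, with a shrinking neighborhood radius.
def train_som(data, n_units, epochs=200, seed=0):
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]  # 1-D chain
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                          # decaying rate
        radius = max(1.0, (n_units / 2) * (1 - t / epochs))  # shrinking radius
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x[0])**2 + (units[i][1] - x[1])**2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i][0] += lr * h * (x[0] - units[i][0])
                units[i][1] += lr * h * (x[1] - units[i][1])
    return units

# Points along the diagonal y = x: the chain organizes itself along them,
# recovering topology that the unstructured point set does not carry.
units = train_som([(i / 9, i / 9) for i in range(10)], n_units=5)
```

The CKSOM extension replaces this 1-D chain with a closed cube of neurons so the map can wrap a closed 3-D surface.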
Patterson, Mark E; Miranda, Derick; Schuman, Greg; Eaton, Christopher; Smith, Andrew; Silver, Brad
2016-01-01
Leveraging "big data" as a means of informing cost-effective care holds potential in triaging high-risk heart failure (HF) patients for interventions within hospitals seeking to reduce 30-day readmissions. To explore providers' beliefs and perceptions about using an electronic health record (EHR)-based tool that uses unstructured clinical notes to risk-stratify high-risk heart failure patients. Six providers from an inpatient HF clinic within an urban safety net hospital were recruited to participate in a semistructured focus group. A facilitator led a discussion on the feasibility and value of using an EHR tool driven by unstructured clinical notes to help identify high-risk patients. Data collected from transcripts were analyzed using a thematic analysis that facilitated drawing conclusions clustered around categories and themes. From six categories emerged two themes: (1) challenges of finding valid and accurate results, and (2) strategies used to overcome these challenges. Although employing a tool that uses electronic medical record (EMR) unstructured text as the benchmark by which to identify high-risk patients is efficient, choosing appropriate benchmark groups could be challenging given the multiple causes of readmission. Strategies to mitigate these challenges include establishing clear selection criteria to guide benchmark group composition, and quality outcome goals for the hospital. Prior to implementing into practice an innovative EMR-based case-finder driven by unstructured clinical notes, providers are advised to do the following: (1) define patient quality outcome goals, (2) establish criteria by which to guide benchmark selection, and (3) verify the tool's validity and reliability. Achieving consensus on these issues would be necessary for this innovative EHR-based tool to effectively improve clinical decision-making and in turn, decrease readmissions for high-risk patients.
A semantic medical multimedia retrieval approach using ontology information hiding.
Guo, Kehua; Zhang, Shigeng
2013-01-01
Searching useful information from unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach which can reflect the users' query intent. Firstly, semantic annotations will be given to the multimedia documents in the medical multimedia database. Secondly, the ontology that represented semantic information will be hidden in the head of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate a good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches.
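As an editorial illustration of the idea of hiding ontology metadata in a multimedia document's head, here is a toy sketch. The container format (magic bytes plus a length-prefixed field) is entirely invented; the paper's actual hiding scheme is not specified in the abstract.

```python
# Toy header scheme: [MAGIC][4-byte big-endian length][ontology bytes][payload].
# A retrieval engine can read the semantics without decoding the media payload.
MAGIC = b"ONTO"

def embed(payload: bytes, ontology: str) -> bytes:
    meta = ontology.encode("utf-8")
    return MAGIC + len(meta).to_bytes(4, "big") + meta + payload

def extract(blob: bytes):
    if not blob.startswith(MAGIC):
        return None, blob                      # no hidden ontology present
    n = int.from_bytes(blob[4:8], "big")
    return blob[8:8 + n].decode("utf-8"), blob[8 + n:]

doc = embed(b"\x89PNG...", "<owl:Class rdf:ID='ChestXRay'/>")
onto, media = extract(doc)
```

Because every media type carries the same header layout, one query pipeline can match annotations across images, audio, and video, which is the cross-type retrieval the paper emphasizes.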
ERIC Educational Resources Information Center
Maistry, Suriamurthee
2010-01-01
Continuing professional development (CPD) initiatives for teachers in South Africa take on various forms, ranging from formalised, structured, credit-bearing certification programmes to informal, relatively unstructured, situated learning programmes. While many formal programmes can claim success by measuring throughput rates, there is still much…
ERIC Educational Resources Information Center
Egan, Rylan G.
2012-01-01
Introduction: The following study investigates relationships between spaced practice (re-studying after a delay) and transfer of learning. Specifically, the impact on learners ability to transfer learning after participating in spaced model-building or unstructured study of narrated text. Method: Subjects were randomly assigned either to a…
Young Male Prostitutes: Their Knowledge of Selected Sexually Transmitted Diseases.
ERIC Educational Resources Information Center
Calhoun, Thomas; Pickerill, Brian
1988-01-01
Conducted unstructured interviews with 18 male street prostitutes between the ages of 13 and 22 to determine the extent of accurate knowledge they possessed concerning four common sexually transmitted diseases. Found that subjects possessed more factual information on gonorrhea and syphilis than on herpes and Acquired Immune Deficiency Syndrome.…
Information retrieval system utilizing wavelet transform
Brewster, Mary E.; Miller, Nancy E.
2000-01-01
A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.
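As an editorial illustration of the document-as-signal idea described above, here is a minimal sketch. The choice of signal (sentence lengths) and the single-level Haar transform are assumptions of this sketch; the patented method's actual signal construction and wavelet are not specified in the abstract.

```python
import math

# One level of the Haar wavelet transform: pairwise averages (trend) and
# pairwise differences (detail). Large-magnitude detail coefficients mark
# abrupt shifts that a sub-topic segmenter could treat as candidate boundaries.
def haar_step(signal):
    avg = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    det = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    return avg, det

# Convert a document to a signal: here, words per sentence.
doc = ("Short one. " * 5 +
       "Now a much longer sentence with many more words than before appears here. " * 5)
lengths = [len(s.split()) for s in doc.split(".") if s.strip()]

avg, det = haar_step(lengths)
boundary = max(range(len(det)), key=lambda i: abs(det[i]))  # sharpest change
```

In this toy document the largest detail coefficient falls on the pair spanning the switch from short to long sentences, i.e. the sub-topic boundary.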
Policy Study of Vocational and Adult Education in Rural Areas.
ERIC Educational Resources Information Center
Mertens, Donna M.
This study of the vocational and adult education system in isolated rural areas was designed to provide information that is necessary for the development of policy for vocational and adult education in isolated rural areas. The study consisted of a review of literature; unstructured interviews with representatives of the business, civic, and…
Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers
ERIC Educational Resources Information Center
Anaya, Leticia H.
2011-01-01
In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…
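As an editorial illustration of the shared starting point of the classifiers this work compares, here is a minimal bag-of-words sketch with cosine nearest-neighbor classification. The corpus and labels are invented; both LSA (via SVD) and LDA (via topic inference) operate on exactly this kind of term-document representation.

```python
import math
from collections import Counter

# Build a term-count vector over a fixed vocabulary.
def vectorize(text, vocab):
    c = Counter(text.split())
    return [c[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Tiny labeled corpus (invented for illustration).
corpus = [("sports", "ball game team win score"),
          ("finance", "stock market price trade fund")]
vocab = sorted({w for _, t in corpus for w in t.split()})

def classify(text):
    v = vectorize(text, vocab)
    return max(corpus, key=lambda lt: cosine(vectorize(lt[1], vocab), v))[0]

print(classify("the team won the game"))  # sports
```

LSA would project these vectors onto a low-rank SVD subspace before measuring similarity, and LDA would compare inferred topic distributions instead; the raw term-document matrix is common to both.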
Mudunuri, Uma S.; Khouja, Mohamad; Repetski, Stephen; Venkataraman, Girish; Che, Anney; Luke, Brian T.; Girard, F. Pascal; Stephens, Robert M.
2013-01-01
As the discipline of biomedical science continues to apply new technologies capable of producing unprecedented volumes of noisy and complex biological data, it has become evident that available methods for deriving meaningful information from such data are simply not keeping pace. In order to achieve useful results, researchers require methods that consolidate, store and query combinations of structured and unstructured data sets efficiently and effectively. As we move towards personalized medicine, the need to combine unstructured data, such as medical literature, with large amounts of highly structured and high-throughput data such as human variation or expression data from very large cohorts, is especially urgent. For our study, we investigated a likely biomedical query using the Hadoop framework. We ran queries using native MapReduce tools we developed as well as other open source and proprietary tools. Our results suggest that the available technologies within the Big Data domain can reduce the time and effort needed to utilize and apply distributed queries over large datasets in practical clinical applications in the life sciences domain. The methodologies and technologies discussed in this paper set the stage for a more detailed evaluation that investigates how various data structures and data models are best mapped to the proper computational framework. PMID:24312478
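As an editorial illustration of the MapReduce pattern underlying the Hadoop queries described above, here is a toy in-process sketch. The gene-mention counting task and the `BRCA` filter are invented stand-ins for the study's actual biomedical query.

```python
from collections import defaultdict

# Map phase: scan each record and emit (key, 1) pairs for matching tokens.
def map_phase(records):
    for rec in records:
        for token in rec.split():
            if token.startswith("BRCA"):   # hypothetical filter
                yield token, 1

# Shuffle + reduce phase: group pairs by key and sum the counts per key.
def reduce_phase(pairs):
    groups = defaultdict(int)
    for key, val in pairs:
        groups[key] += val
    return dict(groups)

abstracts = ["BRCA1 variant observed", "BRCA1 and BRCA2 tested", "no mention"]
print(reduce_phase(map_phase(abstracts)))  # {'BRCA1': 2, 'BRCA2': 1}
```

In Hadoop the same two functions run in parallel across data nodes, with the framework handling the shuffle, fault tolerance, and data locality.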
Tuminaro, Raymond S.; Perego, Mauro; Tezaur, Irina Kalashnikova; ...
2016-10-06
A multigrid method is proposed that combines ideas from matrix dependent multigrid for structured grids and algebraic multigrid for unstructured grids. It targets problems where a three-dimensional mesh can be viewed as an extrusion of a two-dimensional, unstructured mesh in a third dimension. Our motivation comes from the modeling of thin structures via finite elements and, more specifically, the modeling of ice sheets. Extruded meshes are relatively common for thin structures and often give rise to anisotropic problems when the thin direction mesh spacing is much smaller than the broad direction mesh spacing. Within our approach, the first few multigrid hierarchy levels are obtained by applying matrix dependent multigrid to semicoarsen in a structured thin direction fashion. After sufficient structured coarsening, the resulting mesh contains only a single layer corresponding to a two-dimensional, unstructured mesh. Algebraic multigrid can then be employed in a standard manner to create further coarse levels, as the anisotropic phenomenon is no longer present in the single layer problem. The overall approach remains fully algebraic, with the minor exception that some additional information is needed to determine the extruded direction. Furthermore, this facilitates integration of the solver with a variety of different extruded mesh applications.
Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC
NASA Astrophysics Data System (ADS)
Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik
2017-10-01
XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although an obvious scalability issue if the mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh entity centric view of the particle mesh relationship, provides opportunities to address data locality needs of many core and GPU supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.
NASA Astrophysics Data System (ADS)
Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.
2015-12-01
The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.
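As an editorial illustration of estimating a property at arbitrary unstructured-grid nodes from scattered observations, here is a minimal inverse-distance-weighting sketch. The method and the toy porosity values are assumptions of this sketch; the Digital Crust interpolation scheme is not specified in the abstract.

```python
# Inverse-distance weighting: a node's value is a weighted average of
# scattered samples, with weights 1/d^power so nearer samples dominate.
def idw(node, samples, power=2):
    num = den = 0.0
    for (x, y, z), value in samples:
        d2 = (x - node[0])**2 + (y - node[1])**2 + (z - node[2])**2
        if d2 == 0:
            return value                # node coincides with a sample
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Two invented porosity observations along the x-axis.
samples = [((0, 0, 0), 0.10), ((1, 0, 0), 0.30)]
print(idw((0.5, 0, 0), samples))  # midpoint gets equal weights, ≈ 0.2
```

Real crustal interpolation would additionally honor the physical and geological constraints the talk discusses, e.g. restricting averaging across unit boundaries.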
Integrated Bio-Entity Network: A System for Biological Knowledge Discovery
Bell, Lindsey; Chowdhary, Rajesh; Liu, Jun S.; Niu, Xufeng; Zhang, Jinfeng
2011-01-01
A significant part of our biological knowledge is centered on relationships between biological entities (bio-entities) such as proteins, genes, small molecules, pathways, gene ontology (GO) terms and diseases. Accumulated at an increasing speed, the information on bio-entity relationships is archived in different forms at scattered places. Most of such information is buried in scientific literature as unstructured text. Organizing heterogeneous information in a structured form not only facilitates study of biological systems using integrative approaches, but also allows discovery of new knowledge in an automatic and systematic way. In this study, we performed a large scale integration of bio-entity relationship information from both databases containing manually annotated, structured information and automatic information extraction of unstructured text in scientific literature. The relationship information we integrated in this study includes protein–protein interactions, protein/gene regulations, protein–small molecule interactions, protein–GO relationships, protein–pathway relationships, and pathway–disease relationships. The relationship information is organized in a graph data structure, named integrated bio-entity network (IBN), where the vertices are the bio-entities and edges represent their relationships. Under this framework, graph theoretic algorithms can be designed to perform various knowledge discovery tasks. We designed breadth-first search with pruning (BFSP) and most probable path (MPP) algorithms to automatically generate hypotheses—the indirect relationships with high probabilities in the network. We show that IBN can be used to generate plausible hypotheses, which not only help to better understand the complex interactions in biological systems, but also provide guidance for experimental designs. PMID:21738677
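As an editorial illustration of a most-probable-path search over a bio-entity graph in the spirit of the paper's MPP algorithm, here is a sketch. The graph, edge probabilities, and pruning threshold are illustrative assumptions, not the IBN data or the authors' exact algorithm.

```python
import heapq

# Max-product best-first search: the priority queue is ordered by negative
# path probability, and branches below min_prob are pruned, mirroring the
# hypothesis-pruning idea in BFSP/MPP.
def most_probable_path(graph, src, dst, min_prob=1e-3):
    heap = [(-1.0, src, [src])]
    best = {}                                   # best probability seen per node
    while heap:
        neg_p, node, path = heapq.heappop(heap)
        p = -neg_p
        if node == dst:
            return p, path
        if best.get(node, 0.0) >= p:
            continue
        best[node] = p
        for nxt, edge_p in graph.get(node, []):
            q = p * edge_p
            if q >= min_prob and nxt not in path:   # prune weak hypotheses
                heapq.heappush(heap, (-q, nxt, path + [nxt]))
    return 0.0, []

# Invented relationship graph with edge confidences.
graph = {"geneA": [("protP", 0.9), ("pathW", 0.4)],
         "protP": [("pathW", 0.8)],
         "pathW": [("diseaseD", 0.7)]}
p, path = most_probable_path(graph, "geneA", "diseaseD")
# best path multiplies 0.9 * 0.8 * 0.7 = 0.504 via geneA→protP→pathW
```

The returned indirect path is exactly the kind of automatically generated hypothesis the paper proposes to rank for experimental follow-up.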
War Termination: Dreaming of the End and the Ultimate Triumph
2004-05-17
War termination is both political and military, structured and unstructured, art and science. To realize national strategic objectives and develop a triumphant peace, operational commanders must shun the… By applying elements of operational art to war termination and…
Unstructured Grid Generation Techniques and Software
NASA Technical Reports Server (NTRS)
Posenau, Mary-Anne K. (Editor)
1993-01-01
The Workshop on Unstructured Grid Generation Techniques and Software was conducted for NASA to assess its unstructured grid activities, improve the coordination among NASA centers, and promote technology transfer to industry. The proceedings represent contributions from Ames, Langley, and Lewis Research Centers, and the Johnson and Marshall Space Flight Centers. This report is a compilation of the presentations made at the workshop.
Expert Systems and Command, Control, and Communication System Acquisition
1989-03-01
Minnema, James E. …isolated strategic planning, unstructured problems; the author feels that this category should also include problems involving the integration of… distinct operational or management control, and structured or semi-structured problem efforts. The reason for this is that integration of a number of…
2018-01-01
Background Virtual environments (VEs) facilitate interaction and support among individuals with chronic illness, yet the characteristics of these VE interactions remain unknown. Objective The objective of this study was to describe social interaction and support among individuals with type 2 diabetes (T2D) who interacted in a VE. Methods Data included VE-mediated synchronous conversations and text-chat and asynchronous emails and discussion board posts from a study that facilitated interaction among individuals with T2D and diabetes educators (N=24) in 2 types of sessions: education and support. Results VE interactions consisted of communication techniques (how individuals interact in the VE), expressions of self-management (T2D-related topics), depth (personalization of topics), and breadth (number of topics discussed). Individuals exchanged support more often in the education (723/1170, 61.79%) than in the support (406/1170, 34.70%) sessions or outside session time (41/1170, 3.50%). Of all support exchanges, 535/1170 (45.73%) were informational, 377/1170 (32.22%) were emotional, 217/1170 (18.55%) were appraisal, and 41/1170 (3.50%) were instrumental. When comparing session types, education sessions predominately provided informational support (357/723, 49.4%), and the support sessions predominately provided emotional (159/406, 39.2%) and informational (159/406, 39.2%) support. Conclusions VE-mediated interactions resemble those in face-to-face environments, as individuals in VEs engage in bidirectional exchanges with others to obtain self-management education and support. Similar to face-to-face environments, individuals in the VE revealed personal information, sought information, and exchanged support during the moderated education sessions and unstructured support sessions. With this versatility, VEs are able to contribute substantially to support for those with diabetes and, very likely, other chronic diseases. PMID:29467118
Utility and potential of rapid epidemic intelligence from internet-based sources.
Yan, S J; Chughtai, A A; Macintyre, C R
2017-10-01
Rapid epidemic detection is an important objective of surveillance to enable timely intervention, but traditional validated surveillance data may not be available in the required timeframe for acute epidemic control. Increasing volumes of data on the Internet have prompted interest in methods that could use unstructured sources to enhance traditional disease surveillance and gain rapid epidemic intelligence. We aimed to summarise Internet-based methods that use freely accessible, unstructured data for epidemic surveillance and explore their timeliness and accuracy outcomes. Steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist were used to guide a systematic review of research related to the use of informal or unstructured data by Internet-based intelligence methods for surveillance. We identified 84 articles published between 2006 and 2016 relating to Internet-based public health surveillance methods. Studies used search queries, social media posts and approaches derived from existing Internet-based systems for early epidemic alerts and real-time monitoring. Most studies noted improved timeliness compared to official reporting, such as in the 2014 Ebola epidemic, where epidemic alerts were generated first from ProMED-mail. Internet-based methods showed variable correlation strength with official datasets, with some methods showing reasonable accuracy. The proliferation of publicly available information on the Internet provided a new avenue for epidemic intelligence. Methodologies have been developed to collect Internet data, and some systems are already used to enhance the timeliness of traditional surveillance systems. To improve the utility of Internet-based systems, the key attributes of timeliness and data accuracy should be included in future evaluations of surveillance systems. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Pierce, Thomas B., Jr.; And Others
1990-01-01
A survey assessed time spent in the community and/or on unstructured activities by randomly selected individuals in Intermediate Care Facilities for the Mentally Retarded (ICF/MR) (N=20) or minigroup home settings (N=20). Individuals in ICF/MR homes spent more time in the community with staff and made fewer choices of unstructured activities.…
Information security of Smart Factories
NASA Astrophysics Data System (ADS)
Iureva, R. A.; Andreev, Y. S.; Iuvshin, A. M.; Timko, A. S.
2018-05-01
Within a few years, technologies and systems based on the Internet of Things (IoT) will be widely used in all smart factories. When processing a huge array of unstructured data, its filtration and adequate interpretation are a priority for enterprises. In this context, the correct representation of information in a user-friendly form acquires special importance; for this purpose, the market today offers advanced analytical platforms designed to collect, store and analyze data on technological processes and events in real time. The main contribution of this paper is a statement of the information security problem in the IoT and of the integrity of processed information.
Searching for Significance in Unstructured Data: Text Mining with Leximancer
ERIC Educational Resources Information Center
Thomas, David A.
2014-01-01
Scholars in many knowledge domains rely on sophisticated information technologies to search for and retrieve records and publications pertinent to their research interests. But what is a scholar to do when a search identifies hundreds of documents, any of which might be vital or irrelevant to his or her work? The problem is further complicated by…
Clinical Linguistics: Conversational Reflections
ERIC Educational Resources Information Center
Crystal, David
2013-01-01
This is a report of the main points I made in an informal "conversation" with Paul Fletcher and the audience at the 14th ICPLA conference in Cork. The observations arose randomly, as part of an unstructured 1-h Q&A, so they do not provide a systematic account of the subject, but simply reflect the issues which were raised by the conference…
Advanced Natural Language Processing and Temporal Mining for Clinical Discovery
ERIC Educational Resources Information Center
Mehrabi, Saeed
2016-01-01
There has been vast and growing amount of healthcare data especially with the rapid adoption of electronic health records (EHRs) as a result of the HITECH act of 2009. It is estimated that around 80% of the clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered…
VastMM-Tag: Semantic Indexing and Browsing of Videos for E-Learning
ERIC Educational Resources Information Center
Morris, Mitchell J.
2012-01-01
Quickly accessing the contents of a video is challenging for users, particularly for unstructured video, which contains no intentional shot boundaries, no chapters, and no apparent edited format. We approach this problem in the domain of lecture videos though the use of machine learning, to gather semantic information about the videos; and through…
Information retrieval system utilizing wavelet transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewster, M.E.; Miller, N.E.
A method is disclosed for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.
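As a rough illustration of the disclosed idea (not the patented implementation), a document can be reduced to a numeric signal and passed through a one-level Haar wavelet transform; jumps between neighbouring averaged coefficients then suggest sub-topic boundaries. The signal values below are hypothetical.

```python
# Sketch: turn a document into a signal (here, a hypothetical per-paragraph
# count of a topic term) and apply one level of the Haar wavelet transform;
# a large jump between neighbouring averages hints at a sub-topic boundary.

def haar_step(signal):
    """One level of the Haar transform: (averages, details)."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

# Hypothetical signal: topic-term frequency per paragraph; the drop in the
# middle marks a sub-topic shift.
signal = [5, 6, 5, 6, 0, 1, 0, 1]
avgs, dets = haar_step(signal)
jumps = [abs(avgs[i + 1] - avgs[i]) for i in range(len(avgs) - 1)]
boundary = jumps.index(max(jumps))  # index into the averaged signal
print(avgs, boundary)
```

The averaged signal could also be rendered graphically, which is how the patent describes displaying and interacting with the sub-topic structure.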
ERIC Educational Resources Information Center
Smith Risser, H.; Bottoms, SueAnn
2014-01-01
The advent of social networking tools allows teachers to create online networks and share information. While some virtual networks have a formal structure and defined boundaries, many do not. These unstructured virtual networks are difficult to study because they lack defined boundaries and a formal structure governing leadership roles and the…
Ifcwall Reconstruction from Unstructured Point Clouds
NASA Astrophysics Data System (ADS)
Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.
2018-05-01
The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by the intersection of the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the presented method is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the implementation of room information reduces the rate of false positives for the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.
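One building block of such pipelines, sketched here under our own simplifying assumptions (this is not the authors' code), is estimating a wall's plan-view axis from classified wall points by total least squares on their xy-projection.

```python
# Illustrative sketch: estimate a wall's plan-view axis from classified wall
# points by fitting a line through the points' xy-projection using the
# eigen-orientation of the 2x2 covariance matrix (total least squares).

import math

def fit_wall_axis(points_xy):
    """Return (centroid, unit direction) of the best-fit line."""
    n = len(points_xy)
    cx = sum(p[0] for p in points_xy) / n
    cy = sum(p[1] for p in points_xy) / n
    sxx = sum((p[0] - cx) ** 2 for p in points_xy) / n
    syy = sum((p[1] - cy) ** 2 for p in points_xy) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points_xy) / n
    # Orientation of the dominant eigenvector of [[sxx, sxy], [sxy, syy]]:
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

# Hypothetical noisy points along a wall running in the x direction:
pts = [(0.0, 0.01), (1.0, -0.02), (2.0, 0.02), (3.0, -0.01), (4.0, 0.0)]
centroid, direction = fit_wall_axis(pts)
print(centroid, direction)
```

Intersecting such fitted axes, conditioned on room assignments as the paper describes, is what yields the final wall topology.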
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.
New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE[2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.
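The eigenvalue estimation being verified is, at its core, a dominant-eigenvalue problem; the fission-source iteration used by criticality codes is analogous to the power iteration sketched below on a purely illustrative 2x2 matrix (not an MCNP calculation).

```python
# Sketch: power iteration for the dominant eigenvalue of a matrix, the
# deterministic analogue of fission-source iteration for k-effective.
# The matrix here is purely illustrative.

def power_iteration(A, iters=100):
    """Dominant eigenvalue of a square matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # normalisation factor ~ eigenvalue
        v = [x / lam for x in w]
    return lam

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1
print(power_iteration(A))
```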
Dissociative effects of orthographic distinctiveness in pure and mixed lists: an item-order account.
McDaniel, Mark A; Cahill, Michael; Bugg, Julie M; Meadow, Nathaniel G
2011-10-01
We apply the item-order theory of list composition effects in free recall to the orthographic distinctiveness effect. The item-order account assumes that orthographically distinct items advantage item-specific encoding in both mixed and pure lists, but at the expense of exploiting relational information present in the list. Experiment 1 replicated the typical free recall advantage of orthographically distinct items in mixed lists and the elimination of that advantage in pure lists. Supporting the item-order account, recognition performances indicated that orthographically distinct items received greater item-specific encoding than did orthographically common items in mixed and pure lists (Experiments 1 and 2). Furthermore, order memory (input-output correspondence and sequential contiguity effects) was evident in recall of pure unstructured common lists, but not in recall of unstructured distinct lists (Experiment 1). These combined patterns, although not anticipated by prevailing views, are consistent with an item-order account.
2012-03-27
Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures (CCL Report TR-2012-03-03; Grant Number FA9550...). Keywords: pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, detonation-induced structural...
Deschênes, Philippe; Chano, Frédéric; Dionne, Léa-Laurence; Pittet, Didier; Longtin, Yves
2017-08-01
The efficacy of the World Health Organization (WHO)-recommended handwashing technique against Clostridium difficile is uncertain, and whether it could be improved remains unknown. Also, the benefit of using a structured technique instead of an unstructured technique remains unclear. This study was a prospective comparison of 3 techniques (unstructured, WHO, and a novel technique dubbed WHO shortened repeated [WHO-SR] technique) to remove C difficile. Ten participants were enrolled and performed each technique. Hands were contaminated with 3 × 10⁶ colony forming units (CFU) of a nontoxigenic strain containing 90% spores. Efficacy was assessed using the whole-hand method. The relative efficacy of each technique and of a structured (either WHO or WHO-SR) vs an unstructured technique were assessed by Mann-Whitney U test and Wilcoxon signed-rank test. The median effectiveness of the unstructured, WHO, and WHO-SR techniques in log₁₀ CFU reduction was 1.30 (interquartile range [IQR], 1.27-1.43), 1.71 (IQR, 1.34-1.91), and 1.70 (IQR, 1.54-2.42), respectively. The WHO-SR technique was significantly more efficacious than the unstructured technique (P = .01). Washing hands with a structured technique was more effective than washing with an unstructured technique (median, 1.70 vs 1.30 log₁₀ CFU reduction, respectively; P = .007). A structured washing technique is more effective than an unstructured technique against C difficile. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
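The reported comparison can be sketched numerically: log₁₀ CFU reductions for two techniques summarised by their medians and compared with a Mann-Whitney U statistic. The samples below are hypothetical, chosen only to loosely echo the reported medians.

```python
# Sketch: medians of log10 CFU reductions for two washing techniques and the
# Mann-Whitney U statistic comparing them. Data are hypothetical.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mann_whitney_u(a, b):
    """U statistic for sample a versus b (ties counted as 0.5)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

unstructured = [1.25, 1.27, 1.30, 1.35, 1.43]   # hypothetical log10 reductions
structured   = [1.54, 1.65, 1.70, 1.91, 2.42]

print(median(unstructured), median(structured))
print(mann_whitney_u(structured, unstructured))  # 25.0: every pair favours structured
```

In the real study the U statistic would be converted to a p-value; a maximal U (every structured reduction exceeding every unstructured one, as in this toy data) corresponds to the strongest possible separation.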
A Semantic Medical Multimedia Retrieval Approach Using Ontology Information Hiding
Guo, Kehua; Zhang, Shigeng
2013-01-01
Searching useful information from unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach which can reflect the users' query intent. Firstly, semantic annotations will be given to the multimedia documents in the medical multimedia database. Secondly, the ontology that represented semantic information will be hidden in the head of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate a good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches. PMID:24082915
Extracting and standardizing medication information in clinical text – the MedEx-UIMA system
Jiang, Min; Wu, Yonghui; Shah, Anushi; Priyanka, Priyanka; Denny, Joshua C.; Xu, Hua
2014-01-01
Extraction of medication information embedded in clinical text is important for research using electronic health records (EHRs). However, most current medication information extraction systems identify drug and signature entities without mapping them to standard representation. In this study, we introduced the open source Java implementation of MedEx, an existing high-performance medication information extraction system, based on the Unstructured Information Management Architecture (UIMA) framework. In addition, we developed new encoding modules in the MedEx-UIMA system, which mapped an extracted drug name/dose/form to both generalized and specific RxNorm concepts and translated drug frequency information to ISO standard. We processed 826 documents by both systems and verified that MedEx-UIMA and MedEx (the Python version) performed similarly by comparing both results. Using two manually annotated test sets that contained 300 drug entries from medication lists and 300 drug entries from narrative reports, the MedEx-UIMA system achieved F-measures of 98.5% and 97.5% respectively for encoding drug names to corresponding RxNorm generic drug ingredients, and F-measures of 85.4% and 88.1% respectively for mapping drug names/dose/form to the most specific RxNorm concepts. It also achieved an F-measure of 90.4% for normalizing frequency information to ISO standard. The open source MedEx-UIMA system is freely available online at http://code.google.com/p/medex-uima/. PMID:25954575
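The F-measures quoted above are computed in the usual way from precision and recall over extracted entries. A minimal sketch with hypothetical drug entries (the RxNorm codes shown are illustrative, not verified against the RxNorm release):

```python
# Sketch: precision, recall and F1 for extracted (drug, code) entries
# compared against a manually annotated gold standard. Entries are
# hypothetical.

def f_measure(predicted, gold):
    """Precision, recall and F1 for two sets of entries."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = {("lisinopril", "RxNorm:29046"), ("metformin", "RxNorm:6809"),
        ("aspirin", "RxNorm:1191")}
pred = {("lisinopril", "RxNorm:29046"), ("metformin", "RxNorm:6809"),
        ("ibuprofen", "RxNorm:5640")}

print(f_measure(pred, gold))
```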
Opening the black box: a study of the process of NICE guidelines implementation.
Spyridonidis, Dimitrios; Calnan, Michael
2011-10-01
This study informs 'evidence-based' implementation by using an innovative methodology to provide further understanding of the implementation process in the English NHS using two distinctly different NICE clinical guidelines as exemplars. The implementation process was tracked retrospectively and prospectively using a comparative case-study and longitudinal design. 74 unstructured interviews were carried out with 48 key informants (managers and clinicians) between 2007 and 2009. This study has shown that the NICE guidelines implementation process has both planned and emergent components, which was well illustrated by the use of the prospective longitudinal design in this study. The implementation process might be characterised as strategic and planned to begin with but became uncontrolled and subject to negotiation as it moved from the planning phase to adoption in everyday practice. The variations in the implementation process could be best accounted for in terms of differences in the structure and nature of the local organisational context. The latter pointed to the importance of managers as well as clinicians in decision-making about implementation. While national priorities determine the context for implementation the shape of the process is influenced by the interactions between doctors and managers, which influence the way they respond to external policy initiatives such as NICE guidelines. NICE and other national health policy-makers need to recognise that the introduction of planned change 'initiatives' in clinical practice are subject to social and political influences at the micro level as well as the macro level. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
A hybrid structured-unstructured grid method for unsteady turbomachinery flow computations
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A hybrid grid technique for the solution of 2D, unsteady flows is developed. This technique is capable of handling complex, multiple component geometries in relative motion, such as those encountered in turbomachinery. The numerical approach utilizes a mixed structured-unstructured zonal grid topology along with modeling equations and solution methods that are most appropriate in the individual domains, therefore combining the advantages of both structured and unstructured grid techniques.
Exploring Hypersonic, Unstructured-Grid Issues through Structured Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Ali R.; Kleb, Bill
2007-01-01
Pure-tetrahedral unstructured grids have been shown to produce asymmetric heat transfer rates for symmetric problems. Meanwhile, two-dimensional structured grids produce symmetric solutions and, as documented here, introducing a spanwise degree of freedom to these structured grids also yields symmetric solutions. The effects of grid skewness and other perturbations of structured grids are investigated to uncover possible mechanisms behind the unstructured-grid solution asymmetries. By using controlled experiments around a known, good solution, the effects of particular grid pathologies are uncovered. These structured-grid experiments reveal solution degradation similar to that of unstructured grids, especially for heat transfer rates. Non-smooth grids within the boundary layer are also shown to produce large local errors in heat flux but do not affect surface pressures.
de Reilhac, Pia; Plu-Bureau, Geneviève; Serfaty, David; Letombe, Brigitte; Gondry, Jean; Christin-Maitre, Sophie
2016-10-01
Combined oral contraceptives (COCs) are the most widely used contraceptive method in Europe. Paradoxically, rates of unintended pregnancy and abortion are still remarkably high. A lack of knowledge about COCs is often reported to lead to poor adherence, suggesting an unmet need for adequate contraceptive counselling. Our objective was to investigate the impact on the knowledge level of users of a structured approach to deliver contraceptive information for a first COC prescription. The Oral Contraception Project to Optimise Patient Information (CORALIE) is a multicentre, prospective, randomised study conducted in France between March 2009 and January 2013. The intervention involved providing either an 'essential information' checklist or unstructured counselling to new COC users. The outcome measure was a questionnaire that assessed whether the information provided to the new user by the gynaecologist had been correctly understood. One hundred gynaecologists and an expert committee used the Delphi method to develop an 'essential information' checklist, after which 161 gynaecologists were randomised to two groups. Group I (n = 81) used the checklist with 324 new COC users and group II (n = 80) delivered unstructured information to 307 new COC users. The average score for understanding the information delivered during the visit was significantly higher in women in group I than in the women in group II, even after adjustment for age and previous history of pregnancy: 16.48/20 vs 14.27/20 (p < 0.0001). Delivering structured information for a first COC prescription is beneficial for understanding contraception. Our tool could ultimately contribute to increased adherence and should be investigated in a prospective study of long-term outcomes.
MacRae, Jayden; Love, Tom; Baker, Michael G; Dowell, Anthony; Carnachan, Matthew; Stubbe, Maria; McBain, Lynn
2015-10-06
We designed and validated a rule-based expert system to identify influenza like illness (ILI) from routinely recorded general practice clinical narrative to aid a larger retrospective research study into the impact of the 2009 influenza pandemic in New Zealand. Rules were assessed using pattern matching heuristics on routine clinical narrative. The system was trained using data from 623 clinical encounters and validated using a clinical expert as a gold standard against a mutually exclusive set of 901 records. We calculated a 98.2 % specificity and 90.2 % sensitivity across an ILI incidence of 12.4 % measured against clinical expert classification. Peak problem list identification of ILI by clinical coding in any month was 9.2 % of all detected ILI presentations. Our system addressed an unusual problem domain for clinical narrative classification; using notational, unstructured, clinician entered information in a community care setting. It performed well compared with other approaches and domains. It has potential applications in real-time surveillance of disease, and in assisted problem list coding for clinicians. Our system identified ILI presentation with sufficient accuracy for use at a population level in the wider research study. The peak coding of 9.2 % illustrated the need for automated coding of unstructured narrative in our study.
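A minimal sketch of a rule-based classifier of this kind (the patterns and notes are invented, not the authors' rules), together with the sensitivity/specificity calculation used to evaluate it:

```python
# Sketch: regex rules over free-text clinical notes flag influenza-like
# illness (ILI); sensitivity and specificity are then computed against
# expert gold-standard labels. Patterns and notes are hypothetical.

import re

ILI_PATTERNS = [r"\bflu\b", r"influenza", r"fever .{0,20}cough", r"\bILI\b"]

def is_ili(note):
    return any(re.search(p, note, re.IGNORECASE) for p in ILI_PATTERNS)

# Hypothetical notes with expert (gold) labels:
notes = [
    ("fever and dry cough for 3 days", True),
    ("probable influenza, advised rest", True),
    ("ankle sprain playing hockey", False),
    ("repeat script, no new symptoms", False),
]

tp = sum(1 for n, g in notes if g and is_ili(n))
tn = sum(1 for n, g in notes if not g and not is_ili(n))
fn = sum(1 for n, g in notes if g and not is_ili(n))
fp = sum(1 for n, g in notes if not g and is_ili(n))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)
```

On real notational clinician narrative, as the study emphasises, abbreviations and misspellings make the rule set far larger than this toy list.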
NASA Astrophysics Data System (ADS)
Kolkman, M. J.; Kok, M.; van der Veen, A.
The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problems emerges as a promising technique. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretation of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. 
The characteristics of unstructured problems (complexity, uncertainty and disagreement) can be positioned in the framework, as can the communities of knowledge construction and valuation involved in the solution of these problems (core science, applied science, professional consultancy, and “post-normal” science). Mental model maps, this research hypothesises, are suitable to analyse the above aspects of the problem. This hypothesis is tested for the case of the Zwolle storm surge barrier. Analysis can aid integration between disciplines and participation of public stakeholders, and can stimulate learning processes. Mental model mapping is recommended to visualise the use of knowledge, to analyse difficulties in the problem-solving process, and to aid information transfer and communication. Mental model mapping helps scientists to shape their new, post-normal responsibilities in a manner that complies with integrity when dealing with unstructured problems in complex, multifunctional systems.
Stavri, P Zoë; Freeman, Donna J; Burroughs, Catherine M
2003-01-01
This paper focuses on one dimension of personal health information seeking: perception of quality and trustworthiness of information sources. Intensive interviews were conducted using a conversational, unstructured, exploratory interview style. Interviews were conducted at 3 publicly accessible library sites in Arizona, Hawaii and Nevada. Thirty-eight non-experts were interviewed. Three separate and distinct methods used to identify credible health information resources were identified. Consumers may have strong opinions about what they mistrust; use fairly rigorous evaluation protocols; or filter information based on intuition or common sense, eye appeal or an authoritative sounding sponsor or title. Many people use a mix of rational and/or intuitive criteria to assess the health information they use.
ERIC Educational Resources Information Center
Kubinger, Klaus D.; Wiesflecker, Sabine; Steindl, Renate
2008-01-01
An interview guide for children and adolescents, which is based on systemic therapy, has recently been added to the collection of published instruments for psychological interviews. This article aims to establish the amount of information gained during a psychological investigation using the Systemic-based Interview Guide rather than an intuitive,…
Clinical linguistics: conversational reflections.
Crystal, David
2013-04-01
This is a report of the main points I made in an informal "conversation" with Paul Fletcher and the audience at the 14th ICPLA conference in Cork. The observations arose randomly, as part of an unstructured 1-h Q&A, so they do not provide a systematic account of the subject, but simply reflect the issues which were raised by the conference participants during that time.
NASA Astrophysics Data System (ADS)
Kumar, V.; Singh, A.; Sharma, S. P.
2016-12-01
Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy represents the topographical observation surface with lower precision. We have developed a new 2D unstructured-grid-based inversion of magnetic data for models including topography. It consolidates prior parametric information into a deterministic inversion scheme to enhance the boundaries between different lithologies based on the magnetic susceptibility distribution recovered from the inversion. The presented susceptibility model satisfies both the observed magnetic data and the parametric information, and therefore can represent the earth better than geophysical inversion models that only honor the observed magnetic data. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies and connected in a serial way. The presented inversion strategy integrates these two parts into a unified scheme. To reduce storage space and computation time, the conjugate gradient method is used. This makes imaging inversion of magnetic data feasible and practical for large numbers of triangular grid cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field data example.
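The conjugate gradient method mentioned above can be sketched on a small symmetric positive-definite system; the real inversion applies the same iteration to the much larger systems arising from the triangular grid.

```python
# Sketch: conjugate gradient solver for A x = b with A symmetric positive
# definite. The 2x2 system here is purely illustrative.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # solves A x = b
```

The appeal for large triangular-grid problems is that CG needs only matrix-vector products and a few vectors of storage, rather than a factorization of the full system.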
[The daily experience of the patient with an implantable cardioverter defibrillator].
Palacios-Ceña, Domingo; Alonso-Blanco, Cristina; Cachón-Pérez, José Miguel; Alvarez-López, Cristina
2010-01-01
To describe the daily experience of patients with an automatic defibrillator (AD) implant and the adaptive changes of the patient. Qualitative and phenomenological research. Data were collected through: initially, unstructured interviews with half of the informants; then semi-structured interviews based on an open-question guide; and the informants' personal narratives. Data were analysed following Van Manen's proposal. We analysed the interviews of 10 participants. We collected socio-demographic variables and identified the following themes, which respond to the question "How is life with an AD": it is life "with the two sides of the coin"; living in constant waiting and uncertainty; accepting change; developing adaptation strategies; renegotiating relationships and sexuality; and living transformed. The results of this study can be integrated into clinical nursing practice in areas such as assessment after discharge, changes in habits, control of treatment, notification of shocks, masking of symptom detection, and strategies that can jeopardise the wearer. Further research is needed to look more closely into the influence of other technological devices on people. Copyright 2009 Elsevier España, S.L. All rights reserved.
Newspaper archives + text mining = rich sources of historical geo-spatial data
NASA Astrophysics Data System (ADS)
Yzaguirre, A.; Smit, M.; Warren, R.
2016-04-01
Newspaper archives are rich sources of cultural, social, and historical information. These archives, even when digitized, are typically unstructured and organized by date rather than by subject or location, and require substantial manual effort to analyze. The effort of journalists to be accurate and precise means that there is often rich geo-spatial data embedded in the text, alongside text describing events that editors considered to be of sufficient importance to the region or the world to merit column inches. A regional newspaper can add over 100,000 articles to its database each year, and extracting information from this data for even a single country would pose a substantial Big Data challenge. In this paper, we describe a pilot study on the construction of a database of historical flood events (location(s), date, cause, magnitude) to be used in flood assessment projects, for example to calibrate models, estimate frequency, establish high water marks, or plan for future events in contexts ranging from urban planning to climate change adaptation. We then present a vision for extracting and using the rich geospatial data available in unstructured text archives, and suggest future avenues of research.
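The extraction step can be sketched with simple regular expressions that pull a place name and a date out of flood-related sentences; the pattern and article text below are hypothetical and far cruder than a production text-mining pipeline.

```python
# Sketch: pull a candidate flood event (place, date) out of newspaper text
# with a regular expression. Pattern and article are hypothetical.

import re

EVENT_RE = re.compile(
    r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) (?:was |were )?flood(?:ed|ing|s)?"
    r".{0,60}?(?P<date>(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{1,2}, \d{4})",
    re.DOTALL)

article = ("Truro was flooded again when the river burst its banks on "
           "February 10, 1956, closing the rail line for a week.")

m = EVENT_RE.search(article)
if m:
    print(m.group("place"), "|", m.group("date"))
```

At archive scale the matched (place, date) pairs would be geocoded and deduplicated before entering the flood-event database the paper envisions.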
Gandy, Lisa M; Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J
2017-01-01
Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85-100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy to use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases. PMID:28437440
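The label-matching half of a Synthesize-style merge can be sketched with fuzzy string similarity over column headers (the published algorithm also compares column values, which this sketch omits); the headers below are hypothetical.

```python
# Sketch: pair up columns from two spreadsheets by fuzzy similarity of their
# headers, a simplified stand-in for the label-matching part of Synthesize.

from difflib import SequenceMatcher

def match_columns(cols_a, cols_b, threshold=0.6):
    """Greedy pairing of headers by string similarity ratio."""
    pairs = []
    remaining = list(cols_b)
    for a in cols_a:
        scored = [(SequenceMatcher(None, a.lower(), b.lower()).ratio(), b)
                  for b in remaining]
        score, best = max(scored) if scored else (0.0, None)
        if best is not None and score >= threshold:
            pairs.append((a, best))
            remaining.remove(best)
    return pairs

# Hypothetical headers from two independently curated studies:
study1 = ["Species Name", "Salinity (ppt)", "Sample Date"]
study2 = ["species_name", "salinity_ppt", "date_sampled", "site"]

pairs = match_columns(study1, study2)
print(pairs)
```

Comparing the values in each candidate column pair, as the full algorithm does, guards against headers that look alike but hold incompatible data.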
Unstructured mesh generation and adaptivity
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.
1990-01-01
The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator along with linear combinations of the residual equations are used to develop the algorithm. The algorithm was tested for a wide variety of unstructured meshes and found to give satisfactory results.
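For a concrete instance of what the Galerkin approach with linear triangular weight functions yields, the standard local stiffness matrix for the Laplacian on a linear (P1) triangle is K_ij = (b_i b_j + c_i c_j) / (4A). The sketch below computes it; this is the textbook formula offered for illustration, not the paper's specific difference formulae.

```python
import numpy as np

def p1_laplacian_stiffness(vertices):
    """Local stiffness matrix K_ij = (b_i*b_j + c_i*c_j) / (4A) for a P1 triangle."""
    (x1, y1), (x2, y2), (x3, y3) = vertices
    b = np.array([y2 - y3, y3 - y1, y1 - y2])   # x-components of shape-function gradients
    c = np.array([x3 - x2, x1 - x3, x2 - x1])   # y-components of shape-function gradients
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)
```

On the unit right triangle this gives the familiar matrix [[1, -1/2, -1/2], [-1/2, 1/2, 0], [-1/2, 0, 1/2]]; each row sums to zero, as it must for a discrete Laplacian.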
Improving performance with knowledge management
NASA Astrophysics Data System (ADS)
Kim, Sangchul
2018-06-01
People and organizations are often unable to easily locate their experience and knowledge, so meaningful data are usually fragmented, unstructured, not up to date and largely incomplete. Poor knowledge management (KM) leaves a company vulnerable to its knowledge base, or intellectual capital, walking out of the door each year, at a rate estimated at a minimum of 10%. KM can be defined as an emerging set of organizational design and operational principles, processes, organizational structures, applications and technologies that helps knowledge workers dramatically leverage their creativity and ability to deliver business value and ultimately reap a competitive advantage. This paper proposes various methods and software, starting with an understanding of the enterprise aspect, and aims to inspire those who want to use KM.
Hunter, Eric J.
2009-01-01
Objectives Building on the concept that task type may influence fundamental frequency (F0) values, the purpose of this case study was to investigate the difference in a child’s F0 during structured, elicited tasks and long-term, unstructured activities. It also explores the possibility that the distribution in children’s F0 may make the standard statistical measures of mean and standard deviation less than ideal metrics. Methods A healthy male child (5 years, 7 months) was evaluated. The child completed four voice tasks used in a previous study of the influence of task type on F0 values: (1) sustaining the vowel /a/; (2) sustaining the vowel, /a/, embedded in a word at the end of a phrase; (3) repeating a sentence; and (4) counting from 1 to 10. The child also wore a National Center for Voice and Speech voice dosimeter, a device that collects voice data over the course of an entire day, during all activities for 34 hours over 4 days. Results Throughout the structured vocal tasks within the clinical environment, the child’s F0, as measured by both the dosimeter and acoustic analysis of microphone data, was similar for all four tasks, with the counting task the most dissimilar. The mean F0 (~257 Hz) matched very closely to the average task results in the literature given for the child’s age group. However, the child’s mean fundamental frequency during the unstructured activities was significantly higher (~376 Hz). Finally, the mode and median of the structured vocal tasks were respectively 260 Hz and 259 Hz (both near the mean), while the unstructured mode and median were respectively 290 Hz and 355 Hz. Conclusions The results of this study suggest that children may produce a notably different voice pattern during clinical observations compared to routine daily activities. In addition, the child’s long-term F0 distribution is not normal. 
If this distribution is consistent in long-term, unstructured natural vocalization patterns of children, statistical mean would not be a valid measure. Mode and median are suggested as two parameters which convey more accurate information about typical F0 usage. Finally, future research avenues, including further exploration of how children may adapt their F0 to various environments, conversation partners, and activity, are suggested. PMID:19185926
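The statistical point, that a long-tailed F0 distribution drags the mean away from typical usage while the mode and median stay put, can be illustrated with a hypothetical sample (the values below are invented for illustration, not the study's data):

```python
import statistics

# Hypothetical long-tailed F0 sample (Hz): mostly near 260 Hz, with a few
# high-pitched excursions of the kind seen in unstructured daily vocalization.
f0_hz = [250] * 5 + [260] * 10 + [270] * 5 + [600] * 4

mean_f0 = statistics.mean(f0_hz)      # dragged upward by the tail
median_f0 = statistics.median(f0_hz)  # robust to the tail
mode_f0 = statistics.mode(f0_hz)      # most frequent value
```

Here the mean (about 317 Hz) overstates typical usage, while the mode and median (260 Hz) sit where most of the phonation actually occurs.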
Programming secure mobile agents in healthcare environments using role-based permissions.
Georgiadis, C K; Baltatzis, J; Pangalos, G I
2003-01-01
The healthcare environment consists of vast amounts of dynamic and unstructured information, distributed over a large number of information systems. Mobile agent technology is having an ever-growing impact on the delivery of medical information. It supports acquiring and manipulating information distributed in a large number of information systems, and it is suitable for medical staff untrained in computers. But the introduction of mobile agents generates advanced threats to sensitive healthcare information unless the proper countermeasures are taken. By applying the role-based approach to the authorization problem, we ease the sharing of information between hospital information systems and reduce the administrative burden. Different initiatives in the agent's migration method result in different methods of assigning roles to the agent.
Nemesis I: Parallel Enhancements to ExodusII
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennigan, Gary L.; John, Matthew S.; Shadid, John N.
2006-03-28
NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II database, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.
Sojda, Richard S.; Chen, Serena H.; El Sawah, Sondoss; Guillaume, Joseph H.A.; Jakeman, A.J.; Lautenbach, Sven; McIntosh, Brian S.; Rizzoli, A.E.; Seppelt, Ralf; Struss, Peter; Voinov, Alexey; Volk, Martin
2012-01-01
Two of the basic tenets of decision support system efforts are to help identify and structure the decisions to be supported, and to then provide analysis of how those decisions might be best made. One example from wetland management would be that wildlife biologists must decide when to draw down water levels to optimise aquatic invertebrates as food for breeding ducks. Once such a decision is identified, a system or tool to help them make that decision in the face of current and projected climate conditions could be developed. We examined a random sample of 100 papers published from 2001-2011 in Environmental Modelling and Software that used the phrase “decision support system” or “decision support tool”, and which are characteristic of different sectors. In our review, 41% of the systems and tools related to the water resources sector, 34% were related to agriculture, and 22% to the conservation of fish, wildlife, and protected area management. Only 60% of the papers were deemed to be reporting on DSS; the remainder did not directly identify a specific decision to be supported. We also report on the techniques that were used to identify the decisions, such as formal survey, focus group, expert opinion, or sole judgment of the author(s). The primary underlying modelling system, e.g., expert system, agent-based model, Bayesian belief network, geographical information system (GIS), and the like, was categorised next. Finally, since decision support typically should target some aspect of unstructured decisions, we subjectively determined to what degree this was the case. In only 23% of the papers reviewed did the system appear to tackle unstructured decisions. This knowledge should be useful in helping workers in the field develop more effective systems and tools, especially by being exposed to the approaches in different, but related, disciplines. 
We propose that a standard blueprint for reporting on DSS be developed for consideration by journal editors to aid them in filtering papers that use the term “decision support”.
NASA Astrophysics Data System (ADS)
Wendt, Harry; Orchiston, Wayne; Slee, Bruce
During the 1950s Australia was one of the world's foremost astronomical nations owing primarily to the work of the dynamic radio astronomy group within the Commonwealth Scientific and Industrial Research Organisation's Division of Radiophysics. Most of the observations were made at the network of field stations maintained by the Division in or near Sydney, and one of these field stations was Murraybank in the north-western suburbs of Sydney.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Masoudi, Reza; Khayeri, Fereydoon; Rabiei, Leili; Zarea, Kourosh
2017-04-01
This study was done to investigate the experiences of family caregivers of people with multiple sclerosis (MS) about stigmatization in Iranian health care context. Stigmatization has been observed obviously among patients with MS but few studies have been conducted on stigma among the family caregivers of these patients. This qualitative study with thematic analysis was done to explore this issue. Fourteen family caregivers of patients with MS were selected by purposive sampling. The data were collected through in-depth and unstructured interviews. Four main subthemes emerged from the analysis of the transcripts: "feeling shame", "fear of being ridiculed by others", "ignored by family" and "concealing disease to be secure against the perceptions of disease". Healthcare professionals should be encouraged to inform caregivers about social engagement strategies and to train them on the management of stigma as an important factor for the reduction of their social problems. Copyright © 2016 Elsevier Inc. All rights reserved.
Eggins, Suzanne; Slade, Diana
2012-01-01
Clinical handover -- the transfer between clinicians of responsibility and accountability for patients and their care (AMA 2006) -- is a pivotal and high-risk communicative event in hospital practice. Studies focusing on critical incidents, mortality, risk and patient harm in hospitals have highlighted ineffective communication -- including incomplete and unstructured clinical handovers -- as a major contributing factor (NSW Health 2005; ACSQHC 2010). In Australia, as internationally, Health Departments and hospital management have responded by introducing standardised handover communication protocols. This paper problematises one such protocol - the ISBAR tool - and argues that the narrow understanding of communication on which such protocols are based may seriously constrain their ability to shape effective handovers. Based on analysis of audio-recorded shift-change clinical handovers between medical staff we argue that handover communication must be conceptualised as inherently interactive and that attempts to describe, model and teach handover practice must recognise both informational and interactive communication strategies. By comparing the communicative performance of participants in authentic handover events we identify communication strategies that are more and less likely to lead to an effective handover and demonstrate the importance of focusing close up on communication to improve the quality and safety of healthcare interactions.
Masanz, James J; Ogren, Philip V; Zheng, Jiaping; Sohn, Sunghwan; Kipper-Schuler, Karin C; Chute, Christopher G
2010-01-01
We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies—the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text. PMID:20819853
Broekhuis, Femke; Gopalaswamy, Arjun M.
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614
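As a point of orientation, SECR models commonly relate detection probability to the distance between an animal's activity centre and a detector through a half-normal function. A minimal sketch follows; the parameter values are illustrative assumptions, and the paper's Bayesian sex-specific model is considerably richer.

```python
import math

def halfnormal_detection(distance, g0=0.8, sigma=2.0):
    # Detection probability decays with distance from the activity centre.
    # g0 (detection at distance 0) and sigma (spatial scale, e.g. km) are
    # illustrative values, not estimates from the study.
    return g0 * math.exp(-distance ** 2 / (2.0 * sigma ** 2))
```

The spatial-scale parameter sigma plays the role of the movement-range parameter discussed in the abstract, the quantity the study estimated to be roughly four times larger for males than for females.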
NASA Astrophysics Data System (ADS)
Bojowald, Martin
The universe, ultimately, is to be described by quantum theory. Quantum aspects of all there is, including space and time, may not be significant for many purposes, but are crucial for some. And so a quantum description of cosmology is required for a complete and consistent worldview. At any rate, even if we were not directly interested in regimes where quantum cosmology plays a role, a complete physical description could not stop at a stage before the whole universe is reached. Quantum theory is essential in the microphysics of particles, atoms, molecules, solids, white dwarfs and neutron stars. Why should one expect this ladder of scales to end at a certain size? If regimes are sufficiently violent and energetic, quantum effects are non-negligible even on scales of the whole cosmos; this is realized at least once in the history of the universe: at the big bang where the classical theory of general relativity would make energy densities diverge.
Ramanan, S V; Radhakrishna, Kedar; Waghmare, Abijeet; Raj, Tony; Nathan, Senthil P; Sreerama, Sai Madhukar; Sampath, Sriram
2016-08-01
Electronic Health Record (EHR) use in India is generally poor, and structured clinical information is mostly lacking. This work is the first attempt aimed at evaluating unstructured text mining for extracting relevant clinical information from Indian clinical records. We annotated a corpus of 250 discharge summaries from an Intensive Care Unit (ICU) in India, with markups for diseases, procedures, and lab parameters, their attributes, as well as key demographic information and administrative variables such as patient outcomes. In this process, we have constructed guidelines for an annotation scheme useful to clinicians in the Indian context. We evaluated the performance of an NLP engine, Cocoa, on a cohort of these Indian clinical records. We have produced an annotated corpus of roughly 90 thousand words, which to our knowledge is the first tagged clinical corpus from India. Cocoa was evaluated on a test corpus of 50 documents. The overlap F-scores across the major categories, namely disease/symptoms, procedures, laboratory parameters and outcomes, are 0.856, 0.834, 0.961 and 0.872 respectively. These results are competitive with results from recent shared tasks based on US records. The annotated corpus and associated results from the Cocoa engine indicate that unstructured text mining is a viable method for cohort analysis in the Indian clinical context, where structured EHR records are largely absent.
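The overlap F-scores reported here combine precision and recall in the usual way; for reference, a minimal implementation from raw span counts:

```python
def overlap_f_score(tp, fp, fn):
    """F1 from span counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # fraction of extracted spans that are correct
    recall = tp / (tp + fn)      # fraction of gold spans that were found
    return 2 * precision * recall / (precision + recall)
```

For instance, 90 correctly extracted spans with 10 spurious and 20 missed gives F = 0.857, comparable to the disease/symptom score above.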
Surgical Crisis Management Skills Training and Assessment
Moorthy, Krishna; Munz, Yaron; Forrest, Damien; Pandey, Vikas; Undre, Shabnam; Vincent, Charles; Darzi, Ara
2006-01-01
Background: Intraoperative surgical crisis management is learned in an unstructured manner. In aviation, simulation training allows aircrews to coordinate and standardize recovery strategies. Our aim was to develop a surgical crisis simulation and evaluate its feasibility, realism, and validity of the measures used to assess performance. Methods: Surgical trainees were exposed to a bleeding crisis in a simulated operating theater. Assessment of performance consisted of a trainee’s technical ability to control the bleeding and of their team/human factors skills. This assessment was performed in a blinded manner by 2 surgeons and one human factors expert. Other measures consisted of time measures such as time to diagnose the bleeding (TD), inform team members (TT), achieve control (TC), and close the laceration (TL). Blood loss was used as a surrogate outcome measure. Results: There were considerable variations within both senior (n = 10) and junior (n = 10) trainees for technical and team skills. However, while the senior trainees scored higher than the juniors for technical skills (P = 0.001), there were no differences in human factors skills. There were also significant differences between the 2 groups for TD (P = 0.01), TC (P = 0.001), and TL (P = 0.001). The blood loss was higher in the junior group. Conclusions: We have described the development of a novel simulated setting for the training of crisis management skills and the variability in performance both between and within the 2 groups. PMID:16794399
Quantum search of a real unstructured database
NASA Astrophysics Data System (ADS)
Broda, Bogusław
2016-02-01
A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
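For orientation, Grover's search itself can be simulated classically in a few lines. The sketch below iterates the oracle's phase flip and the inversion about the mean over N = 2^n entries; it is a standard statevector simulation, not the paper's proposed circuit or quantumly accessible memory.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Statevector simulation of Grover's search for one marked entry."""
    N = 2 ** n_qubits
    n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # near-optimal iteration count
    s = np.full(N, 1 / np.sqrt(N))    # uniform superposition |s>
    state = s.copy()
    for _ in range(n_iters):
        state[marked] *= -1                      # oracle: phase-flip the marked entry
        state = 2 * s * (s @ state) - state      # diffusion: inversion about the mean
    return int(np.argmax(np.abs(state) ** 2))    # most probable measurement outcome
```

With 3 qubits (a "database" of 8 entries) and index 5 marked, two Grover iterations raise the marked entry's measurement probability to roughly 0.95.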
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pautz, Shawn D.; Bailey, Teresa S.
2016-11-29
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
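The overloading idea, giving each processor several subdomains so a sweep blocked on one piece can make progress on another, can be conveyed schematically. The round-robin assignment below is a hypothetical illustration; the paper's partitioning strategies are more sophisticated.

```python
def overload_partition(n_subdomains, n_procs):
    # Round-robin overloading: each processor owns n_subdomains / n_procs pieces,
    # so sweep pipelines have alternative work while waiting on upstream data.
    return {p: [s for s in range(n_subdomains) if s % n_procs == p]
            for p in range(n_procs)}
```

With 8 subdomains on 2 processors the overload factor is 4: processor 0 owns subdomains 0, 2, 4, 6 and processor 1 owns 1, 3, 5, 7.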
Neural correlates of processing sentences and compound words in Chinese
Hung, Yi-Hui; Tzeng, Ovid; Wu, Denise H.
2017-01-01
Sentence reading involves multiple linguistic operations including processing of lexical and compositional semantics, and determining structural and grammatical relationships among words. Previous studies on Indo-European languages have associated left anterior temporal lobe (aTL) and left inferior frontal gyrus (IFG) with reading sentences compared to reading unstructured word lists. To examine whether these brain regions are also involved in reading a typologically distinct language with limited morphosyntax and lack of agreement between sentential arguments, an fMRI study was conducted to compare passive reading of Chinese sentences, unstructured word lists and disconnected character lists that are created by only changing the order of an identical set of characters. Similar to previous findings from other languages, stronger activation was found in mainly left-lateralized anterior temporal regions (including aTL) for reading sentences compared to unstructured word and character lists. On the other hand, stronger activation was identified in left posterior temporal sulcus for reading unstructured words compared to unstructured characters. Furthermore, reading unstructured word lists compared to sentences evoked stronger activation in left IFG and left inferior parietal lobule. Consistent with the literature on Indo-European languages, the present results suggest that left anterior temporal regions subserve sentence-level integration, while left IFG supports restoration of sentence structure. In addition, left posterior temporal sulcus is associated with morphological compounding. Taken together, reading Chinese sentences engages the same network as reading other languages, with particular reliance on integration of semantic constituents. PMID:29194453
NASA Astrophysics Data System (ADS)
Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga
2018-06-01
This paper presents a nonlinear model predictive control (MPC) formulation for obstacle avoidance in high-speed, large-size autonomous ground vehicles (AGVs) with high centre of gravity (CoG) that operate in unstructured environments, such as military vehicles. The term 'unstructured' in this context denotes that there are no lanes or traffic rules to follow. Existing MPC formulations for passenger vehicles in structured environments do not readily apply to this context. Thus, a new nonlinear MPC formulation is developed to navigate an AGV from its initial position to a target position at high-speed safely. First, a new cost function formulation is used that aims to find the shortest path to the target position, since no reference trajectory exists in unstructured environments. Second, a region partitioning approach is used in conjunction with a multi-phase optimal control formulation to accommodate the complicated forms the obstacle-free region can assume due to the presence of multiple obstacles in the prediction horizon in an unstructured environment. Third, the no-wheel-lift-off condition, which is the major dynamical safety concern for high-speed, high-CoG AGVs, is ensured by limiting the steering angle within a range obtained offline using a 14 degrees-of-freedom vehicle dynamics model. Thus, a safe, high-speed navigation is enabled in an unstructured environment. Simulations of an AGV approaching multiple obstacles are provided to demonstrate the effectiveness of the algorithm.
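The flavour of the formulation, a receding horizon, a shortest-path-to-target cost, and an offline steering limit standing in for the no-wheel-lift-off constraint, can be conveyed by a deliberately simplified sketch. This uses a kinematic bicycle model and a handful of enumerated steering candidates; the paper's nonlinear MPC with region partitioning and a 14-DoF dynamics model is far richer.

```python
import math

WHEELBASE = 2.5  # m, assumed vehicle geometry (illustrative)

def mpc_step(state, target, obstacles, horizon=10, dt=0.1, speed=5.0, max_steer=0.3):
    """Pick the steering angle (rad) minimizing final distance to the target.

    state = (x, y, heading); obstacles are (x, y, radius) circles; max_steer
    stands in for the offline no-wheel-lift-off steering bound.
    """
    best = None
    for steer in (-max_steer, -max_steer / 2, 0.0, max_steer / 2, max_steer):
        x, y, heading = state
        feasible = True
        for _ in range(horizon):  # roll out a kinematic bicycle model
            heading += speed / WHEELBASE * math.tan(steer) * dt
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            if any(math.hypot(x - ox, y - oy) < r for ox, oy, r in obstacles):
                feasible = False  # candidate trajectory enters an obstacle
                break
        if feasible:
            cost = math.hypot(x - target[0], y - target[1])  # shortest-path-style cost
            if best is None or cost < best[0]:
                best = (cost, steer)
    return best[1] if best else None  # None: no safe candidate in this crude sketch
```

With a clear path the controller steers straight toward the target; with an obstacle directly ahead, only the candidates at the steering bound stay collision-free over the horizon, so it commits to one of them.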
Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M.; Chute, Christopher G.; Pathak, Jyotishman
2012-01-01
With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation, and subsequent implementation is a required step for this process. To address this need, the current study investigates open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using Apache Foundation’s Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system. PMID:23304325
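The translation idea, turning declarative phenotype criteria into executable rules, can be illustrated with a small hypothetical criterion compiler. Plain Python predicates stand in for the Drools rules the paper generates, and the criterion format shown is invented for illustration.

```python
def compile_criterion(crit):
    """Compile a nested QDM-like criterion dict into a predicate over records."""
    ops = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b,
           "==": lambda a, b: a == b, "in": lambda a, b: a in b}
    if "all" in crit:  # conjunction of sub-criteria
        preds = [compile_criterion(c) for c in crit["all"]]
        return lambda rec: all(p(rec) for p in preds)
    op = ops[crit["op"]]
    return lambda rec: op(rec[crit["field"]], crit["value"])
```

A criterion such as "diagnosis in {CAD, diabetes} and age >= 18" then becomes a function that can be run directly over patient records, which is the step QDM alone leaves to human interpretation.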
Warehousing Structured and Unstructured Data for Data Mining.
ERIC Educational Resources Information Center
Miller, L. L.; Honavar, Vasant; Barta, Tom
1997-01-01
Describes an extensible object-oriented view system that supports the integration of both structured and unstructured data sources in either the multidatabase or data warehouse environment. Discusses related work and data mining issues. (AEF)
A Graph Based Interface for Representing Volume Visualization Results
NASA Technical Reports Server (NTRS)
Patten, James M.; Ma, Kwan-Liu
1998-01-01
This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.
MedXN: an open source medication extraction and normalization tool for clinical text
Sohn, Sunghwan; Clark, Cheryl; Halgrim, Scott R; Murphy, Sean P; Chute, Christopher G; Liu, Hongfang
2014-01-01
Objective We developed the Medication Extraction and Normalization (MedXN) system to extract comprehensive medication information and normalize it to the most appropriate RxNorm concept unique identifier (RxCUI) as specifically as possible. Methods Medication descriptions in clinical notes were decomposed into medication name and attributes, which were separately extracted using RxNorm dictionary lookup and regular expression. Then, each medication name and its attributes were combined together according to RxNorm convention to find the most appropriate RxNorm representation. To do this, we employed serialized hierarchical steps implemented in Apache's Unstructured Information Management Architecture. We also performed synonym expansion, removed false medications, and employed inference rules to improve the medication extraction and normalization performance. Results An evaluation on test data of 397 medication mentions showed F-measures of 0.975 for medication name and over 0.90 for most attributes. The RxCUI assignment produced F-measures of 0.932 for medication name and 0.864 for full medication information. Most false negative RxCUI assignments in full medication information are due to human assumption of missing attributes and medication names in the gold standard. Conclusions The MedXN system (http://sourceforge.net/projects/ohnlp/files/MedXN/) was able to extract comprehensive medication information with high accuracy and demonstrated good normalization capability to RxCUI as long as explicit evidence existed. More sophisticated inference rules might result in further improvements to specific RxCUI assignments for incomplete medication descriptions. PMID:24637954
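MedXN's decompose-then-recombine strategy (dictionary lookup for the medication name, regular expressions for attributes) can be illustrated with a much-simplified sketch. The drug dictionary, attribute patterns, and example mention below are invented; the real system uses the full RxNorm dictionary and a serialized UIMA pipeline.

```python
import re

# Illustrative sketch (not MedXN itself): split a free-text medication
# mention into a name found by dictionary lookup and attributes found by
# regular expressions. Dictionary and patterns are toy examples.

DRUG_DICT = {"metformin", "lisinopril", "atorvastatin"}

ATTR_PATTERNS = {
    "strength": re.compile(r"(\d+(?:\.\d+)?)\s*(mg|mcg|g)\b", re.I),
    "frequency": re.compile(r"\b(once|twice|three times)\s+daily\b", re.I),
}

def extract_medication(text):
    """Decompose a medication mention into a name and attribute fields."""
    tokens = text.lower().split()
    name = next((t for t in tokens if t in DRUG_DICT), None)
    attrs = {}
    for key, pat in ATTR_PATTERNS.items():
        m = pat.search(text)
        if m:
            attrs[key] = m.group(0).lower()
    return {"name": name, **attrs}

med = extract_medication("Metformin 500 mg twice daily")
```

In MedXN the recombination step then maps the name-plus-attributes bundle onto the closest RxNorm representation to assign an RxCUI; the sketch stops at the decomposition stage.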
Stavri, P. Zoë; Freeman, Donna J.; Burroughs, Catherine M.
2003-01-01
Objectives This paper focuses on one dimension of personal health information seeking: perception of quality and trustworthiness of information sources. Design Intensive interviews were conducted using a conversational, unstructured, exploratory interview style. Setting Interviews were conducted at 3 publicly accessible library sites in Arizona, Hawaii and Nevada. Participants: Thirty-eight non-experts were interviewed. Results Three separate and distinct methods used to identify credible health information resources were identified. Consumers may have strong opinions about what they mistrust; use fairly rigorous evaluation protocols; or filter information based on intuition or common sense, eye appeal or an authoritative sounding sponsor or title. Conclusions Many people use a mix of rational and/or intuitive criteria to assess the health information they use. PMID:14728249
Wilson, Richard A.; Chapman, Wendy W.; DeFries, Shawn J.; Becich, Michael J.; Chapman, Brian E.
2010-01-01
Background: Clinical records are often unstructured, free-text documents that create information extraction challenges and costs. Healthcare delivery and research organizations, such as the National Mesothelioma Virtual Bank, require the aggregation of both structured and unstructured data types. Natural language processing offers techniques for automatically extracting information from unstructured, free-text documents. Methods: Five hundred and eight history and physical reports from mesothelioma patients were split into development (208) and test sets (300). A reference standard was developed and each report was annotated by experts with regard to the patient’s personal history of ancillary cancer and family history of any cancer. The Hx application was developed to process reports, extract relevant features, perform reference resolution and classify them with regard to cancer history. Two methods, Dynamic-Window and ConText, for extracting information were evaluated. Hx’s classification responses using each of the two methods were measured against the reference standard. The average Cohen’s weighted kappa served as the human benchmark in evaluating the system. Results: Hx had a high overall accuracy, with each method, scoring 96.2%. F-measures using the Dynamic-Window and ConText methods were 91.8% and 91.6%, which were comparable to the human benchmark of 92.8%. For the personal history classification, Dynamic-Window scored highest with 89.2% and for the family history classification, ConText scored highest with 97.6%, in which both methods were comparable to the human benchmark of 88.3% and 97.2%, respectively. Conclusion: We evaluated an automated application’s performance in classifying a mesothelioma patient’s personal and family history of cancer from clinical reports. 
To do so, the Hx application must process reports, identify cancer concepts, distinguish the known mesothelioma from ancillary cancers, recognize negation, perform reference resolution and determine the experiencer. Results indicated that both information extraction methods tested were dependent on the domain-specific lexicon and negation extraction. We showed that the more general method, ConText, performed as well as our task-specific method. Although Dynamic-Window could be modified to retrieve other concepts, ConText is more robust and performs better on inconclusive concepts. Hx could greatly improve and expedite the process of extracting data from free-text clinical records for a variety of research or healthcare delivery organizations. PMID:21031012
Sujansky, Walter; Wilson, Tom
2015-04-01
This report describes a grant-funded project to explore the use of DIRECT secure messaging for the electronic delivery of laboratory test results to outpatient physicians and electronic health record systems. The project seeks to leverage the inherent attributes of DIRECT secure messaging and electronic provider directories to overcome certain barriers to the delivery of lab test results in the outpatient setting. The described system enables laboratories that generate test results as HL7 messages to deliver these results as structured or unstructured documents attached to DIRECT secure messages. The system automatically analyzes generated HL7 messages and consults an electronic provider directory to determine the appropriate DIRECT address and delivery format for each indicated recipient. The system also enables lab results delivered to providers as structured attachments to be consumed by HL7 interface engines and incorporated into electronic health record systems. Lab results delivered as unstructured attachments may be printed or incorporated into patient records as PDF files. The system receives and logs acknowledgement messages to document the status of each transmitted lab result, and a graphical interface allows searching and review of this logged information. The described system is a fully implemented prototype that has been tested in a laboratory setting. Although this approach is promising, further work is required to pilot test the system in production settings with clinical laboratories and outpatient provider organizations. Copyright © 2015 Elsevier Inc. All rights reserved.
Tuned grid generation with ICEM CFD
NASA Technical Reports Server (NTRS)
Wulf, Armin; Akdag, Vedat
1995-01-01
ICEM CFD is a CAD based grid generation package that supports multiblock structured, unstructured tetrahedral and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are: a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module for around a set of predefined objects in rectilinear enclosures has been developed. These modules will be presented and the procedures used will be described, and examples will be discussed.
A three-dimensional structured/unstructured hybrid Navier-Stokes method for turbine blade rows
NASA Technical Reports Server (NTRS)
Tsung, F.-L.; Loellbach, J.; Kwon, O.; Hah, C.
1994-01-01
A three-dimensional viscous structured/unstructured hybrid scheme has been developed for numerical computation of high Reynolds number turbomachinery flows. The procedure allows an efficient structured solver to be employed in the densely clustered, high aspect-ratio grid around the viscous regions near solid surfaces, while employing an unstructured solver elsewhere in the flow domain to add flexibility in mesh generation. Test results for an inviscid flow over an external transonic wing and a Navier-Stokes flow for an internal annular cascade are presented.
Application of an unstructured grid flow solver to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Spragle, Gregory S.; Smith, Wayne A.; Yadlin, Yoram
1993-01-01
Rampant, an unstructured flow solver developed at Fluent Inc., is used to compute three-dimensional, viscous, turbulent, compressible flow fields within complex solution domains. Rampant is an explicit, finite-volume flow solver capable of computing flow fields using either triangular (2d) or tetrahedral (3d) unstructured grids. Local time stepping, implicit residual smoothing, and multigrid techniques are used to accelerate the convergence of the explicit scheme. The paper describes the Rampant flow solver and presents flow field solutions about a plane, train, and automobile.
Unstructured grids on SIMD torus machines
NASA Technical Reports Server (NTRS)
Bjorstad, Petter E.; Schreiber, Robert
1994-01-01
Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.
Real-time analysis of healthcare using big data analytics
NASA Astrophysics Data System (ADS)
Basco, J. Antony; Senthilkumar, N. C.
2017-11-01
Big Data Analytics (BDA) provides a tremendous advantage where there is a need for revolutionary performance in handling large amounts of data covering four characteristics: Volume, Velocity, Variety, and Veracity. BDA has the ability to handle such dynamic data, providing functioning effectiveness and exceptionally beneficial output in several day-to-day applications for various organizations. Healthcare is one of the sectors which generates data constantly, covering all four characteristics with outstanding growth. There are several challenges in processing patient records, which involve a variety of structured and unstructured formats. Inducing BDA into Healthcare (HBDA) will deal with sensitive patient-driven information, mostly in unstructured format, comprising prescriptions, reports, data from imaging systems, etc.; these challenges will be overcome by big data with enhanced efficiency in fetching and storing of data. In this project, datasets akin to Electronic Medical Records (EMR) produced from numerous medical devices and mobile applications will be induced into MongoDB using the Hadoop framework with an improvised processing technique to improve the outcome of processing patient records.
Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Pirzadeh, S.
1999-01-01
A complete "geometry to drag-polar" analysis capability for the three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a work-station, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.
On Adding Structure to Unstructured Overlay Networks
NASA Astrophysics Data System (ADS)
Leitão, João; Carvalho, Nuno A.; Pereira, José; Oliveira, Rui; Rodrigues, Luís
Unstructured peer-to-peer overlay networks are very resilient to churn and topology changes, while requiring little maintenance cost. Therefore, they are a suitable infrastructure on which to build highly scalable large-scale services in dynamic networks. Typically, the overlay topology is defined by a peer sampling service that aims at maintaining, in each process, a random partial view of peers in the system. The resulting random unstructured topology is suboptimal when a specific performance metric is considered. On the other hand, structured approaches (for instance, a spanning tree) may optimize a given target performance metric but are highly fragile. In fact, the cost of maintaining structures with strong constraints may easily become prohibitive in highly dynamic networks. This chapter discusses different techniques that aim at combining the advantages of unstructured and structured networks. Namely, we focus on two distinct approaches, one based on optimizing the overlay and another based on optimizing the gossip mechanism itself.
An unstructured grid, three-dimensional model based on the shallow water equations
Casulli, V.; Walters, R.A.
2000-01-01
A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary. Copyright ?? 2000 John Wiley & Sons, Ltd.
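The unstructured orthogonal grid concept introduced above requires the segment joining the centers of two adjacent cells to be orthogonal to their shared face. For triangular cells with circumcenters used as cell centers this holds by construction, because both circumcenters lie on the perpendicular bisector of the shared edge; the following small numerical check, on invented coordinates, verifies that property.

```python
# Numerical check of the orthogonality property behind "unstructured
# orthogonal grids": the segment joining the circumcenters of two
# adjacent triangles is perpendicular to their shared edge.
# The coordinates are an invented toy example.

def circumcenter(a, b, c):
    """Circumcenter of triangle abc (standard determinant formula)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Two triangles sharing the edge p1-p2
p0, p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.2, 1.1)
c1 = circumcenter(p0, p1, p2)
c2 = circumcenter(p1, p3, p2)

# Dot product of the center-to-center segment with the shared edge
edge = (p2[0] - p1[0], p2[1] - p1[1])
link = (c2[0] - c1[0], c2[1] - c1[1])
dot = edge[0] * link[0] + edge[1] * link[1]
```

The model's additional requirement, beyond this geometric identity, is that the discretization places velocity and elevation unknowns so that the semi-implicit gradient along the center-to-center segment is well defined.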
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
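The load-imbalance problem described above can be made concrete with a toy example: compare a naive contiguous split of unevenly weighted mesh cells with a greedy weight-aware assignment. The cell weights are invented, and MPAS-Ocean's actual partitioner is graph-based and communication-aware, so this is only a sketch of the metric involved.

```python
# Toy illustration of partitioning load imbalance: the ratio of the
# most-loaded rank to the average load (1.0 is perfect balance).

def imbalance(parts):
    loads = [sum(p) for p in parts]
    return max(loads) / (sum(loads) / len(loads))

weights = [9, 8, 7, 1, 1, 1, 1, 1, 1, 6]   # invented per-cell work estimates

# Naive: first half of the cells to rank 0, second half to rank 1
naive = [weights[:5], weights[5:]]

# Greedy: always give the next-heaviest cell to the currently lighter rank
greedy = [[], []]
for w in sorted(weights, reverse=True):
    min(greedy, key=sum).append(w)
```

On these weights the naive split leaves one rank with 26 of 36 work units, while the greedy assignment balances both ranks exactly; real partitioners must additionally minimize the edges cut between ranks.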
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
Computing Axisymmetric Jet Screech Tones Using Unstructured Grids
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Loh, Ching Y.
2002-01-01
The space-time conservation element and solution element (CE/SE) method is used to solve the conservation law form of the compressible axisymmetric Navier-Stokes equations. The equations are time marched to predict the unsteady flow and the near-field screech tone noise issuing from an underexpanded circular jet. The CE/SE method uses an unstructured grid based data structure. The unstructured grids for these calculations are generated based on the method of Delaunay triangulation. The purpose of this paper is to show that an acoustics solution with a feedback loop can be obtained using truly unstructured grid technology. Numerical results are presented for two different nozzle geometries. The first is considered to have a thin nozzle lip and the second has a thick nozzle lip. Comparisons with available experimental data are shown for flows corresponding to several different jet Mach numbers. Generally good agreement is obtained in terms of flow physics, screech tone frequency, and sound pressure level.
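The Delaunay triangulation used above for grid generation rests on the empty-circumcircle criterion: no mesh point may lie inside the circumcircle of any triangle. A standard in-circle determinant test, shown here on invented points, is the predicate such generators evaluate.

```python
# Standard in-circle predicate used by Delaunay triangulation: for a
# counterclockwise triangle abc, the determinant below is positive
# exactly when p lies inside the circumcircle of abc.

def in_circumcircle(a, b, c, p):
    """True if p is strictly inside the circumcircle of CCW triangle abc."""
    rows = [(q[0] - p[0], q[1] - p[1]) for q in (a, b, c)]
    m = [(x, y, x * x + y * y) for x, y in rows]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

a, b, c = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)    # CCW triangle, circumcenter (1, 1)
inside = in_circumcircle(a, b, c, (1.0, 1.0))    # at the circumcenter
outside = in_circumcircle(a, b, c, (3.0, 3.0))   # well outside the circle
```

Grid generators repeatedly apply this test (with exact arithmetic in production codes) when inserting points and flipping edges until every triangle satisfies the criterion.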
Quality factors and local adaption (with applications in Eulerian hydrodynamics)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, W.P.
1992-06-17
Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we describe a new micro-unstructuring technique, LAM (Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors, and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.
ERIC Educational Resources Information Center
Bahar, Ismail F. S.
A study of curriculum development in Somalia focused on the role of the National Adult Education Centre (NAEC) and involvement of teachers and inspectors. The sample consisted of 80 Mogadisho primary adult school teachers. Information sources were related literature, teacher questionnaires, and unstructured interviews with school inspectors,…
Analysing News for Stock Market Prediction
NASA Astrophysics Data System (ADS)
Ramalingam, V. V.; Pandian, A.; Dwivedi, shivam; Bhatt, Jigar P.
2018-04-01
Stock market means the aggregation of all sellers and buyers of stocks representing their ownership claims on a business. To be fully confident about investment in these stocks, proper knowledge about them and their pricing, both present and future, is essential. A large amount of data is collected and parsed to obtain this essential information regarding fluctuations in the stock market. This data can be any news or public opinion in general. Recently, many methods, especially big unstructured data methods, have been used to predict stock market values. We introduce another method, focusing on deriving the best statistical learning model for predicting future values. The data set used is very large unstructured data collected from an online social platform commonly known as Quandl. The data from this platform is then linked to a CSV file and cleaned to obtain the essential information for stock market prediction. The method consists of carrying out Natural Language Processing (NLP) of the data to make it easier for the system to understand, and then finding and identifying the correlation between this data and stock market fluctuations. The model is implemented using the Python programming language throughout the entire project to obtain flexibility and convenience.
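As a hedged sketch of the kind of NLP step described above (not the authors' actual model), one can score news text against a small sentiment lexicon before correlating the scores with price moves. The lexicon and headlines below are invented for illustration.

```python
import re

# Toy lexicon-based sentiment scoring of news headlines; a real pipeline
# would use a learned model and then correlate scores with returns.

POSITIVE = {"beats", "growth", "record", "surge"}
NEGATIVE = {"misses", "lawsuit", "recall", "falls"}

def sentiment(text):
    """Count positive minus negative lexicon hits in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Company beats earnings, record growth",
    "Regulator lawsuit, product recall announced",
    "Quarterly report roughly in line",
]
scores = [sentiment(h) for h in headlines]
```

The resulting per-day score series is what a statistical learning model would then regress against subsequent price changes.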
Integrating medical and research information: a big data approach.
Tilve Álvarez, Carlos M; Ayora Pais, Alberto; Ruíz Romero, Cristina; Llamas Gómez, Daniel; Carrajo García, Lino; Blanco García, Francisco J; Vázquez González, Guillermo
2015-01-01
Most of the information collected in different fields by the Instituto de Investigación Biomédica de A Coruña (INIBIC) is classified as unstructured due to its high volume and heterogeneity. This situation, linked to the recent requirement of integrating it with medical information, makes it necessary to implement specific architectures to collect and organize it before it can be analysed. The purpose of this article is to present the Hadoop framework as a solution to the problem of integrating research information in the Business Intelligence field. This framework can collect, explore, process and structure the aforementioned information, which allows us to develop a function equivalent to a data mart in a Business Intelligence system.
Sifting Through Chaos: Extracting Information from Unstructured Legal Opinions.
Oliveira, Bruno Miguel; Guimarães, Rui Vasconcellos; Antunes, Luís; Rodrigues, Pedro Pereira
2018-01-01
Abiding by the law is, in some cases, a delicate balance between the rights of different players. Re-using health records is such a case. While the law grants reuse rights to public administration documents, in which health records produced in public health institutions are included, it also grants privacy to personal records. To safeguard correct usage of data, public hospitals in Portugal employ jurists who are responsible for granting or withholding access rights to health records. To help decision making, these jurists can consult the legal opinions issued by the national committee on public administration document usage. While these legal opinions are of undeniable value, due to their doctrinal contribution, they are only available in a format best suited for printing, forcing individual consultation of each document, with no option whatsoever for clustered search, filtering or indexing, which are standard operations nowadays in a document management system. When having to decide on tens of data requests a day, it becomes unfeasible to consult the hundreds of legal opinions already available. With the objective of creating a modern document management system, we devised an open, platform-agnostic system that extracts and compiles the legal opinions, extracts their contents and produces metadata, allowing fast searching and filtering of said legal opinions.
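The clustered search and filtering the authors call for can be grounded in a simple inverted index over the extracted opinion texts. This sketch uses invented placeholder documents rather than the real legal opinions, and ignores stemming and ranking.

```python
import re
from collections import defaultdict

# Minimal inverted index: map each term to the set of documents
# containing it, then answer conjunctive queries by set intersection.

docs = {
    1: "reuse of health records requires anonymisation",
    2: "access to administrative documents granted with limits",
    3: "health data access withheld pending anonymisation",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in re.findall(r"[a-z]+", text.lower()):
        index[term].add(doc_id)

def search(*terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*sets)) if sets else []
```

A production document management system would add metadata filters (committee, date, outcome) on top of the same index structure.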
Lewinski, Allison A; Anderson, Ruth A; Vorderstrasse, Allison A; Fisher, Edwin B; Pan, Wei; Johnson, Constance M
2018-02-21
Virtual environments (VEs) facilitate interaction and support among individuals with chronic illness, yet the characteristics of these VE interactions remain unknown. The objective of this study was to describe social interaction and support among individuals with type 2 diabetes (T2D) who interacted in a VE. Data included VE-mediated synchronous conversations and text-chat and asynchronous emails and discussion board posts from a study that facilitated interaction among individuals with T2D and diabetes educators (N=24) in 2 types of sessions: education and support. VE interactions consisted of communication techniques (how individuals interact in the VE), expressions of self-management (T2D-related topics), depth (personalization of topics), and breadth (number of topics discussed). Individuals exchanged support more often in the education (723/1170, 61.79%) than in the support (406/1170, 34.70%) sessions or outside session time (41/1170, 3.50%). Of all support exchanges, 535/1170 (45.73%) were informational, 377/1170 (32.22%) were emotional, 217/1170 (18.55%) were appraisal, and 41/1170 (3.50%) were instrumental. When comparing session types, education sessions predominately provided informational support (357/723, 49.4%), and the support sessions predominately provided emotional (159/406, 39.2%) and informational (159/406, 39.2%) support. VE-mediated interactions resemble those in face-to-face environments, as individuals in VEs engage in bidirectional exchanges with others to obtain self-management education and support. Similar to face-to-face environments, individuals in the VE revealed personal information, sought information, and exchanged support during the moderated education sessions and unstructured support sessions. With this versatility, VEs are able to contribute substantially to support for those with diabetes and, very likely, other chronic diseases. 
©Allison A Lewinski, Ruth A Anderson, Allison A Vorderstrasse, Edwin B Fisher, Wei Pan, Constance M Johnson. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 21.02.2018.
On Convergence Acceleration Techniques for Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A discussion of convergence acceleration techniques as they relate to computational fluid dynamics problems on unstructured meshes is given. Rather than providing a detailed description of particular methods, the various different building blocks of current solution techniques are discussed and examples of solution strategies using one or several of these ideas are given. Issues relating to unstructured grid CFD problems are given additional consideration, including suitability of algorithms to current hardware trends, memory and cpu tradeoffs, treatment of non-linearities, and the development of efficient strategies for handling anisotropy-induced stiffness. The outlook for future potential improvements is also discussed.
Wind-US Unstructured Flow Solutions for a Transonic Diffuser
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
The Wind-US Computational Fluid Dynamics flow solver computed flow solutions for a transonic diffusing duct. The calculations used an unstructured (hexahedral) grid. The Spalart-Allmaras turbulence model was used. Static pressures along the upper and lower wall agreed well with experiment, as did velocity profiles. The effect of the smoothing input parameters on convergence and solution accuracy was investigated. The meaning and proper use of these parameters are discussed for the benefit of Wind-US users. Finally, the unstructured solver is compared to the structured solver in terms of run times and solution accuracy.
Jackson, Richard; Kartoglu, Ismail; Stringer, Clive; Gorrell, Genevieve; Roberts, Angus; Song, Xingyi; Wu, Honghan; Agrawal, Asha; Lui, Kenneth; Groza, Tudor; Lewsley, Damian; Northwood, Doug; Folarin, Amos; Stewart, Robert; Dobson, Richard
2018-06-25
Traditional health information systems are generally devised to support clinical data collection at the point of care. However, as the significance of the modern information economy expands in scope and permeates the healthcare domain, there is an increasing urgency for healthcare organisations to offer information systems that address the expectations of clinicians, researchers and the business intelligence community alike. Amongst other emergent requirements, the principal unmet need might be defined as the 3R principle (right data, right place, right time) to address deficiencies in organisational data flow while retaining the strict information governance policies that apply within the UK National Health Service (NHS). Here, we describe our work on creating and deploying a low cost structured and unstructured information retrieval and extraction architecture within King's College Hospital, the management of governance concerns and the associated use cases and cost saving opportunities that such components present. To date, our CogStack architecture has processed over 300 million lines of clinical data, making it available for internal service improvement projects at King's College London. On generated data designed to simulate real world clinical text, our de-identification algorithm achieved up to 94% precision and up to 96% recall. We describe a toolkit which we feel is of huge value to the UK (and beyond) healthcare community. It is the only open source, easily deployable solution designed for the UK healthcare environment, in a landscape populated by expensive proprietary systems. Solutions such as these provide a crucial foundation for the genomic revolution in medicine.
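The de-identification evaluation quoted above (precision and recall on synthetic clinical text) can be illustrated with a minimal rule-based sketch. The patterns, example text, and gold-standard spans below are invented and far simpler than CogStack's actual algorithm; the point is only how character-span predictions are scored against labels.

```python
import re

# Hedged sketch of rule-based de-identification scoring: mask identifier-
# like patterns, then compute precision/recall against labelled spans.
# Patterns and data are synthetic illustrations, not CogStack's rules.

PATTERNS = [re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{4}\b"),  # NHS-number-like
            re.compile(r"\b0\d{10}\b")]                  # UK-phone-like

def deidentify(text):
    """Return sorted character spans matched by any identifier pattern."""
    spans = []
    for pat in PATTERNS:
        for m in pat.finditer(text):
            spans.append(m.span())
    return sorted(spans)

text = "Patient 943 476 5919 called from 07700900123 about results."
predicted = set(deidentify(text))
gold = {(8, 20), (33, 44)}  # hand-labelled character spans in `text`

tp = len(predicted & gold)
precision = tp / len(predicted)
recall = tp / len(gold)
```

On real generated text the paper reports up to 94% precision and 96% recall; the scoring arithmetic is the same, just over many documents.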
Another look at roles and functions: has hospital case management lost its way?
Reynolds, John J
2013-01-01
The purpose of this study was to identify the roles, functions, and types of activities that hospital case managers engage in on a day-to-day basis and that consume the greatest amount of time. Previous studies have superimposed a priori categories on their research tools. The setting was hospital case management. This study analyzes 4,064 spontaneous, unstructured list serve postings from the American Case Management Association Learning Link list serve from August 15, 2011, to August 18, 2012. The study group was a cross section of 415 case management professionals. The data suggest that hospital case managers' time is disproportionately consumed by issues related to observation status/leveling of patients and Centers for Medicare and Medicaid Services compliance. The data also suggest that hospital case management has taken a conceptual trajectory that deviates significantly from what was initially conceived (quality, advocacy, and care coordination) and what is publicly purported. Case management education and practical orientation will need to be commensurate with this emerging emphasis. Case management leadership will need to be adept at mitigating the stresses of role confusion, role conflict, and role ambiguity.
Fine-grained information extraction from German transthoracic echocardiography reports.
Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank
2015-11-12
Information extraction techniques that derive structured representations from unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and documents with uncommon layout.
The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research.
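The micro- and macro-averaged scores reported above differ only in when the averaging happens. A small sketch of the distinction, assuming per-class true-positive, false-positive and false-negative counts (the counts below are illustrative, not the Würzburg data):

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def micro_macro_f1(per_class_counts):
    """per_class_counts: list of (tp, fp, fn) tuples, one per concept class."""
    # Macro: average the per-class F1 scores (each class weighted equally)
    f1s = []
    for tp, fp, fn in per_class_counts:
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(f1(p, r))
    macro = sum(f1s) / len(f1s)
    # Micro: pool the counts over classes first (frequent classes dominate)
    TP = sum(tp for tp, _, _ in per_class_counts)
    FP = sum(fp for _, fp, _ in per_class_counts)
    FN = sum(fn for _, _, fn in per_class_counts)
    micro = f1(TP / (TP + FP), TP / (TP + FN))
    return micro, macro

micro, macro = micro_macro_f1([(9, 1, 0), (1, 0, 9)])
# micro pools counts (2/3 here); macro averages per-class F1 (~0.56)
```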
Willmer, Marian
2007-03-01
This article makes the case for how evidence-based nursing leadership and management activities could promote, implement and sustain quality patient care by student nurses using Information and Communications Technology. It is based on aspects of the findings of a professional doctorate inquiry into Information and Communications Technology use and skills development by student nurses. The 21st century is both an information and a knowledge age. The nursing and medical professions are facing increasing usage of information technology in day-to-day operations with the overall aim of improving the quality of patient care. The quality of the future of the nursing profession is dependent on the calibre of those who are currently socialized to become professional nurses. The new United Kingdom Labour Government, in power since 1997, has placed increasing focus on the effectiveness of the National Health Service and on using computers as one way to assist in achieving greater effectiveness. This has implications for nurse education and the preparation of future nurses to acquire skills in Information and Communications Technology. This is a case study using multiple triangulation methodology. This includes: semi-structured interviews of six student nurses and four of their mentors; one unstructured meeting with the Research and Development Manager; observational visits to a medical admission ward and a renal unit; one semi-structured meeting with the Information Manager; Review of Documentation - the National Health Service Trust Nursing Strategy; and Review, Application and Development of relevant theory. The overall findings are that student nurses are not using Information and Communications Technology in nursing practice in a structured and systematic way. The reasons for this are many and complex, and they are interrelated.
They include strategic resource-based issues, what Jumaa referred to as Time, Human, Equipment, Information, Material and Money resources. These reasons include lack of time for Information and Communications Technology activities by both students and the qualified nurses, and some staff with poor Information and Communications Technology skills. This situation is compounded by insufficient computer hardware; a lack of information about the essence and value of Information and Communications Technology; limited perception of the direct relevance of Information and Communications Technology activities to patient care; software materials not adequate for purpose; and the lack of a comprehensive budget and financial recognition for students' engagement with Information and Communications Technology. 'Smile and the whole world smiles with you'. This old saying has a lot of truth in it. Applied to Information and Communications Technology skills development and use by student nurses, we are confronted with the uncomfortable reality of many qualified nurses who themselves are not comfortable or proficient with the use of Information and Communications Technology. Some do not see the essential need for Information and Communications Technology and its direct relevance to improving patient care, nor is this always supported by the current software and systems. Willmer argued that the achievement of effective implementation of the National Health Service National Programme for Information and Technology requires efficient change management and leading-people skills, and an understanding of National Health Service culture. In this article the case is made that evidence-based management and leadership interventions are a feasible approach for a sustained implementation of Information and Communications Technology use and skills development by student nurses.
Meystre, Stéphane M; Thibault, Julien; Shen, Shuying; Hurdle, John F; South, Brett R
2010-01-01
OBJECTIVE To describe a new medication information extraction system, Textractor, developed for the 'i2b2 medication extraction challenge'. The development, functionalities, and official evaluation of the system are detailed. Textractor is based on the Apache Unstructured Information Management Architecture (UIMA) framework, and uses methods that are a hybrid between machine learning and pattern matching. Two modules in the system are based on machine learning algorithms, while other modules use regular expressions, rules, and dictionaries, and one module embeds MetaMap Transfer. The official evaluation was based on a reference standard of 251 discharge summaries annotated by all teams participating in the challenge. The metrics used were recall, precision, and the F(1)-measure. They were calculated with exact and inexact matches, and were averaged at the level of systems and documents. The reference metric for this challenge, the system-level overall F(1)-measure, reached about 77% for exact matches, with a recall of 72% and a precision of 83%. Performance was best with route information (F(1)-measure about 86%), and was good for dosage and frequency information, with F(1)-measures of about 82-85%. Results were not as good for durations, with F(1)-measures of 36-39%, and for reasons, with F(1)-measures of 24-27%. The official evaluation of Textractor for the i2b2 medication extraction challenge demonstrated satisfactory performance. This system was among the 10 best performing systems in this challenge.
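The challenge metrics combine as the harmonic mean of precision and recall, so the reported recall (~72%) and precision (~83%) are consistent with the ~77% F1. A minimal sketch of the F(beta)-measure:

```python
def f_measure(precision, recall, beta=1.0):
    """F(beta)-measure; beta=1 gives the harmonic mean of P and R."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reported system-level scores: P ~= 0.83, R ~= 0.72  ->  F1 ~= 0.77
round(f_measure(0.83, 0.72), 2)  # -> 0.77
```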
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
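A daily aggregation over unstructured job-report documents of the kind described above can be sketched in a few lines. This is an illustrative stand-in, not the actual WMArchive pipeline, and the field names are assumptions:

```python
from collections import defaultdict

def daily_counts(docs):
    """Group job-report documents by day and count them, mimicking a
    document-store aggregation pipeline (field names are illustrative,
    not the actual WMArchive schema)."""
    out = defaultdict(int)
    for doc in docs:
        day = doc["timestamp"][:10]   # 'YYYY-MM-DD' prefix of an ISO timestamp
        out[day] += 1
    return dict(out)

reports = [
    {"timestamp": "2018-03-19T04:00:00", "site": "T1_US_FNAL"},
    {"timestamp": "2018-03-19T23:59:59", "site": "T2_CH_CERN"},
    {"timestamp": "2018-03-20T00:00:01", "site": "T2_CH_CERN"},
]
# daily_counts(reports) -> {'2018-03-19': 2, '2018-03-20': 1}
```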
Structured and Unstructured Learning.
ERIC Educational Resources Information Center
1996
This document contains four papers presented at a symposium on structured and unstructured learning moderated by Catherine Sleezer at the 1996 conference of the Academy of Human Resource Development (AHRD). "Designing Experiential Learning into Organizational Work Life: Proposing a Framework for Theory and Research" (Cheri Maben-Crouch)…
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas
2015-01-01
© The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
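The load imbalance analyzed above is commonly quantified as the ratio of the maximum per-process load to the mean load. A minimal sketch of that standard metric (not necessarily the exact measure used by the authors):

```python
def load_imbalance(loads):
    """Load imbalance of a mesh partitioning: max part load over mean load.
    1.0 is perfect balance; naive partitions score higher."""
    mean = sum(loads) / len(loads)
    return max(loads) / mean

# Four processes owning 100, 100, 100 and 140 mesh cells
load_imbalance([100, 100, 100, 140])  # -> ~1.27, i.e. ~27% over ideal
```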
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
1999-01-01
An unstructured-grid Navier-Stokes solver was used to predict the surface pressure distribution, the off-body flow field, the surface flow pattern, and integrated lift and drag coefficients on the ROBIN configuration (a generic helicopter) without a rotor at four angles of attack. The results are compared to those predicted by two structured-grid Navier-Stokes solvers and to experimental surface pressure distributions. The surface pressure distributions from the unstructured-grid Navier-Stokes solver are in good agreement with the results from the structured-grid Navier-Stokes solvers. Agreement with the experimental pressure coefficients is good over the forward portion of the body. However, agreement is poor on the lower portion of the mid-section of the body. Comparison of the predicted surface flow patterns showed similar regions of separated flow. Predicted lift and drag coefficients were in fair agreement with each other.
Computing Flows Using Chimera and Unstructured Grids
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Zheng, Yao
2006-01-01
DRAGONFLOW is a computer program that solves the Navier-Stokes equations of flows in complexly shaped three-dimensional regions discretized by use of a direct replacement of arbitrary grid overlapping by nonstructured (DRAGON) grid. A DRAGON grid (see figure) is a combination of a chimera grid (a composite of structured subgrids) and a collection of unstructured subgrids. DRAGONFLOW incorporates modified versions of two prior Navier-Stokes-equation-solving programs: OVERFLOW, which is designed to solve on chimera grids; and USM3D, which is used to solve on unstructured grids. A master module controls the invocation of individual modules in the libraries. At each time step of a simulated flow, OVERFLOW is invoked on the chimera portion of the DRAGON grid in alternation with USM3D, which is invoked on the unstructured subgrids of the DRAGON grid. The USM3D and OVERFLOW modules then immediately exchange their solutions and other data. As a result, USM3D and OVERFLOW are coupled seamlessly.
The design and implementation of a parallel unstructured Euler solver using software primitives
NASA Technical Reports Server (NTRS)
Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.
1992-01-01
This paper is concerned with the implementation of a three-dimensional unstructured grid Euler solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that the combined effect of these optimizations leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.
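An edge-based data structure of the kind adopted above visits each edge once and scatters one flux to both endpoint nodes. A minimal sketch of that loop, with a placeholder flux function standing in for the actual numerical Euler flux:

```python
def accumulate_residuals(n_nodes, edges, flux):
    """Edge-based residual accumulation typical of unstructured solvers:
    each edge (i, j) computes one flux and scatters it to both endpoints.
    `flux(i, j)` is a placeholder for the numerical flux across edge i-j."""
    residual = [0.0] * n_nodes
    for i, j in edges:
        f = flux(i, j)
        residual[i] += f     # flux leaves node i ...
        residual[j] -= f     # ... and enters node j (conservation)
    return residual

# Toy check with a constant unit flux on a 3-node chain 0-1-2
r = accumulate_residuals(3, [(0, 1), (1, 2)], lambda i, j: 1.0)
# r == [1.0, 0.0, -1.0]; the interior node cancels, as conservation requires
```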
Mobile Robot Navigation and Obstacle Avoidance in Unstructured Outdoor Environments
2017-12-01
To pull information from the network, a node subscribes to a specific topic and is able to receive the messages that are published to that topic. The total artificial potential field is characterized "as the sum of an attractive potential pulling the robot toward the goal…and a repulsive potential…". Simulation parameters include: laser_max = 20 (robot laser view horizon); goaldist = 0.5 (distance metric for reaching goal); goali = 1.
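The quoted attractive-plus-repulsive formulation is the classic artificial potential field. A minimal sketch under standard textbook assumptions (the gains and influence radius below are illustrative, not taken from the thesis):

```python
import math

def total_potential(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """Total artificial potential: a quadratic attractive pull toward the
    goal plus a repulsive term near each obstacle (standard textbook form;
    gains k_att, k_rep and influence radius d0 are illustrative)."""
    dg = math.dist(pos, goal)
    u = 0.5 * k_att * dg ** 2                    # attractive potential
    for ob in obstacles:
        d = math.dist(pos, ob)
        if d < d0:                               # repulsion only inside d0
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

# The potential vanishes at the goal and grows near obstacles
total_potential((0.0, 0.0), (0.0, 0.0), [(10.0, 10.0)])  # -> 0.0
```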
2010-12-01
"…systems that do not communicate." Data format standards are an oft-tried interoperability approach to homogenize interfaces between functional, physical…instance—and not the collection sources used to create the warning. Unfortunately, the intelligence community (IC) has yet to widely decouple…
de Boer, Anna W; de Mutsert, Renée; den Heijer, Martin; Rosendaal, Frits R; Jukema, Johan W; Blom, Jeanet W; Numans, Mattijs E
2016-07-01
In contrast to structured, integrated risk assessment in primary care, unstructured risk factor screening outside primary care and corresponding recommendations to consult a general practitioner (GP) are often based on one abnormal value of a single risk factor. This study investigates the advantages and disadvantages of unstructured screening of blood pressure and cholesterol outside primary care. After the baseline visit of the Netherlands Epidemiology of Obesity study (population-based prospective cohort study in persons aged 45-65 years, recruited 2008-2012) all participants received a letter with results of blood pressure and cholesterol, and a recommendation to consult a GP if results were abnormal. Four years after the start of the study, participants received a questionnaire about the follow-up of their results. The study population consisted of 6343 participants, 48% men, mean age 56 years, mean body mass index 30 kg/m². Of all participants 66% had an abnormal result and, of these, 49% had a treatment indication based on the risk estimation system SCORE-NL 2006. Of the 25% of the participants who did not consult a GP, 40% had a treatment indication. Of the participants with an abnormal result 19% were worried, of whom 60% had no treatment indication. In this population 51% of the participants with an abnormal result had unnecessarily received a recommendation to consult a GP, and 10% were unnecessarily worried. GPs should be informed about the complete risk assessment, and only participants at intermediate or high risk should receive a recommendation to consult a GP. © The European Society of Cardiology 2015.
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (e.g., overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
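Late data integration as described trains one model per source and combines their outputs. A deliberately simplified sketch in which the meta-learner is just a weighted average of per-source probabilities; the real system learns the combiner, and all numbers here are illustrative:

```python
def late_integrate(source_probs, weights):
    """Late data integration: each data source's model outputs a probability
    per code; a simple meta-learner combines them as a weighted average
    (a stand-in for the trained meta-learner in the paper)."""
    assert len(source_probs) == len(weights)
    total = sum(weights)
    return sum(p * w for p, w in zip(source_probs, weights)) / total

# Structured data is confident, free text less so; weights could reflect
# each source's validation performance (numbers are illustrative)
p = late_integrate([0.9, 0.6], [2.0, 1.0])   # -> 0.8
assigned = p >= 0.5                          # assign the ICD-9-CM code
```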
Bracketing as a skill in conducting unstructured qualitative interviews.
Sorsa, Minna Anneli; Kiikkala, Irma; Åstedt-Kurki, Päivi
2015-03-01
To provide an overview of bracketing as a skill in unstructured qualitative research interviews. Researchers affect the qualitative research process. Bracketing in descriptive phenomenology entails researchers setting aside their pre-understanding and acting non-judgementally. In interpretative phenomenology, previous knowledge is used intentionally to create new understanding. A literature search of bracketing in phenomenology and qualitative research. This is a methodology paper examining the researchers' impact in creating data in qualitative research. Self-knowledge, sensitivity and reflexivity of the researcher enable bracketing. Skilled and experienced researchers are needed to use bracketing in unstructured qualitative research interviews. Bracketing adds scientific rigour and validity to any qualitative study.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.
1995-01-01
This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.
GeoDataspaces: Simplifying Data Management Tasks with Globus
NASA Astrophysics Data System (ADS)
Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.
2014-12-01
Data and its management are central to the modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source, are, however, lacking. Consequently, geoscientists are left to conduct low-level and time-consuming data management tasks, individually and repeatedly for each data source, often resulting in errors in handling. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace, bootstrapped with "geounits", which are self-contained metadata packages that provide a complete description of all data elements associated with a model run, including input/output and parameter files, the model executable and any associated libraries. Geounits link raw and derived data as well as associating provenance information describing how data was derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting and associating ad hoc and unstructured scientific metadata hidden in binary formats with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts toward creating a scalable metadata catalog that helps to maintain, search and discover geounits within the Globus network of accessible endpoints.
This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism, which can be used to feed into national, international, or thematic discovery portals.
Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei
2014-04-01
Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although natural language processing (NLP) methods have been profoundly studied for electronic medical records (EMR), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.
Automated Data Cleansing in Data Harvesting and Data Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Mark; Vowell, Lance; King, Ian
2011-03-16
In the proposal for this project, we noted how the explosion of digitized information available through corporate databases, data stores and online search systems has resulted in the knowledge worker being bombarded by information. Knowledge workers typically spend more than 20-30% of their time seeking and sorting information, and only find the information 50-60% of the time. This information exists as unstructured, semi-structured and structured data. The problem of information overload is compounded by the production of duplicate or near-duplicate information. In addition, near-duplicate items frequently have different origins, creating a situation in which each item may have unique information of value, but their differences are not significant enough to justify maintaining them as separate entities. Effective tools can be provided to eliminate duplicate and near-duplicate information. The proposed approach was to extract unique information from data sets and consolidate that information into a single comprehensive file.
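Near-duplicate detection of the kind described is often done by comparing word-shingle sets under Jaccard similarity. A minimal sketch; shingling with k = 2 word shingles and the 0.5 threshold are illustrative choices, not necessarily the project's method:

```python
def shingles(text, k=3):
    """Set of k-word shingles of a document (whitespace tokenisation)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets; near-duplicates score high."""
    return len(a & b) / len(a | b) if a | b else 1.0

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox jumps over a lazy dog"
sim = jaccard(shingles(doc1, 2), shingles(doc2, 2))   # -> 0.6
is_near_duplicate = sim > 0.5
```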
Modeling dam-break flows using finite volume method on unstructured grid
USDA-ARS?s Scientific Manuscript database
Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
The 3-D unstructured mesh generation using local transformations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1993-01-01
The topics are presented in viewgraph form and include the following: 3D combinatorial edge swapping; 3D incremental triangulation via local transformations; a new approach to multigrid for unstructured meshes; surface mesh generation using local transforms; volume triangulations; viscous mesh generation; and future directions.
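Combinatorial edge swapping is driven by an incircle predicate: an edge is flipped when the opposite vertex lies inside the circumcircle of the adjacent triangle. A minimal 2-D sketch of that test (the viewgraphs concern the 3-D transformations, which use the analogous insphere predicate):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), with a, b, c in counter-clockwise order: the classic
    incircle predicate that drives Delaunay edge swapping."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0  # positive determinant -> d inside -> swap the edge

# Unit right triangle: its circumcircle is centred at (0.5, 0.5)
in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5))   # -> True
in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0))   # -> False
```

In floating point this predicate can misclassify nearly-degenerate cases; production mesh generators use exact or adaptive-precision arithmetic for it.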
Infrastructure for collaborative science and societal applications in the Columbia River estuary
NASA Astrophysics Data System (ADS)
Baptista, António M.; Seaton, Charles; Wilkin, Michael P.; Riseman, Sarah F.; Needoba, Joseph A.; Maier, David; Turner, Paul J.; Kärnä, Tuomas; Lopez, Jesse E.; Herfort, Lydie; Megler, V. M.; McNeil, Craig; Crump, Byron C.; Peterson, Tawnya D.; Spitz, Yvette H.; Simon, Holly M.
2015-12-01
To meet societal needs, modern estuarine science needs to be interdisciplinary and collaborative, combine discovery with hypotheses testing, and be responsive to issues facing both regional and global stakeholders. Such an approach is best conducted with the benefit of data-rich environments, where information from sensors and models is openly accessible within convenient timeframes. Here, we introduce the operational infrastructure of one such data-rich environment, a collaboratory created to support (a) interdisciplinary research in the Columbia River estuary by the multi-institutional team of investigators of the Science and Technology Center for Coastal Margin Observation & Prediction and (b) the integration of scientific knowledge into regional decision making. Core components of the operational infrastructure are an observation network, a modeling system and a cyber-infrastructure, each of which is described. The observation network is anchored on an extensive array of long-term stations, many of them interdisciplinary, and is complemented by on-demand deployment of temporary stations and mobile platforms, often in coordinated field campaigns. The modeling system is based on finite-element unstructured-grid codes and includes operational and process-oriented simulations of circulation, sediments and ecosystem processes. The flow of information is managed through a dedicated cyber-infrastructure, conversant with regional and national observing systems.
Zhou, Binbin; Hao, Yuanqiang; Wang, Chengshan; Li, Ding; Liu, You-Nian; Zhou, Feimeng
2012-01-01
The intracellular α-synuclein (α-syn) protein, whose conformational change and aggregation have been closely linked to the pathology of Parkinson’s disease (PD), is highly populated at the presynaptic termini and remains there in the α-helical conformation. In this study, circular dichroism confirmed that natively unstructured α-syn in aqueous solution was transformed to its α-helical conformation upon addition of trifluoroethanol (TFE). Electrochemical and UV–visible spectroscopic experiments reveal that both Cu(I) and Cu(II) are stabilized, with the former being stabilized by about two orders of magnitude. Compared to unstructured α-syn (Binolfi et al., J. Am. Chem. Soc. 133 (2011) 194–196), α-helical α-syn stabilizes Cu(I) by more than three orders of magnitude. Through the measurements of H2O2 and hydroxyl radicals (OH•) in solutions containing different forms of Cu(II) (free and complexed by unstructured or α-helical α-syn), we demonstrate that the significantly enhanced Cu(I) binding affinity helps inhibit the production of highly toxic reactive oxygen species, especially the hydroxyl radicals. Our study provides strong evidence that, as a possible means to prevent neuronal cell damage, conversion of the natively unstructured α-syn to its α-helical conformation in vivo could significantly attenuate the copper-modulated ROS production. PMID:23123341
A multidimensional unified gas-kinetic scheme for radiative transfer equations on unstructured mesh
NASA Astrophysics Data System (ADS)
Sun, Wenjun; Jiang, Song; Xu, Kun
2017-12-01
In order to extend the unified gas kinetic scheme (UGKS) to solve radiative transfer equations in a complex geometry, a multidimensional asymptotic preserving implicit method on unstructured mesh is constructed in this paper. With an implicit formulation, the CFL condition for the determination of the time step in UGKS can be much relaxed, and a large time step is used in simulations. Unlike previous direction-by-direction UGKS on orthogonal structured mesh, on unstructured mesh the interface flux transport takes multi-dimensional effects into account, where gradients of radiation intensity and material temperature in both the normal and tangential directions of a cell interface are included in the flux evaluation. The multiple-scale nature enables the UGKS to capture the solutions in both optically thin and thick regions seamlessly. In the optically thick region the condition of the cell size being less than the photon's mean free path is fully removed, and the UGKS recovers a solver for the diffusion equation in such a limit on unstructured mesh. For a distorted quadrilateral mesh, the UGKS goes to a nine-point scheme for the diffusion equation, and it naturally reduces to the standard five-point scheme for an orthogonal quadrilateral mesh. Numerical computations covering a wide range of transport regimes on unstructured and distorted quadrilateral meshes will be presented to validate the current approach.
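In the orthogonal-quadrilateral limit mentioned above, the scheme reduces to the standard five-point stencil for the diffusion operator. A minimal, pure-Python sketch of that stencil (illustrative, not the UGKS implementation):

```python
def five_point_laplacian(u, h):
    """Standard five-point stencil for the Laplacian on an orthogonal
    quadrilateral grid with spacing h (interior nodes only; u is a 2-D
    nested list, boundary entries of the result stay zero)."""
    n, m = len(u), len(u[0])
    lap = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap[i][j] = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                         - 4.0 * u[i][j]) / (h * h)
    return lap

# u(x, y) = x^2 has Laplacian 2 everywhere; the stencil is exact for it
h = 0.5
u = [[(i * h) ** 2 for j in range(4)] for i in range(4)]
lap = five_point_laplacian(u, h)   # interior values are 2.0
```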
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2017-01-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increasing frequency. To provide reliable flood predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain imposes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting-and-drying front while reducing the computational cost, and complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes are placed around buildings and steep regions as the flood water reaches them. In this work, a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results at a low computational cost.
Automated identification of drug and food allergies entered using non-standard terminology.
Epstein, Richard H; St Jacques, Paul; Stockin, Michael; Rothman, Brian; Ehrenfeld, Jesse M; Denny, Joshua C
2013-01-01
An accurate computable representation of food and drug allergy is essential for safe healthcare. Our goal was to develop a high-performance, easily maintained algorithm to identify medication and food allergies and sensitivities from unstructured allergy entries in electronic health record (EHR) systems. An algorithm was developed in Transact-SQL to identify ingredients to which patients had allergies in a perioperative information management system. The algorithm used RxNorm and natural language processing techniques developed on a training set of 24,599 entries from 9,445 records. Accuracy, specificity, precision, recall, and F-measure were determined for the training dataset and repeated for the testing dataset (24,857 entries from 9,430 records). Accuracy, precision, recall, and F-measure for medication allergy matches were all above 98% in the training dataset and above 97% in the testing dataset for all allergy entries. Corresponding values for food allergy matches were above 97% and above 93%, respectively. Specificities of the algorithm were 90.3% and 85.0% for drug matches and 100% and 88.9% for food matches in the training and testing datasets, respectively. The algorithm had high performance for the identification of medication and food allergies. Maintenance is practical, as updates are managed through the upload of new RxNorm versions and additions to companion database tables. However, direct entry of codified allergy information by providers (through autocompleters or drop lists) is still preferred over post-hoc encoding of the data. Data tables used in the algorithm are available for download. A high-performing, easily maintained algorithm can successfully identify medication and food allergies from free-text entries in EHR systems.
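The core idea, normalizing free-text allergy entries and matching tokens against an ingredient vocabulary, can be sketched in a few lines. The following is a hypothetical Python illustration with a toy ingredient table and synonym map, not the authors' Transact-SQL/RxNorm implementation:

```python
import re

# Toy stand-in for an RxNorm-derived ingredient table (hypothetical entries).
INGREDIENTS = {"penicillin", "codeine", "latex", "peanut", "sulfamethoxazole"}
SYNONYMS = {"pcn": "penicillin", "sulfa": "sulfamethoxazole"}

def normalize(entry):
    """Lowercase a free-text allergy entry and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", entry.lower())

def match_allergies(entry):
    """Map a free-text allergy entry to known ingredient concepts."""
    found = set()
    for tok in normalize(entry):
        tok = SYNONYMS.get(tok, tok)          # expand common abbreviations
        for cand in (tok, tok.rstrip("s")):   # crude singularization
            if cand in INGREDIENTS:
                found.add(cand)
    return found

print(match_allergies("PCN - hives; also allergic to Peanuts?"))
```

A production system would draw the ingredient and synonym tables from RxNorm releases, which is what makes the approach easy to maintain.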
Populating the Semantic Web by Macro-reading Internet Text
NASA Astrophysics Data System (ADS)
Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard
A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.
Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827
ERIC Educational Resources Information Center
Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.
2013-01-01
Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…
Marathon Group Therapy with Female Narcotic Addicts.
ERIC Educational Resources Information Center
Kilmann, Peter R.
This study evaluated the impact of structured and unstructured marathon therapy on institutionalized female narcotic addicts. Subjects were randomly assigned to one of five groups: two structured therapy groups, two unstructured therapy groups, and a no-treatment control group. The Personal Orientation Inventory, the Adjective Check List, and a…
Imaging diagnosis--pulmonary metastases in New World camelids.
Gall, David A; Zekas, Lisa J; Van Metre, David; Holt, Timothy
2006-01-01
The radiographic appearance of pulmonary metastatic disease from carcinoma is described in a llama and an alpaca. In one, a diffuse miliary pattern was seen. In the other, a more atypical unstructured interstitial pattern was recognized. Metastatic pulmonary neoplasia in camelids may assume a generalized miliary or unstructured pattern.
Analytics to Better Interpret and Use Large Amounts of Heterogeneous Data
NASA Astrophysics Data System (ADS)
Mathews, T. J.; Baskin, W. E.; Rinsland, P. L.
2014-12-01
Data scientists at NASA's Atmospheric Science Data Center (ASDC) are seasoned software application developers who have worked with the creation, archival, and distribution of large datasets (multiple terabytes and larger). In order for ASDC data scientists to effectively implement the most efficient processes for cataloging and organizing data access applications, they must be intimately familiar with the data contained in the datasets with which they are working. Key technologies that are critical components of the background of ASDC data scientists include: large RDBMSs (relational database management systems) and NoSQL databases; web services; service-oriented architectures; structured and unstructured data access; as well as processing algorithms. However, as prices of data storage and processing decrease, sources of data increase, and technologies advance, giving more people access to data at real or near-real time, data scientists are being pressured to accelerate their ability to identify and analyze vast amounts of data. With existing tools this is becoming increasingly challenging to accomplish. For example, the NASA Earth Science Data and Information System (ESDIS) alone grew from just over 4 PB of data in 2009 to nearly 6 PB in 2011, and then to roughly 10 PB in 2013. With data from at least ten new missions to be added to the ESDIS holdings by 2017, the current volume will continue to grow exponentially and drive the need to analyze more data even faster. Though there are many highly efficient, off-the-shelf analytics tools available, these tools mainly cater to business data, which is predominantly unstructured. In contrast, very few known analytics tools interface well with archived Earth science data, which is predominantly heterogeneous and structured.
This presentation will identify use cases for data analytics from an Earth science perspective in order to begin to identify specific tools that may be able to address those challenges.
Mayo clinic NLP system for patient smoking status identification.
Savova, Guergana K; Ogren, Philip V; Duffy, Patrick H; Buntrock, James D; Chute, Christopher G
2008-01-01
This article describes our system entry for the 2006 I2B2 contest "Challenges in Natural Language Processing for Clinical Data" for the task of identifying the smoking status of patients. Our system makes the simplifying assumption that patient-level smoking status determination can be achieved by accurately classifying individual sentences from a patient's record. We created our system with reusable text analysis components built on the Unstructured Information Management Architecture and Weka. This reuse of code minimized the development effort related specifically to our smoking status classifier. We report precision, recall, F-score, and 95% exact confidence intervals for each metric. Recasting the classification task at the sentence level and reusing code from other text analysis projects allowed us to quickly build a classification system that performs with a system F-score of 92.64 based on held-out data tests and of 85.57 on the formal evaluation data. Our general medical natural language engine is easily adaptable to a real-world medical informatics application. Limitations as applied to this use case include negation detection and temporal resolution.
NASA Astrophysics Data System (ADS)
Garfinkle, Noah W.; Selig, Lucas; Perkins, Timothy K.; Calfas, George W.
2017-05-01
Increasing worldwide internet connectivity and access to sources of print and open social media have increased the near-real-time availability of textual information. Capabilities to structure and integrate textual data streams can contribute to more meaningful representations of operational environment factors (i.e., Political, Military, Economic, Social, Infrastructure, Information, Physical Environment, and Time [PMESII-PT]) and tactical civil considerations (i.e., Areas, Structures, Capabilities, Organizations, People and Events [ASCOPE]). However, relying upon human analysts to encode this information as it arrives quickly proves intractable. While human analysts possess an ability to comprehend context in unstructured text far beyond that of computers, automated geoparsing (the extraction of locations from unstructured text) can empower analysts by automating the sifting of datasets for areas of interest. This research evaluates existing approaches to geoparsing and initiates the research and development of improved methods of tagging parts of text as possible locations, resolving possible locations into coordinates, and interfacing the results with human analysts. The objective of this ongoing research is to develop a more contextually complete picture of an area of interest (AOI), including human-geographic context for events. In particular, our research is working to improve geoparsing (i.e., the extraction of spatial context from documents), which requires the development, integration, and validation of named-entity recognition (NER) tools, gazetteers, and entity attribution.
This paper provides an overview of NER models and methodologies as applied to geoparsing, explores several challenges encountered, presents preliminary results from the creation of a flexible geoparsing research pipeline, and introduces ongoing and future work with the intention of contributing to the efficient geocoding of information containing valuable insights into human activities in space.
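The gazetteer-lookup stage of a geoparsing pipeline, resolving candidate place names into coordinates, can be illustrated minimally as below. The mini-gazetteer and function names are hypothetical simplifications; a real pipeline would first run NER to propose location candidates and would need disambiguation to avoid false positives:

```python
# Hypothetical mini-gazetteer mapping place names to (latitude, longitude).
GAZETTEER = {
    "Glasgow": (55.8642, -4.2518),
    "Edinburgh": (55.9533, -3.1883),
}

def geoparse(text):
    """Return (name, coordinates) for each gazetteer place mentioned in the text."""
    lowered = text.lower()
    return [(name, coords) for name, coords in GAZETTEER.items()
            if name.lower() in lowered]

print(geoparse("Flood waters rose quickly across Glasgow on Tuesday."))
```

Substring matching like this is deliberately naive: it is exactly where NER tools and entity attribution, as discussed in the paper, earn their keep.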
Powers, Kyle T; Washington, M Todd
2018-01-01
Eukaryotic DNA polymerase η catalyzes translesion synthesis of thymine dimers and 8-oxoguanines. It is composed of a polymerase domain and a C-terminal region, both of which are required for its biological function. The C-terminal region mediates interactions with proliferating cell nuclear antigen (PCNA) and other translesion synthesis proteins such as Rev1. This region contains a ubiquitin-binding/zinc-binding (UBZ) motif and a PCNA-interacting protein (PIP) motif. Currently, little structural information is available for this region of polymerase η. Using a combination of approaches, including genetic complementation assays, X-ray crystallography, Langevin dynamics simulations, and small-angle X-ray scattering, we show that the C-terminal region is partially unstructured and has high conformational flexibility. This implies that the C-terminal region acts as a flexible tether linking the polymerase domain to PCNA, thereby increasing its local concentration. Such tethering would facilitate the sampling of translesion synthesis polymerases to ensure that the most appropriate one is selected to bypass the lesion. PMID:29385534
Pantazatos, Spiro P.; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A.
2009-01-01
An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets. PMID:20495688
Wu, Zhenyu; Zou, Ming
2014-10-01
An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult given the highly dynamic characteristics of the data. Existing community detection methods are time-consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
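The locality-sensitive-hashing idea underlying such stream clustering, hashing each user's tag set so that similar sets collide in the same bucket, can be sketched as follows. This is a minimal one-band MinHash illustration with made-up users and tags, not the TASC algorithm itself, which additionally models latent interactions and supports incremental updates:

```python
import hashlib

def minhash(tags, seed):
    """Smallest salted hash over a tag set: a one-permutation MinHash value."""
    return min(hashlib.md5(f"{seed}:{t}".encode()).hexdigest() for t in tags)

def lsh_buckets(users, n_hashes=4):
    """Group users whose tag sets agree on all n_hashes MinHash values."""
    buckets = {}
    for user, tags in users.items():
        key = tuple(minhash(tags, s) for s in range(n_hashes))
        buckets.setdefault(key, []).append(user)
    return list(buckets.values())

users = {
    "ana":  {"python", "ml", "nlp"},
    "ben":  {"python", "ml", "nlp"},
    "carl": {"guitar", "jazz"},
}
print(lsh_buckets(users))
```

Because bucketing is a constant-time hash lookup per user, new tag assignments arriving on a stream can be placed into candidate communities without re-clustering the whole dataset, which is the property that makes LSH attractive here.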
Accelerating Exploitation of Low-grade Intelligence through Semantic Text Processing of Social Media
2013-06-01
importance as an information source. The brevity of social media content (e.g., 140 characters per tweet) combined with the increasing usage of mobile...platform imports unstructured text from a variety of sources and then maps the text to an existing ontology of frames (FrameNet, https...framenet.icsi.berkeley.edu/fndrupal/) during a process of Semantic Role Labeling ( SRL ). FrameNet is a structured language model grounded in the theory of Frame
ERIC Educational Resources Information Center
Torrens, George Edward; Newton, Helen
2013-01-01
This paper provides education-based researchers and practitioners with the preferred research and design methods used by Higher Education Institute (HEI) students and Key Stage 3 (KS3) pupils applied within a participatory approach to a design activity. The outcomes were that both pupils and students found informal (unstructured) interview to be…
Constructing Social Networks from Unstructured Group Dialog in Virtual Worlds
NASA Astrophysics Data System (ADS)
Shah, Fahad; Sukthankar, Gita
Virtual worlds and massively multi-player online games are rich sources of information about large-scale teams and groups, offering the tantalizing possibility of harvesting data about group formation, social networks, and network evolution. However, these environments lack many of the cues that facilitate natural language processing in other conversational settings and different types of social media. Public chat data often features players who speak simultaneously, use jargon and emoticons, and only erratically adhere to conversational norms. In this paper, we present techniques for inferring the existence of social links from unstructured conversational data collected from groups of participants in the Second Life virtual world. We describe an algorithm for this problem, Shallow Semantic Temporal Overlap (SSTO), that combines temporal and language information to create directional links between participants, and a second approach that relies on temporal overlap alone to create undirected links between participants. Relying on temporal overlap is noisy, resulting in low precision and networks with many extraneous links. We demonstrate that this problem can be ameliorated by using network modularity optimization to perform community detection in the noisy networks and severing cross-community links. Although using the content of the communications still yields the best performance, community detection is effective as a noise-reduction technique for eliminating the extra links created by temporal overlap alone.
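The simpler of the two approaches, linking any two participants whose messages occur close together in time, can be sketched directly. This is a minimal illustration with hypothetical avatar names and an assumed time window; the paper's SSTO method additionally uses language content and assigns link direction:

```python
from itertools import combinations

def temporal_links(messages, window=30.0):
    """Create undirected links between speakers whose messages fall
    within `window` seconds of each other (temporal-overlap heuristic)."""
    links = set()
    for (t1, s1), (t2, s2) in combinations(messages, 2):
        if s1 != s2 and abs(t1 - t2) <= window:
            links.add(frozenset((s1, s2)))
    return links

# Timestamped chat lines: (seconds, speaker). Names are invented.
chat = [(0.0, "avatarA"), (12.0, "avatarB"), (300.0, "avatarC")]
print(temporal_links(chat))
```

In a busy public channel this heuristic links many bystanders who merely spoke at the same time, which is why the paper prunes the resulting network with modularity-based community detection.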
Teaching Social Media Analytics: An Assessment Based on Natural Disaster Postings
ERIC Educational Resources Information Center
Goh, Tiong T.; Sun, Pei-Chen
2015-01-01
Unstructured data in social media is part of the "big data" spectrum. Unstructured data in social media can provide useful insights into social phenomena and citizen opinions, both of which are critical to government policy and business decisions. Teachers of business intelligence and analytics commonly use quantitative data from…
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Moin, Parviz
2016-01-01
This paper focuses on numerical and practical aspects associated with a parallel implementation of a two-layer zonal wall model for large-eddy simulation (LES) of compressible wall-bounded turbulent flows on unstructured meshes. A zonal wall model based on the solution of unsteady three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations on a separate near-wall grid is implemented in an unstructured, cell-centered finite-volume LES solver. The main challenge in its implementation is to couple two parallel, unstructured flow solvers for efficient boundary data communication and simultaneous time integration. A coupling strategy with good load balancing and minimal processor underutilization is identified. Face mapping and interpolation procedures at the coupling interface are explained in detail. The method of manufactured solutions is used to verify the correct implementation of the solver coupling, and the parallel performance of the combined wall-modeled LES (WMLES) solver is investigated. The method has successfully been applied to several attached and separated flows, including a transitional flow over a flat plate and a separated flow over an airfoil at an angle of attack.
Investigation of advancing front method for generating unstructured grid
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1992-01-01
The advancing front technique is used to generate an unstructured grid about simple aerodynamic geometries. Unstructured grids are generated using VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a bi-plane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. A structured grid is generated using GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained on the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a finer grid distribution was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution, but when the total numbers of points needed to obtain the same results were compared, the structured grid required more grid points.
Zimmerman, Gregory M; Messner, Steven F; Rees, Carter
2014-07-01
Secondary exposure to community violence, defined as witnessing or hearing violence in the community, has the potential to profoundly impact long-term development, health, happiness, and security. While research has explored pathways to community violence exposure at the individual, family, and neighborhood levels, prior work has largely neglected situational factors conducive to secondary violence exposure. The present study evaluates "unstructured socializing with peers in the absence of authority figures" as a situational process that has implications for secondary exposure to violence. Results indicate that a measure of unstructured socializing was significantly associated with exposure to violence, net of an array of theoretically relevant covariates of violence exposure. Moreover, the relationships between exposure to violence and three of the most well-established correlates of violence exposure in the literature (age, male sex, and prior violence) were mediated to varying degrees by unstructured socializing. The results suggest a more nuanced approach to the study of secondary violence exposure that expands the focus of attention beyond individual and neighborhood background factors to include situational opportunities presented by patterns of everyday activities. © The Author(s) 2013.
Pappu, J Sharon Mano; Gummadi, Sathyanarayana N
2016-11-01
This study examines the use of an unstructured kinetic model and artificial neural networks as predictive tools for xylitol production by Debaryomyces nepalensis NCYC 3413 in a bioreactor. An unstructured kinetic model was proposed to assess the influence of pH (4, 5 and 6), temperature (25°C, 30°C and 35°C) and the volumetric oxygen transfer coefficient kLa (0.14 h⁻¹, 0.28 h⁻¹ and 0.56 h⁻¹) on growth and xylitol production. A feed-forward back-propagation artificial neural network (ANN) was developed to investigate the effect of process conditions on xylitol production. An ANN configuration of 6-10-3 layers was selected and trained with 339 experimental data points from bioreactor studies. Results showed that the simulation and prediction accuracy of the ANN was appreciably higher than that of the unstructured mechanistic model under varying operational conditions. The ANN was found to be an efficient data-driven tool for predicting the optimal harvest time in xylitol production. Copyright © 2016 Elsevier Ltd. All rights reserved.
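The 6-10-3 architecture described above (six process inputs feeding ten hidden units and three outputs) is straightforward to sketch. The snippet below shows only the forward pass with random placeholder weights; the input and output variable assignments are assumptions for illustration, and a real model would be trained by back-propagation against the bioreactor data as in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 6-10-3 feed-forward network: 6 process inputs (e.g., pH,
# temperature, kLa, time, and two others), 10 hidden units, 3 outputs
# (e.g., biomass, xylitol and substrate concentrations). Weights here are
# untrained random placeholders.
W1, b1 = rng.normal(size=(6, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 3)), np.zeros(3)

def forward(x):
    """One forward pass: tanh hidden layer, linear output layer."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = np.array([5.0, 30.0, 0.28, 24.0, 1.0, 0.5])   # illustrative inputs
print(forward(x).shape)   # (3,)
```

Training such a network amounts to fitting W1, b1, W2, b2 by gradient descent on the squared error between predicted and measured concentrations.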
Pearce, Matthew; Saunders, David H; Allison, Peter; Turner, Anthony P
2018-01-01
The distribution of adolescent moderate to vigorous physical activity (MVPA) across multiple contexts is unclear. This study examined indoor and outdoor leisure time in terms of being structured or unstructured and explored relationships with total daily MVPA. Between September 2012 and January 2014, 70 participants (aged 11-13 y) from 4 schools in Edinburgh wore an accelerometer and global positioning system receiver over 7 days, reporting structured physical activity using a diary. Time spent and MVPA were summarized according to indoor/outdoor location and whether activity was structured/unstructured. Independent associations between context-specific time spent and total daily MVPA were examined using a multivariate linear regression model. Very little time or MVPA was recorded in structured contexts. Unstructured outdoor leisure time was associated with an increase in total daily MVPA almost twice that of unstructured indoor leisure time [b value (95% confidence interval), 8.45 (1.71 to 14.48) vs 4.38 (0.20 to 8.22) minute increase per hour spent]. The association was stronger for time spent in structured outdoor leisure time [35.81 (20.60 to 52.27)]. Research and interventions should focus on strategies to facilitate time outdoors during unstructured leisure time and maximize MVPA once youth are outdoors. Increasing the proportion of youth engaging in structured activity may be beneficial given that, although time spent was limited, association with MVPA was strongest.
Description of the F-16XL Geometry and Computational Grids Used in CAWAPI
NASA Technical Reports Server (NTRS)
Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.
2009-01-01
The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated jointly by the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids have been generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI will be discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.
Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids
NASA Astrophysics Data System (ADS)
Sezer-Uzol, Nilay
In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite-volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving-grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which aim to provide new tools and insights to the aerospace and wind energy scientific communities, are carried out during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.
Rotor Airloads Prediction Using Unstructured Meshes and Loose CFD/CSD Coupling
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Lee-Rausch, Elizabeth M.
2008-01-01
The FUN3D unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been modified to allow prediction of trimmed rotorcraft airloads. The trim of the rotorcraft and the aeroelastic deformation of the rotor blades are accounted for via loose coupling with the CAMRAD II rotorcraft computational structural dynamics code. The set of codes is used to analyze the HART-II Baseline, Minimum Noise and Minimum Vibration test conditions. The loose coupling approach is found to be stable and convergent for the cases considered. Comparison of the resulting airloads and structural deformations with experimentally measured data is presented. The effect of grid resolution and temporal accuracy is examined. Rotorcraft airloads prediction presents a very substantial challenge for Computational Fluid Dynamics (CFD). Not only must the unsteady nature of the flow be accurately modeled, but since most rotorcraft blades are not structurally stiff, an accurate simulation must account for the blade structural dynamics. In addition, trim of the rotorcraft to desired thrust and moment targets depends on both aerodynamic loads and structural deformation, and vice versa. Further, interaction of the fuselage with the rotor flow field can be important, so that relative motion between the blades and the fuselage must be accommodated. Thus a complete simulation requires coupled aerodynamics, structures and trim, with the ability to model geometrically complex configurations. NASA has recently initiated a Subsonic Rotary Wing (SRW) Project under the overall Fundamental Aeronautics Program. Within the context of SRW are efforts aimed at furthering the state of the art of high-fidelity rotorcraft flow simulations, using both structured and unstructured meshes. 
Structured-mesh solvers have an advantage in computation speed, but even though remarkably complex configurations may be accommodated using the overset grid approach, generation of complex structured-mesh systems can require months to set up. As a result, many rotorcraft simulations using structured-grid CFD neglect the fuselage. On the other hand, unstructured-mesh solvers are easily able to handle complex geometries, but suffer from slower execution speed. However, advances in both computer hardware and CFD algorithms have made previously state-of-the-art computations routine for unstructured-mesh solvers, so that rotorcraft simulations using unstructured grids are now viable. The aim of the present work is to develop a first principles rotorcraft simulation tool based on an unstructured CFD solver.
Unstructured medical image query using big data - An epilepsy case study.
Istephan, Sarmad; Siadat, Mohammad-Reza
2016-02-01
Big data technologies are critical to the medical field, which requires new frameworks to leverage them. Such frameworks would enable medical experts to test hypotheses by querying huge volumes of unstructured medical data to provide better patient care. The objective of this work is to implement and examine the feasibility of having such a framework to provide efficient querying of unstructured data in unlimited ways. The feasibility study was conducted specifically in the epilepsy field. The proposed framework evaluates a query in two phases. In phase 1, structured data is used to filter the clinical data warehouse. In phase 2, feature extraction modules are executed on the unstructured data in a distributed manner via Hadoop to complete the query. Three modules have been created: volume comparer, surface-to-volume conversion, and average intensity. The framework allows for user-defined modules to be imported to provide unlimited ways to process the unstructured data, hence potentially extending the application of this framework beyond the epilepsy field. Two types of criteria were used to validate the feasibility of the proposed framework - the ability/accuracy of fulfilling an advanced medical query and the efficiency that Hadoop provides. For the first criterion, the framework executed an advanced medical query that spanned both structured and unstructured data with accurate results. For the second criterion, different architectures were explored to evaluate the performance of various Hadoop configurations and were compared to a traditional Single Server Architecture (SSA). The surface-to-volume conversion module performed up to 40 times faster than the SSA (using a 20 node Hadoop cluster) and the average intensity module performed up to 85 times faster than the SSA (using a 40 node Hadoop cluster). Furthermore, the 40 node Hadoop cluster executed the average intensity module on 10,000 models in 3 h, which was not even practical for the SSA.
The current study is limited to the epilepsy field, and further research and more feature extraction modules are required to show its applicability in other medical domains. The proposed framework advances data-driven medicine by unleashing the content of unstructured medical data in an efficient and unlimited way to be harnessed by medical experts. Copyright © 2015 Elsevier Inc. All rights reserved.
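The two-phase evaluation described above (structured filtering first, then feature extraction over unstructured data) can be sketched in a few lines. This is a minimal in-process illustration, not the paper's Hadoop implementation; the record layout and the `average_intensity` stand-in module are invented for the example.

```python
# Sketch of the two-phase query strategy: phase 1 filters on structured
# fields, phase 2 runs a feature-extraction module over the remaining
# unstructured items (here simulated in-process, not distributed).

def phase1_filter(records, min_age):
    """Use structured metadata to narrow the cohort."""
    return [r for r in records if r["age"] >= min_age]

def average_intensity(voxels):
    """Toy stand-in for a feature-extraction module on unstructured data."""
    return sum(voxels) / len(voxels)

def run_query(records, min_age, threshold):
    cohort = phase1_filter(records, min_age)          # phase 1
    return [r["id"] for r in cohort                   # phase 2
            if average_intensity(r["voxels"]) > threshold]

records = [
    {"id": "p1", "age": 34, "voxels": [0.2, 0.9, 0.7]},
    {"id": "p2", "age": 12, "voxels": [0.9, 0.9, 0.9]},
    {"id": "p3", "age": 51, "voxels": [0.1, 0.2, 0.1]},
]
print(run_query(records, min_age=18, threshold=0.5))  # ['p1']
```

The design point is that phase 1 is cheap and shrinks the data volume before the expensive unstructured processing runs, which is what makes the distributed phase 2 tractable.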
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
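The performance indicators named above (compression rate and run-time/throughput) are straightforward to measure; a minimal sketch using the standard-library LZMA codec as a stand-in for the XZ Utils / Blosc / ZFP codecs evaluated in the study, on synthetic particle data:

```python
import lzma
import random
import struct
import time

# Measure compression ratio and throughput for a buffer of synthetic
# particle coordinates. lzma is used here only because it ships with
# Python; the study itself compares Blosc, XZ Utils/LZMA, FPZIP and ZFP.
random.seed(0)
n = 100_000
particles = struct.pack(f"{n}d", *(random.random() for _ in range(n)))

t0 = time.perf_counter()
compressed = lzma.compress(particles, preset=1)  # fast preset: in-situ use
elapsed = time.perf_counter() - t0

rate = len(particles) / len(compressed)      # compression ratio
throughput = len(particles) / elapsed / 1e6  # MB/s

assert lzma.decompress(compressed) == particles  # lossless round trip
print(f"ratio {rate:.2f}, throughput {throughput:.1f} MB/s")
```

Note that uniformly random doubles are close to incompressible for a lossless codec, which is exactly why the study also considers lossy floating-point compressors (FPZIP, ZFP) that trade bounded reconstruction error for higher ratios.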
Mota, Jorge; Esculcas, Carlos
2002-01-01
The main goals of this cross-sectional survey were (a) to describe the associations between sex, age, and physical activity behavior and (b) to describe the age and sex-related associations with the choice of structured (formal) and unstructured (nonformal) physical activity programs. At baseline, data were selected randomly from 1,013 students, from the 7th to the 12th grades. A response rate of 73% (n = 739) was obtained. Accordingly, the sample of this study consisted of 594 adolescents (304 females and 290 males) with mean age of 15.9 years (range 13-20). Physical activity was assessed by means of a questionnaire. A questionnaire about leisure activities was applied to the sample to define the nominal variable "nature of physical activity." The data showed that significantly more girls than boys (p ≤ .001) belonged to the sedentary group (80.7% girls) and low activity group (64.5% girls). Boys more frequently belonged to the more active groups (92.1%; p ≤ .001). The older participants were more engaged in formal physical activities, whereas the younger mostly chose informal ones whatever their level of physical activity. There were more significant differences in girls' physical activity groups (χ² = 20.663, p ≤ .001) than in boys' (χ² = 7.662, p ≤ .05). Furthermore, active girls chose more structured physical activities than their sedentary counterparts (18.8% vs. 83.3%). However, boys preferred unstructured activities regardless of physical activity group (83.7% vs. 58.5%; p ≤ .05). It can be concluded that as age increased, organized sports activities became a relatively more important component of total weekly activity for both male and female participants.
Relational coordination and healthcare management in lung cancer
Romero, José Antonio Vinagre; Señarís, Juan Del Llano; Heredero, Carmen De Pablos; Nuijten, Mark
2014-01-01
In the current socio-economic scenario, characterized by a growing shortage of resources and progressive budget constraints, the need to better coordinate processes in health institutions appears as a relevant aspect to ensure the future sustainability of the system. In this sense, Relational Coordination (RC) provides a valuable opportunity for the reconfiguration of clinical guidelines that currently rest on isolated, single-level considerations. In this research the RC model has been applied to explain best results in the process of diagnosing and offering clinical treatments for lung cancer. Lung cancer has among the highest tumor mortality rates worldwide. Through unstructured and informal interviews with clinicians at both levels (Primary/Specialist Care), a diagnosis of the situation in relation to joint management of lung cancer is provided. Breaks in continuity of coordination are explained by the observed lack of effective knowledge transfer between the two levels. This disconnection justifies the introduction of a modified model of RC for the study and implementation of transfer relations between the knowledge holders, in order to structure consolidated and cooperative evidence-based models that lead to a substantial shortening in response times with a marked improvement in outcomes. To our knowledge, the application of this model to a public health problem bringing together both levels of care has not been made until now. PMID:25516851
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
ERIC Educational Resources Information Center
Laird, Shelby Gull; McFarland-Piazza, Laura; Allen, Sydnye
2014-01-01
Outdoor environmental education and provision of unstructured exploration of nature are often forgotten aspects of the early childhood experience. The aim of this study was to understand how adults' early experiences in nature relate to their attitudes and practices in providing such experiences for young children. This study surveyed 33 parents…
2011-11-01
...the Poisson form of the equations can also be generated by manipulating the computational space, so forcing functions become superfluous... Unstructured methods for region discretization have become common in computational fluid dynamics (CFD) analysis because of certain benefits... application of Winslow elliptic smoothing equations to unstructured meshes. It has been shown that it is not necessary for the computational space of...
ERIC Educational Resources Information Center
Lawton-Sticklor, Nastasia; Bodamer, Scott F.
2016-01-01
This article explores a research partnership between a university-based researcher and a middle school science teacher. Our partnership began with project-based inquiry and continued with unstructured thought-partner spaces: meetings with no agenda where we wrestled with problems of practice. Framed as incubation periods, these meetings allowed us…
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.
1992-01-01
An efficient three-dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between the two differing architectures are made.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
NCAR global model topography generation software for unstructured grids
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.
2015-06-01
It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
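The per-grid-box quantities described above (land fraction, grid-box mean elevation, and sub-grid variance) reduce to simple aggregations over the high-resolution samples falling inside each box. A minimal sketch, with an invented function name and a toy sample list standing in for the high-resolution topography dataset:

```python
# Compute the three per-grid-box statistics the topography software
# produces: land fraction, mean elevation, and sub-grid elevation
# variance, from high-resolution samples assigned to one grid box.

def box_statistics(samples):
    """samples: list of (elevation_m, is_land) pairs inside one grid box."""
    n = len(samples)
    land_frac = sum(1 for _, land in samples if land) / n
    mean_elev = sum(e for e, _ in samples) / n
    variance = sum((e - mean_elev) ** 2 for e, _ in samples) / n
    return land_frac, mean_elev, variance

samples = [(0.0, False), (120.0, True), (300.0, True), (180.0, True)]
land_frac, mean_elev, var = box_statistics(samples)
print(land_frac, mean_elev, var)  # 0.75 150.0 11700.0
```

In the actual software the hard part is not this arithmetic but assigning samples to boxes on unstructured (icosahedral, Voronoi, cubed-sphere) grids, where box boundaries are not latitude-longitude aligned.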
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible navier-stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
NASA Technical Reports Server (NTRS)
Vemaganti, Gururaja R.
1994-01-01
This report presents computations for the Type 4 shock-shock interference flow under laminar and turbulent conditions using unstructured grids. Mesh adaptation was accomplished by remeshing, refinement, and mesh movement. Two two-equation turbulence models were used to analyze turbulent flows. The mean flow governing equations and the turbulence governing equations are solved in a coupled manner. The solution algorithm and the details pertaining to its implementation on unstructured grids are described. Computations were performed at two different freestream Reynolds numbers at a freestream Mach number of 11. Effects of the variation in the impinging shock location are studied. The comparison of the results in terms of wall heat flux and wall pressure distributions is presented.
A note on implementation of decaying product correlation structures for quasi-least squares.
Shults, Justine; Guerra, Matthew W
2014-08-30
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.
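A decaying product structure can be illustrated with a small matrix builder. The parameterization corr(y_j, y_k) = α_j α_{j+1} ⋯ α_{k-1} used below is one common form and an assumption for this sketch, not necessarily the paper's exact model:

```python
# Build a correlation matrix with a decaying product structure:
# R[j][k] is the product of the adjacent-pair parameters between
# measurement occasions j and k, so correlation decays with lag.

def decaying_product_corr(alphas):
    """alphas: n adjacent-pair parameters -> (n+1) x (n+1) matrix."""
    n = len(alphas) + 1
    R = [[1.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(j + 1, n):
            r = 1.0
            for l in range(j, k):
                r *= alphas[l]
            R[j][k] = R[k][j] = r
    return R

R = decaying_product_corr([0.5, 0.5])
print(R[0][1], R[0][2])  # 0.5 0.25
```

This shows why the structure is "fairly general without requiring the large number of parameters" of a fully unstructured matrix: n adjacent-pair parameters determine all n(n+1)/2 off-diagonal correlations.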
Biomedical data mining in clinical routine: expanding the impact of hospital information systems.
Müller, Marcel; Markó, Kornel; Daumke, Philipp; Paetzold, Jan; Roesner, Arnold; Klar, Rüdiger
2007-01-01
In this paper we describe how the promising technology of biomedical data mining can improve the use of hospital information systems: a large set of unstructured, narrative clinical data from a dermatological university hospital, such as discharge letters and other dermatological reports, was processed through a morpho-semantic text retrieval engine ("MorphoSaurus"), integrated with other clinical data using a web-based interface, and brought into daily clinical routine. The user evaluation showed very high user acceptance - this system seems to meet the clinicians' requirements for vertical data mining in electronic patient records. What emerges is the need for integration of biomedical data mining into hospital information systems for clinical, scientific, educational and economic reasons.
Building biomedical web communities using a semantically aware content management system.
Das, Sudeshna; Girard, Lisa; Green, Tom; Weitzman, Louis; Lewis-Bowen, Alister; Clark, Tim
2009-03-01
Web-based biomedical communities are becoming an increasingly popular vehicle for sharing information amongst researchers and are fast gaining an online presence. However, information organization and exchange in such communities is usually unstructured, rendering interoperability between communities difficult. Furthermore, specialized software to create such communities at low cost-targeted at the specific common information requirements of biomedical researchers-has been largely lacking. At the same time, a growing number of biological knowledge bases and biomedical resources are being structured for the Semantic Web. Several groups are creating reference ontologies for the biomedical domain, actively publishing controlled vocabularies and making data available in Resource Description Framework (RDF) language. We have developed the Science Collaboration Framework (SCF) as a reusable platform for advanced structured online collaboration in biomedical research that leverages these ontologies and RDF resources. SCF supports structured 'Web 2.0' style community discourse amongst researchers, makes heterogeneous data resources available to the collaborating scientist, captures the semantics of the relationship among the resources and structures discourse around the resources. The first instance of the SCF framework is being used to create an open-access online community for stem cell research-StemBook (http://www.stembook.org). We believe that such a framework is required to achieve optimal productivity and leveraging of resources in interdisciplinary scientific research. We expect it to be particularly beneficial in highly interdisciplinary areas, such as neurodegenerative disease and neurorepair research, as well as having broad utility across the natural sciences.
Biron, P; Metzger, M H; Pezet, C; Sebban, C; Barthuet, E; Durand, T
2014-01-01
A full-text search tool was introduced into the daily practice of the Léon Bérard Center (France), a health care facility devoted to the treatment of cancer. This tool was integrated into the hospital information system by the IT department, which had been granted full autonomy to improve the system. The objective was to describe the development and various uses of a tool for full-text search of computerized patient records. The technology is based on Solr, an open-source search engine. It is a web-based application that processes HTTP requests and returns HTTP responses. A data processing pipeline that retrieves data from different repositories, then normalizes, cleans and publishes it to Solr, was integrated into the information system of the Léon Bérard center. The IT department also developed user interfaces to allow users to access the search engine within the computerized medical record of the patient. From January to May 2013, 500 queries were launched per month by an average of 140 different users. Several usages of the tool were observed: medical management of patients, medical research, and improving the traceability of medical care in medical records. The sensitivity of the tool for detecting the medical records of patients diagnosed with both breast cancer and diabetes was 83.0%, and its positive predictive value was 48.7% (gold standard: manual screening by a clinical research assistant). The project demonstrates that the introduction of full-text search tools allows practitioners to use unstructured medical information for various purposes.
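The reported sensitivity and positive predictive value follow the standard definitions; a minimal sketch with hypothetical counts (the abstract does not give the underlying confusion-matrix counts):

```python
# Standard retrieval-evaluation metrics against a manual gold standard.
# tp: records the tool and the reviewer both flagged; fp: tool-only;
# fn: reviewer-only. The counts below are hypothetical, for illustration.

def sensitivity(tp, fn):
    """Fraction of truly relevant records the tool retrieved."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Fraction of retrieved records that were truly relevant."""
    return tp / (tp + fp)

tp, fp, fn = 83, 87, 17  # hypothetical counts, not the study's data
print(f"sensitivity={sensitivity(tp, fn):.3f}, ppv={ppv(tp, fp):.3f}")
```

The asymmetry between the two numbers in the study (high sensitivity, modest PPV) is typical of full-text search over clinical notes: free text mentions a diagnosis in many contexts (family history, rule-outs) that a manual reviewer would not count.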
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective: To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods: Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results: Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria.
Conclusions: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
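The quality-measure logic described above (patients aged 18-75 with diabetes whose most recent LDL result is <100 mg/dL) boils down to a denominator and numerator predicate over normalized records. A toy re-implementation; the record layout is hypothetical, not the CEM/QDM representation the platform actually uses:

```python
# Denominator: age 18-75 with diabetes. Numerator: denominator members
# whose most recent LDL result is below 100 mg/dL. Record layout is a
# simplified stand-in for the normalized CEM-based data.

def in_denominator(p):
    return 18 <= p["age"] <= 75 and p["diabetes"]

def in_numerator(p):
    return (in_denominator(p)
            and bool(p["ldl_results"])
            and p["ldl_results"][-1] < 100)  # last entry = most recent

patients = [
    {"age": 54, "diabetes": True,  "ldl_results": [130, 95]},
    {"age": 67, "diabetes": True,  "ldl_results": [110]},
    {"age": 80, "diabetes": True,  "ldl_results": [90]},
    {"age": 41, "diabetes": False, "ldl_results": [85]},
]
den = [p for p in patients if in_denominator(p)]
num = [p for p in patients if in_numerator(p)]
print(len(den), len(num))  # 2 1
```

The hard part the platform addresses is upstream of this predicate: getting diagnoses and lab values out of heterogeneous structured and unstructured sources into a form where a check this simple is trustworthy.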
Assessment of Hybrid RANS/LES Turbulence Models for Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Lockard, David P.
2010-01-01
Predicting the noise from aircraft with exposed landing gear remains a challenging problem for the aeroacoustics community. Although computational fluid dynamics (CFD) has shown promise as a technique that could produce high-fidelity flow solutions, generating grids that can resolve the pertinent physics around complex configurations can be very challenging. Structured grids are often impractical for such configurations. Unstructured grids offer a path forward for simulating complex configurations. However, few unstructured grid codes have been thoroughly tested for unsteady flow problems in the manner needed for aeroacoustic prediction. A widely used unstructured grid code, FUN3D, is examined for resolving the near field in unsteady flow problems. Although the ultimate goal is to compute the flow around complex geometries such as the landing gear, simpler problems that include some of the relevant physics, and are easily amenable to the structured grid approaches are used for testing the unstructured grid approach. The test cases chosen for this study correspond to the experimental work on single and tandem cylinders conducted in the Basic Aerodynamic Research Tunnel (BART) and the Quiet Flow Facility (QFF) at NASA Langley Research Center. These configurations offer an excellent opportunity to assess the performance of hybrid RANS/LES turbulence models that transition from RANS in unresolved regions near solid bodies to LES in the outer flow field. Several of these models have been implemented and tested in both structured and unstructured grid codes to evaluate their dependence on the solver and mesh type. Comparison of FUN3D solutions with experimental data and numerical solutions from a structured grid flow solver are found to be encouraging.
Hong, Na; Wen, Andrew; Shen, Feichen; Sohn, Sunghwan; Liu, Sijia; Liu, Hongfang; Jiang, Guoqian
2018-01-01
Standards-based modeling of electronic health records (EHR) data holds great significance for data interoperability and large-scale usage. Integration of unstructured data into a standard data model, however, poses unique challenges partially due to heterogeneous type systems used in existing clinical NLP systems. We introduce a scalable and standards-based framework for integrating structured and unstructured EHR data leveraging the HL7 Fast Healthcare Interoperability Resources (FHIR) specification. We implemented a clinical NLP pipeline enhanced with an FHIR-based type system and performed a case study using medication data from Mayo Clinic's EHR. Two UIMA-based NLP tools known as MedXN and MedTime were integrated in the pipeline to extract FHIR MedicationStatement resources and related attributes from unstructured medication lists. We developed a rule-based approach for assigning the NLP output types to the FHIR elements represented in the type system, and separately identified the FHIR elements populated directly from the structured EHR data. We used the FHIR resource "MedicationStatement" as an example to illustrate our integration framework and methods. For evaluation, we manually annotated FHIR elements in 166 medication statements from 14 clinical notes generated by Mayo Clinic in the course of patient care, and used standard performance measures (precision, recall and f-measure). The F-scores achieved ranged from 0.73 to 0.99 for the various FHIR element representations. The results demonstrated that our framework based on the FHIR type system is feasible for normalizing and integrating both structured and unstructured EHR data.
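The rule-based assignment of NLP output types to FHIR elements can be sketched as a lookup table driving resource construction. The NLP-side annotation names and the rule table below are invented for illustration, and the output is a simplified dictionary, not schema-valid FHIR (e.g. real MedicationStatement dosage is a list of structures):

```python
# Sketch of a rule table mapping NLP annotation types to (element, field)
# targets in a simplified MedicationStatement-like structure. The table
# entries and annotation names are hypothetical.

NLP_TO_FHIR = {
    "drug":      ("medicationCodeableConcept", "text"),
    "dose":      ("dosage", "doseQuantity"),
    "frequency": ("dosage", "timing"),
}

def to_medication_statement(nlp_annotations):
    resource = {"resourceType": "MedicationStatement", "dosage": {}}
    for ann_type, value in nlp_annotations:
        element, field = NLP_TO_FHIR[ann_type]
        if element == "dosage":
            resource["dosage"][field] = value
        else:
            resource[element] = {field: value}
    return resource

stmt = to_medication_statement(
    [("drug", "aspirin 81 mg"), ("dose", "81 mg"), ("frequency", "daily")])
print(stmt["resourceType"])  # MedicationStatement
```

Centralizing the mapping in one table is what makes such a pipeline extensible: adding a new NLP tool means adding rows, not rewriting the resource builder.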
Schapschröer, M; Baker, J; Schorer, J
2016-08-01
In the context of perceptual-cognitive expertise it is important to know whether physiological loads influence perceptual-cognitive performance. This study examined whether a handball-specific physical exercise load influenced participants' speed and accuracy in a flicker task. At rest and during a specific interval exercise of 86.5-90% HRmax, 35 participants (experts: n=8, advanced: n=13, novices: n=14) performed a handball-specific flicker task with two types of patterns (structured and unstructured). For reaction time, results revealed moderate effect sizes for group, with experts reacting faster than advanced and advanced reacting faster than novices, and for structure, with structured videos being performed faster than unstructured ones. A significant interaction for structure×group was also found, with experts and advanced players faster for structured videos, and novices faster for unstructured videos. For accuracy, significant main effects were found for structure, with structured videos solved more accurately. A significant interaction for structure×group was revealed, with experts and advanced more accurate for structured scenes and novices more accurate for unstructured scenes. A significant interaction was also found for condition×structure; at rest, unstructured and structured scenes were performed with the same accuracy, while under physical exercise, structured scenes were solved more accurately. No other interactions were found. These results were somewhat surprising given previous work in this area, although the impact of a specific physical exercise on a specific perceptual-cognitive task may be different from those tested generally. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Tibi, Moanes H.
2018-01-01
This study aims to investigate and analyze the attitudes and opinions of computer science students at two academic colleges of education with regards to the use of structured and unstructured discussion forums in computer science courses conducted entirely online. Fifty-two students participated in two online courses. The students in each course…
A mesh regeneration method using quadrilateral and triangular elements for compressible flows
NASA Technical Reports Server (NTRS)
Vemaganti, G. R.; Thornton, E. A.
1989-01-01
An adaptive remeshing method using both triangular and quadrilateral elements suitable for high-speed viscous flows is presented. For inviscid flows, the method generates completely unstructured meshes. For viscous flows, structured meshes are generated for boundary layers, and unstructured meshes are generated for inviscid flow regions. Examples of inviscid and viscous adaptations for high-speed flows are presented.
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP).
ERIC Educational Resources Information Center
Siennick, Sonja E.; Osgood, D. Wayne
2012-01-01
Companions are central to explanations of the risky nature of unstructured and unsupervised socializing, yet we know little about whom adolescents are with when hanging out. We examine predictors of how often friendship dyads hang out via multilevel analyses of longitudinal friendship-level data on over 5,000 middle schoolers. Adolescents hang out…
Impact of Machine-Translated Text on Entity and Relationship Extraction
2014-12-01
1. Introduction: Using social network analysis tools is an important asset in...semantic modeling software to automatically build detailed network models from unstructured text. Contour imports unstructured text and then maps the text...onto an existing ontology of frames at the sentence level, using FrameNet, a structured language model, and through Semantic Role Labeling (SRL)...
On unstructured grids and solvers
NASA Technical Reports Server (NTRS)
Barth, T. J.
1990-01-01
The fundamentals and the state-of-the-art technology for unstructured grids and solvers are highlighted. Algorithms and techniques pertinent to mesh generation are discussed. It is shown that grid generation and grid manipulation schemes rely on fast multidimensional searching. Flow solution techniques for the Euler equations, which can be derived from the integral form of the equations, are discussed. Sample calculations are also provided.
Petrovskii, Sergei; Blackshaw, Rod; Li, Bai-Lian
2008-02-01
The impact of intraspecific interactions on ecological stability and population persistence in terms of steady state(s) existence is considered theoretically based on a general competition model. We compare persistence of a structured population consisting of a few interacting (competitive) subpopulations, or groups, to persistence of the corresponding unstructured population. For a general case, we show that if the intra-group competition is stronger than the inter-group competition, then the structured population is less prone to extinction, i.e. it can persist in a parameter range where the unstructured population goes extinct. For a more specific case of a population with hierarchical competition, we show that the relative viability of structured and unstructured populations depends on the type of density dependence in the population growth. Namely, while in the case of logistic growth structured and unstructured populations exhibit equivalent persistence, in the case of Allee dynamics the persistence of a hierarchically structured population is shown to be higher. We then apply these results to the case of behaviourally structured populations and demonstrate that an extreme form of individual aggression can be beneficial at the population level and enhance population persistence.
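The logistic-versus-Allee contrast in this abstract can be sketched numerically. The toy model below (a single population with illustrative parameters, not the paper's general competition model) shows why persistence becomes initial-density dependent under Allee dynamics:

```python
# Toy sketch (not the paper's model): single-population growth under
# logistic vs. Allee dynamics, illustrating why Allee dynamics make
# persistence depend on initial density. Parameter values are illustrative.

def simulate(growth_rate, n0, steps=20000, dt=0.01, allee_threshold=None, capacity=1.0):
    """Forward-Euler integration of dN/dt = r*N*(1 - N/K)  (logistic),
    or dN/dt = r*N*(N/A - 1)*(1 - N/K)  (strong Allee effect)."""
    n = n0
    for _ in range(steps):
        if allee_threshold is None:
            dn = growth_rate * n * (1.0 - n / capacity)
        else:
            dn = growth_rate * n * (n / allee_threshold - 1.0) * (1.0 - n / capacity)
        n = max(n + dt * dn, 0.0)
    return n

# Logistic: any positive initial density approaches the carrying capacity.
print(round(simulate(0.5, 0.01), 3))                      # ~1.0
# Allee: densities below the threshold A=0.2 collapse to extinction...
print(round(simulate(0.5, 0.1, allee_threshold=0.2), 3))  # ~0.0
# ...while densities above it persist.
print(round(simulate(0.5, 0.3, allee_threshold=0.2), 3))  # ~1.0
```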
Queries over Unstructured Data: Probabilistic Methods to the Rescue
NASA Astrophysics Data System (ADS)
Sarawagi, Sunita
Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.
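The decoding step behind CRF-style extraction of structured records can be sketched with a toy linear-chain Viterbi. The labels, features, and weights below are hand-set illustrative assumptions, not a trained model or the authors' system:

```python
# Toy sketch of the decoding step used in CRF-style sequence labeling:
# Viterbi over hand-set (not learned) emission and transition scores.
# Labels and weights are illustrative assumptions, not a trained model.
import re

LABELS = ["O", "CITY", "ZIP"]

def emission(token, label):
    """Hand-crafted feature scores standing in for learned CRF weights."""
    if label == "ZIP":
        return 2.0 if re.fullmatch(r"\d{5}", token) else -2.0
    if label == "CITY":
        return 1.0 if token[:1].isupper() else -1.0
    return 0.5  # mild prior toward "O"

TRANSITION = {  # score for moving from one label to the next
    ("O", "O"): 0.2, ("O", "CITY"): 0.0, ("O", "ZIP"): 0.0,
    ("CITY", "O"): 0.0, ("CITY", "CITY"): 0.3, ("CITY", "ZIP"): 0.5,
    ("ZIP", "O"): 0.0, ("ZIP", "CITY"): -0.5, ("ZIP", "ZIP"): -1.0,
}

def viterbi(tokens):
    # score[l] = best score of any label path ending in label l
    score = {l: emission(tokens[0], l) for l in LABELS}
    back = []
    for tok in tokens[1:]:
        new_score, pointers = {}, {}
        for l in LABELS:
            prev = max(LABELS, key=lambda p: score[p] + TRANSITION[(p, l)])
            new_score[l] = score[prev] + TRANSITION[(prev, l)] + emission(tok, l)
            pointers[l] = prev
        score, back = new_score, back + [pointers]
    # follow back-pointers from the best final label
    best = max(LABELS, key=score.get)
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))

print(viterbi(["ship", "to", "Austin", "78701"]))
```

A real CRF would learn these weights from annotated data and also expose the full distribution over label paths, which is what the row/column uncertainty models in the abstract approximate.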
Probabilistic Flood Mapping using Volunteered Geographical Information
NASA Astrophysics Data System (ADS)
Rivera, S. J.; Girons Lopez, M.; Seibert, J.; Minsker, B. S.
2016-12-01
Flood extent maps are widely used by decision makers and first responders to provide critical information that prevents economic impacts and the loss of human lives. These maps are usually obtained from sensory data and/or hydrologic models, which often have limited coverage in space and time. Recent developments in social media and communication technology have created a wealth of near-real-time, user-generated content during flood events in many urban areas, such as flooded locations, pictures of flooding extent and height, etc. These data could improve decision-making and response operations as events unfold. However, the integration of these data sources has been limited due to the need for methods that can extract and translate the data into useful information for decision-making. This study presents an approach that uses volunteered geographic information (VGI) and non-traditional data sources (i.e., Twitter, Flickr, YouTube, and 911 and 311 calls) to generate/update the flood extent maps in areas where no models and/or gauge data are operational. The approach combines Web-crawling and computer vision techniques to gather information about the location, extent, and water height of the flood from unstructured textual data, images, and videos. These estimates are then used to provide an updated flood extent map for areas surrounding the geo-coordinate of the VGI through the application of a Hydro Growing Region Algorithm (HGRA). HGRA combines hydrologic and image segmentation concepts to estimate a probabilistic flooding extent along the corresponding creeks. Results obtained for a case study in Austin, TX (i.e., 2015 Memorial Day flood) were comparable to those obtained by a calibrated hydrologic model and had good spatial correlation with flooding extents estimated by the Federal Emergency Management Agency (FEMA).
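The growing-region idea can be sketched as a simple flood fill over a gridded terrain model. This is only an illustration of the region-growing step, with a made-up elevation grid; the actual HGRA additionally uses hydrologic and image-segmentation cues:

```python
# A toy sketch of a growing-region step for flood-extent estimation
# (illustrative; the paper's HGRA also uses hydrologic and image cues):
# starting from a geo-located report, grow the flooded region over all
# connected cells whose elevation lies below the reported water level.
from collections import deque

def grow_flood(elevation, seed, water_level):
    """BFS over 4-connected grid cells with elevation <= water_level."""
    rows, cols = len(elevation), len(elevation[0])
    flooded, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in flooded or not (0 <= r < rows and 0 <= c < cols):
            continue
        if elevation[r][c] > water_level:
            continue
        flooded.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = [  # hypothetical elevation grid; 5 = high ground
    [5, 5, 5, 5],
    [5, 1, 1, 5],
    [5, 1, 2, 5],
    [5, 5, 1, 5],  # the 1 at (3, 2) connects through (2, 2)
]
extent = grow_flood(dem, seed=(1, 1), water_level=2)
print(sorted(extent))
```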
Decision support system for drinking water management
NASA Astrophysics Data System (ADS)
Janža, M.
2012-04-01
The problems in drinking water management are complex, and solutions must often be reached under strict time constraints. This is especially pronounced in case of environmental accidents in the catchment areas of the wells that are used for drinking water supply. Decision support systems (DSS) are beneficial tools that can help decision makers and make programs of activities more efficient. In general they are defined as computer-based support systems that help decision makers utilize data and models to solve unstructured problems. The presented DSS was developed in the frame of the INCOME project, which is focused on a long-term stable and safe drinking water supply in Ljubljana. The two main water resources, the Ljubljana polje and Barje alluvial aquifers, are characterized by a strong interconnection of surface water and groundwater, high vulnerability, and high velocities of groundwater flow and pollutant transport. In case of sudden pollution, reactions should be very fast to avoid serious impact on the water supply. The area is under high pressure from urbanization, industry, traffic, agriculture and old environmental burdens. The aim of the developed DSS is to optimize the activities in cases of emergency water management and to optimize the administrative work regarding activities that can improve groundwater quality status. The DSS is an interactive computer system that utilizes a database, hydrological modelling, and experts' and stakeholders' knowledge. It consists of three components, tackling the different abovementioned issues in water management. The first one supports the work on identification, cleaning up and restoration of illegal dumpsites that are a serious threat to the qualitative status of groundwater. The other two components utilize the predictive capability of the hydrological model and scenario analysis. The user interacts with the system through a graphical interface that guides the user step by step to the recommended remedial measures.
Consequently, the acquisition of information to support water management decisions is simplified and faster, thus contributing to more efficient water management and a safer supply of drinking water.
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
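The donor-search kernel described above can be sketched for a 2-D triangular mesh with a barycentric point-containment test. This is a brute-force illustration only; production overset assemblers replace the linear scan with spatial trees (e.g. alternating digital trees) and neighbor-walk strategies:

```python
# A minimal sketch of the donor-search kernel: find which cell of an
# unstructured 2-D triangular mesh contains a query point, using
# barycentric coordinates. Production OGA codes replace the brute-force
# loop with spatial search trees and neighbor walks.

def contains(tri, p, eps=1e-12):
    """True if point p lies in triangle tri = ((x1,y1),(x2,y2),(x3,y3))."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return min(l1, l2, l3) >= -eps  # all barycentric weights non-negative

def donor_search(cells, p):
    """Return the index of the cell containing p, or None (no donor found)."""
    for i, tri in enumerate(cells):
        if contains(tri, p):
            return i
    return None

mesh = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
print(donor_search(mesh, (0.25, 0.25)))  # first triangle
print(donor_search(mesh, (0.9, 0.9)))    # second triangle
print(donor_search(mesh, (2.0, 2.0)))    # outside the mesh: no donor
```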
Boundary-Layer Stability Analysis of the Mean Flows Obtained Using Unstructured Grids
NASA Technical Reports Server (NTRS)
Liao, Wei; Malik, Mujeeb R.; Lee-Rausch, Elizabeth M.; Li, Fei; Nielsen, Eric J.; Buning, Pieter G.; Chang, Chau-Lyan; Choudhari, Meelan M.
2012-01-01
Boundary-layer stability analyses of mean flows extracted from unstructured-grid Navier-Stokes solutions have been performed. A procedure has been developed to extract mean flow profiles from the FUN3D unstructured-grid solutions. Extensive code-to-code validations have been performed by comparing the extracted mean flows as well as the corresponding stability characteristics to the predictions based on structured-grid solutions. Comparisons are made on a range of problems from a simple flat plate to a full aircraft configuration: a modified Gulfstream-III with a natural laminar flow glove. The future aim of the project is to extend the adjoint-based design capability in FUN3D to include natural laminar flow and laminar flow control by integrating it with boundary-layer stability analysis codes, such as LASTRAC.
Mixed Element Type Unstructured Grid Generation for Viscous Flow Applications
NASA Technical Reports Server (NTRS)
Marcum, David L.; Gaither, J. Adam
2000-01-01
A procedure is presented for efficient generation of high-quality unstructured grids suitable for CFD simulation of high Reynolds number viscous flow fields. Layers of anisotropic elements are generated by advancing along prescribed normals from solid boundaries. The points are generated such that either pentahedral or tetrahedral elements with an implied connectivity can be directly recovered. As points are generated they are temporarily attached to a volume triangulation of the boundary points. This triangulation allows efficient local search algorithms to be used when checking merging layers. The existing advancing-front/local-reconnection procedure is used to generate isotropic elements outside of the anisotropic region. Results are presented for a variety of applications. The results demonstrate that high-quality anisotropic unstructured grids can be efficiently and consistently generated for complex configurations.
RNA-protein interactions in an unstructured context.
Zagrovic, Bojan; Bartonek, Lukas; Polyansky, Anton A
2018-05-31
Despite their importance, our understanding of noncovalent RNA-protein interactions is incomplete. This especially concerns the binding between RNA and unstructured protein regions, a widespread class of such interactions. Here, we review the recent experimental and computational work on RNA-protein interactions in an unstructured context with a particular focus on how such interactions may be shaped by the intrinsic interaction affinities between individual nucleobases and protein side chains. Specifically, we articulate the claim that the universal genetic code reflects the binding specificity between nucleobases and protein side chains and that, in turn, the code may be seen as the Rosetta stone for understanding RNA-protein interactions in general. © 2018 The Authors. FEBS Letters published by John Wiley & Sons Ltd on behalf of Federation of European Biochemical Societies.
Big Data in Medicine is Driving Big Changes
Verspoor, K.
2014-01-01
Summary. Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716
Technology concept in the view of Iranian nurses.
Mehraban, Marzieh Adel; Hassanpour, Marzieh; Yazdannik, Ahmadreza; Ajami, Sima
2013-05-01
Over the years, the concept of technology has changed, especially in view of the development of scientific knowledge as well as its philosophical and artistic aspects. However, the concept of technology in nursing is still poorly understood. Only small qualitative studies, especially in Iran, have investigated this phenomenon, and they are mostly about information technology. The aim of this study is to gain a better understanding of the concept of technology in the view of Iranian nurses. This was a qualitative explorative study conducted with a purposeful sample of 23 nurses (staff nurses, supervisors and chief nurse managers) working in Isfahan hospitals. Unstructured interviews included 13 individual interviews and 2 focus-group interviews. In addition, field notes and memos were used in data collection. The data were then transcribed, and conventional content analysis was used for coding and classification. The results showed that there are various definitions of technology among nurses. In the view of nurses, technology means using new equipment, computers, information technology, etc. Data analysis revealed that nurses understand technology in terms of three main concepts: Change, Equipment and Knowledge. In a deeper overview of the categories, we found that the most important concept of technology from the nursing perspective is equipment. Therefore, it is necessary to develop a deeper understanding of the possible concepts of technology among nurses. We propose that technology concepts be defined separately in each discipline.
Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L
2010-11-01
Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
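One step any RADTF-style pipeline needs is de-identification of report text before it enters a teaching file. The rule set below is a toy sketch with a few illustrative patterns, far short of a real de-identification system (which combines many more rules with statistical NLP):

```python
# A toy sketch of rule-based de-identification of radiology report text.
# The patterns and replacement tokens below are illustrative assumptions,
# not the RADTF system's actual NLP pipeline.
import re

PATTERNS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),                 # exam dates
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),      # record numbers
    (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"), "[NAME]"),  # titled names
]

def deidentify(report):
    """Replace each matched identifier with a placeholder token."""
    for pattern, token in PATTERNS:
        report = pattern.sub(token, report)
    return report

text = "Mr. Jones (MRN: 12345) seen 01/02/2010 for wrist pain."
print(deidentify(text))  # placeholders replace the identifiers
```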
The Effect of Shared Information on Pilot/Controller And Controller/Controller Interactions
NASA Technical Reports Server (NTRS)
Hansman, R. John
1999-01-01
In order to respond to the increasing demand on limited airspace system resources, a number of applications of information technology have been proposed, or are under investigation, to improve the efficiency, capacity and reliability of ATM (Air Traffic Management) operations. Much of the attention in advanced ATM technology has focused on advanced automation systems or decision aiding systems to improve the performance of individual Pilots or Controllers. However, the most significant overall potential for information technology appears to be in increasing the shared information between human agents such as Pilots, Controllers or between interacting Controllers or traffic flow managers. Examples of proposed shared information systems in the US include: Controller-Pilot Data Link Communications (CPDLC); Traffic Management Advisor (TMA); Automatic Dependent Surveillance (ADS); Collaborative Decision Making (CDM) and NAS Level Common Information Exchange. Air Traffic Management is fundamentally a human centered process consisting of the negotiation, execution and monitoring of contracts between human agents for the allocation of limited airspace, runway and airport surface resources. The decision processes within ATM tend to be Semistructured. Many of the routine elements in ATM decision making on the part of the Controllers or Pilots are well Structured and can be represented by well defined rules or procedures. However in disrupted conditions, the ATM decision processes are often Unstructured and cannot be reduced to a set of discrete rules. As a consequence, the ability to automate ATM processes will be limited and ATM will continue to be a human centric process where the responsibility and the authority for the negotiation will continue to rest with human Controllers and Pilots. The use of information technology to support the human decision process will therefore be an important aspect of ATM modernization.
The premise of many of the proposed shared information systems is that the performance of ATM operations will improve with an increase in Shared Situation Awareness between agents (Pilots, Controllers, Dispatchers). This will allow better informed control decisions and an improved ability to negotiate between agents. A common information basis may reduce communication load and may increase the level of collaboration in the decision process. In general, information sharing is expected to have advantages for all agents within the system. However, there are important questions which remain to be addressed. For example: What shared information is most important for developing effective Shared Situation Awareness? Are there issues of information saturation? Does information parity create ambiguity in control authority? Will information sharing induce undesirable or unstable gaming behavior between agents? This paper will explore the effect of current and proposed information sharing between different ATM agents. The paper will primarily concentrate on bilateral tactical interactions between specific agents (Pilot/Controller; Controller/Controller; Pilot/Dispatcher; Controller/Dispatcher); however, it will also briefly discuss multilateral interactions and more strategic interactions.
A Semantic Approach for Geospatial Information Extraction from Unstructured Documents
NASA Astrophysics Data System (ADS)
Sallaberry, Christian; Gaio, Mauro; Lesbegueries, Julien; Loustau, Pierre
Local cultural heritage document collections are characterized by their content, which is strongly attached to a territory and its land history (i.e., geographical references). Our contribution aims at making the content retrieval process more efficient whenever a query includes geographic criteria. We propose a core model for a formal representation of geographic information. It takes into account characteristics of different modes of expression, such as written language, captures of drawings, maps, photographs, etc. We have developed a prototype that fully implements geographic information extraction (IE) and geographic information retrieval (IR) processes. All PIV prototype processing resources are designed as Web Services. We propose a geographic IE process based on semantic treatment as a supplement to classical IE approaches. We implement geographic IR by using intersection computing algorithms that seek out any intersection between formal geocoded representations of geographic information in a user query and similar representations in document collection indexes.
Amanzi: An Open-Source Multi-process Simulator for Environmental Applications
NASA Astrophysics Data System (ADS)
Moulton, J. D.; Molins, S.; Johnson, J. N.; Coon, E.; Lipnikov, K.; Day, M.; Barker, E.
2014-12-01
The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments begin with simplified models, and add geometric and geologic complexity as understanding is gained. The Platform toolset (Akuna) generates these conceptual models and Amanzi provides the computational engine to perform the simulations, returning the results for analysis and visualization. In this presentation we highlight key elements of the design, algorithms and implementations used in Amanzi. In particular, the hierarchical and modular design is aligned with the coupled processes being simulated, and naturally supports a wide range of model complexity. This design leverages a dynamic data manager and the synergy of two graphs (one from the high-level perspective of the models, the other from the dependencies of the variables in the model) to enable this flexible model configuration at run time. Moreover, to model sites with complex hydrostratigraphy, as well as engineered systems, we are developing a dual unstructured/structured capability. Recently, these capabilities have been collected in a framework named Arcos, and efforts have begun to improve interoperability between the unstructured and structured AMR approaches in Amanzi. To leverage a range of biogeochemistry capability from the community (e.g., CrunchFlow, PFLOTRAN, etc.), a biogeochemistry interface library called Alquimia was developed. To ensure that Amanzi is truly an open-source community code we require a completely open-source tool chain for our development. We will comment on elements of this tool chain, including testing and documentation development tools such as docutils and Sphinx. Finally, we will show simulation results from our phased demonstrations, including the geochemically complex Savannah River F-Area seepage basins.
Rambo, Robert P.; Tainer, John A.
2011-01-01
Unstructured proteins, RNA or DNA components provide functionally important flexibility that is key to many macromolecular assemblies throughout cell biology. As objective, quantitative experimental measures of flexibility and disorder in solution are limited, small angle scattering (SAS), and in particular small angle X-ray scattering (SAXS), provides a critical technology to assess macromolecular flexibility as well as shape and assembly. Here, we consider the Porod-Debye law as a powerful tool for detecting biopolymer flexibility in SAS experiments. We show that the Porod-Debye region fundamentally describes the nature of the scattering intensity decay, which captures information needed for distinguishing between folded and flexible particles. Particularly for comparative SAS experiments, application of the law, as described here, can distinguish between discrete conformational changes and localized flexibility relevant to molecular recognition and interaction networks. This approach aids insightful analyses of fully and partly flexible macromolecules that are more robust and conclusive than traditional Kratky analyses. Furthermore, we demonstrate for prototypic SAXS data that the ability to calculate particle density by the Porod-Debye criteria, as shown here, provides an objective quality assurance parameter that may prove of general use for SAXS modeling and validation. PMID:21509745
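The folded-versus-flexible distinction the abstract describes can be illustrated with two textbook form factors: the Debye function for an ideal flexible chain and the form factor of a compact sphere. This is a hedged numerical sketch of the underlying scattering-decay behavior, not the paper's analysis:

```python
# A numerical sketch (illustrative, not the paper's analysis) of why
# transformed scattering profiles separate flexible from compact particles:
# a Gaussian chain (Debye function) gives a Kratky (q^2*I) plateau, while a
# solid sphere's Kratky curve decays toward zero (Porod-type q^-4 falloff).
import math

def debye_chain(q, rg=1.0):
    """Form factor of an ideal flexible (Gaussian) chain."""
    x = (q * rg) ** 2
    return 2.0 * (math.exp(-x) + x - 1.0) / x ** 2

def sphere(q, r=1.0):
    """Form factor of a compact homogeneous sphere of radius r."""
    u = q * r
    return (3.0 * (math.sin(u) - u * math.cos(u)) / u ** 3) ** 2

# Kratky transform q^2*I(q) at increasingly large q:
chain_kratky = [q * q * debye_chain(q) for q in (5.0, 10.0, 20.0)]
sphere_kratky = [q * q * sphere(q) for q in (5.0, 10.0, 20.0)]
print([round(v, 3) for v in chain_kratky])   # flattens toward 2/Rg^2 = 2
print([round(v, 4) for v in sphere_kratky])  # keeps decaying
```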
Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL
NASA Technical Reports Server (NTRS)
Port, Dan; Nikora, Allen; Hihn, Jairus; Huang, LiGuo
2011-01-01
Often repositories of systems engineering artifacts at NASA's Jet Propulsion Laboratory (JPL) are so large and poorly structured that they have outgrown our capability to effectively process their contents manually to extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experience exploring such methods in three areas, namely historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies at JPL, has shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick 'wins' or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates on some of these benefits and on the important lessons we learned from preparing and applying text mining to large unstructured system artifacts at JPL, aiming to benefit future text mining applications in similar problem domains and, we hope, in broader areas of application.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.
2005-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Dumbser, Michael
2015-10-01
Several advances have been reported in the recent literature on divergence-free finite volume schemes for Magnetohydrodynamics (MHD). Almost all of these advances are restricted to structured meshes. To retain full geometric versatility, however, it is also very important to make analogous advances in divergence-free schemes for MHD on unstructured meshes. Such schemes utilize a staggered Yee-type mesh, where all hydrodynamic quantities (mass, momentum and energy density) are cell-centered, while the magnetic fields are face-centered and the electric fields, which are so useful for the time update of the magnetic field, are centered at the edges. Three important advances are brought together in this paper in order to make it possible to have high order accurate finite volume schemes for the MHD equations on unstructured meshes. First, it is shown that a divergence-free WENO reconstruction of the magnetic field can be developed for unstructured meshes in two and three space dimensions using a classical cell-centered WENO algorithm, without the need to do a WENO reconstruction for the magnetic field on the faces. This is achieved via a novel constrained L2-projection operator that is used in each time step as a postprocessor of the cell-centered WENO reconstruction so that the magnetic field becomes locally and globally divergence free. Second, it is shown that recently-developed genuinely multidimensional Riemann solvers (called MuSIC Riemann solvers) can be used on unstructured meshes to obtain a multidimensionally upwinded representation of the electric field at each edge. Third, the above two innovations work well together with a high order accurate one-step ADER time stepping strategy, which requires the divergence-free nonlinear WENO reconstruction procedure to be carried out only once per time step. 
The resulting divergence-free ADER-WENO schemes with MuSIC Riemann solvers give us an efficient and easily-implemented strategy for divergence-free MHD on unstructured meshes. Several stringent two- and three-dimensional problems are shown to work well with the methods presented here.
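The discrete constraint at the heart of these staggered schemes, that the magnetic flux through the faces of each cell sums to zero, can be verified in a few lines. The snippet below is a minimal 2-D illustration with a uniform field, not an implementation of the WENO/ADER machinery described above:

```python
# A small sketch of the discrete divergence-free constraint behind
# staggered (Yee-type) MHD schemes: the magnetic flux through the faces of
# each cell must sum to zero. We verify this for a uniform field B on one
# triangle, where each face flux is B dot (outward normal * edge length).

def face_fluxes(tri, B):
    """Outward magnetic flux through each edge of a CCW triangle."""
    fluxes = []
    n = len(tri)
    for i in range(n):
        (x1, y1), (x2, y2) = tri[i], tri[(i + 1) % n]
        # outward normal of a CCW edge, scaled by the edge length
        nx, ny = (y2 - y1), -(x2 - x1)
        fluxes.append(B[0] * nx + B[1] * ny)
    return fluxes

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # counter-clockwise vertices
fluxes = face_fluxes(tri, (0.7, -0.3))
print(round(sum(fluxes), 12))  # discrete divergence of a uniform field: 0.0
```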
The Frictionless Data Package: Data Containerization for Automated Scientific Workflows
NASA Astrophysics Data System (ADS)
Shepherd, A.; Fils, D.; Kinkade, D.; Saito, M. A.
2017-12-01
As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and access data resources across many disciplines, expectations for machines to mediate the process of discovery and access are challenging data repository infrastructure. The challenge is to find ways to deliver data and information that enable machines to make better decisions, by helping them understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkits for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their projects' data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types and the important relationships between a type's data files through human-readable documentation on web pages. Machines directly accessing data files from BCO-DMO could overlook this documentation and misinterpret the data. Instead, BCO-DMO is exploring the idea of data containerization, or packaging data and related information for easier transport, interpretation, and use.
In researching the landscape of data containerization, the Frictionless Data Package (http://frictionlessdata.io/) offers a number of valuable advantages over similar solutions. This presentation will focus on these advantages and on how the Frictionless Data Package addresses a number of real-world use cases in data discovery, access, analysis and visualization.
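The containerization idea above centers on a single descriptor file. A minimal sketch of one, built with only the Python standard library, is shown below; the dataset name, paths, and field definitions are invented for illustration and are not taken from BCO-DMO.

```python
import json

# A minimal Frictionless-style Data Package descriptor (sketch).
# Resource names, paths, and fields are illustrative assumptions.
descriptor = {
    "name": "example-ctd-package",
    "title": "Example CTD cast data",
    "resources": [
        {
            "name": "ctd-casts",
            "path": "data/ctd_casts.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "cast_id", "type": "string"},
                    {"name": "depth_m", "type": "number"},
                    {"name": "temperature_c", "type": "number"},
                ]
            },
        }
    ],
}

# The serialized descriptor is the container's single machine-readable
# entry point (conventionally named datapackage.json).
text = json.dumps(descriptor, indent=2)
```

Because the schema travels with the data files, a machine can validate types and relationships before deciding whether the resource fits its purpose.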
NASA Astrophysics Data System (ADS)
Ahlers, Dirk; Boll, Susanne
In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-06-01
A new anisotropic hr-adaptive mesh technique, based on a discontinuous Galerkin/control-volume discretization on unstructured meshes, has been applied to modelling multiscale transport phenomena. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model can adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes.
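The adaptive idea, resolution following the evolving pollutant front, can be sketched in one dimension. This is an illustrative toy, not the paper's discontinuous Galerkin/control-volume scheme: cells whose jump in cell-averaged concentration exceeds a tolerance are split in two.

```python
# Toy 1-D h-adaptivity sketch (illustrative assumption, not the
# paper's method): refine where the inter-cell jump is large, so
# resolution concentrates at the pollutant front.
def refine(cells, values, tol=0.5):
    """cells: list of (left, right) intervals; values: cell averages."""
    new_cells, new_values = [], []
    for i, (a, b) in enumerate(cells):
        jump = abs(values[i] - values[i - 1]) if i > 0 else 0.0
        if jump > tol:
            mid = 0.5 * (a + b)
            new_cells += [(a, mid), (mid, b)]        # split the cell
            new_values += [values[i], values[i]]
        else:
            new_cells.append((a, b))
            new_values.append(values[i])
    return new_cells, new_values

cells = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
vals = [0.0, 1.0, 1.1]   # sharp front between cells 0 and 1
cells2, vals2 = refine(cells, vals)
```

Repeating the pass after each transport step is what lets the mesh track a moving plume; the anisotropic 3-D case additionally stretches elements along the flow direction.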
Tsigelny, Igor F.; Sharikov, Yuriy; Miller, Mark A.; Masliah, Eliezer
2008-01-01
Misfolding and oligomerization of unstructured proteins is involved in the pathogenesis of Parkinson’s (PD), Alzheimer’s (AD), Huntington’s, and other neurodegenerative disorders. Elucidation of possible conformations of these proteins and their interactions with the membrane is necessary to understand the molecular mechanisms of neurodegeneration. We developed a strategy that makes it possible to elucidate the molecular mechanisms of alpha-synuclein aggregation, a key molecular event in the pathogenesis of PD. This strategy can also be useful for the study of other unstructured proteins involved in neurodegeneration. The results of these theoretical studies have been confirmed with biochemical and electrophysiological studies. Our studies provide insights into the molecular mechanism for PD initiation and progression, and provide a useful paradigm for identifying possible therapeutic interventions through computational modeling. PMID:18640077
Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.
2009-01-01
An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
The computation of three-dimensional flows using unstructured grids
NASA Technical Reports Server (NTRS)
Morgan, K.; Peraire, J.; Peiro, J.; Hassan, O.
1991-01-01
A general method is described for automatically discretizing, into unstructured assemblies of tetrahedra, the three-dimensional solution domains of complex shape which are of interest in practical computational aerodynamics. An algorithm for the solution of the compressible Euler equations which can be implemented on such general unstructured tetrahedral grids is described. This is an explicit cell-vertex scheme which follows a general Taylor-Galerkin philosophy. The approach is employed to compute a transonic inviscid flow over a standard wing and the results are shown to compare favorably with experimental observations. As a more practical demonstration, the method is then applied to the analysis of inviscid flow over a complete modern fighter configuration. The effect of using mesh adaptivity is illustrated when the method is applied to the solution of high speed flow in an engine inlet.
NASA Astrophysics Data System (ADS)
Song, Yang; Srinivasan, Bhuvana
2017-10-01
The discontinuous Galerkin (DG) method has the advantage of resolving shocks and sharp gradients that occur in neutral fluids and plasmas. An unstructured DG code has been developed in this work to study plasma instabilities using the two-fluid plasma model. Unstructured meshes are known to produce small and randomized grid errors compared to traditional structured meshes. Computational tests for Rayleigh-Taylor instabilities in radially-converging flows are performed using the MHD model. Choice of grid geometry is not obvious for simulations of instabilities in these circular configurations. Comparisons of the effects for different grids are made. A 2D magnetic nozzle simulation using the two-fluid plasma model is also performed. A vacuum boundary condition technique is applied to accurately solve the Riemann problem on the edge of the plume.
Identifying and managing inappropriate hospital utilization: a policy synthesis.
Payne, S M
1987-01-01
Utilization review, the assessment of the appropriateness and efficiency of hospital care through review of the medical record, and utilization management, deliberate action by payers or hospital administrators to influence providers of hospital services to increase the efficiency and effectiveness with which services are provided, are valuable but relatively unfamiliar strategies for containing hospital costs. The purpose of this synthesis is to increase awareness of the scope of and potential for these approaches among health services managers and administrators, third-party payers, policy analysts, and health services researchers. The synthesis will assist the reader to trace the conceptual context and the historical development of utilization review from unstructured methods using individual physicians' professional judgment to structured methods using explicit criteria; to establish the context of utilization review and clarify its uses; to understand the concepts and tools used in assessing the efficiency of hospital use; and to select, design, and evaluate utilization review and utilization management programs. The extent of inappropriate (medically unnecessary) hospital utilization and the factors associated with it are described. Implications for managers, providers, and third-party payers in targeting utilization review and in designing and evaluating utilization management programs are discussed. PMID:3121538
Chasing the long tail of environmental data: PEcAn is nuts about Brown Dog
NASA Astrophysics Data System (ADS)
Dietze, M.; Cowdery, E.; Desai, A. R.; Gardella, A.; Kelly, R.; Kooper, R.; LeBauer, D.; Mantooth, J.; McHenry, K.; Serbin, S.; Shiklomanov, A. N.; Simkins, J.; Viskari, T.; Raiho, A.
2015-12-01
The Predictive Ecosystem Analyzer (PEcAn) is an ecological modeling informatics system that manages the flow of information in and out of terrestrial biosphere models, along with provenance tracking, visualization, analysis, and model-data fusion. We are in the process of scaling the PEcAn system from one that currently supports a handful of models and system nodes to one that aims to provide bottom-up connectivity across much of the model-data integration done by the terrestrial biogeochemistry community. This talk reports on the current state of PEcAn, its data-processing workflows, and the near- and long-term challenges faced. Particular emphasis will be given to the tools being developed by the Brown Dog project to make unstructured, un-curated data more accessible: the Data Access Proxy (DAP) and the Data Tilling Service (DTS). The use of the DAP to process meteorological data and the DTS to read vegetation data will be demonstrated, and other Brown Dog environmental case studies will be briefly touched on. Beyond data processing, facilitating data discovery and import into PEcAn and distributing analyses across the PEcAn network (i.e. bringing models to data) are key challenges moving forward.
GramHealth: a bottom-up approach to provide preventive healthcare services for unreached community.
Ahmed, Ashir; Kabir, Lutfe; Kai, Eiko; Inoue, Sozo
2013-01-01
Insufficient healthcare facilities and the unavailability of medical experts in rural areas are the two major reasons that have kept people out of reach of healthcare services. With the recent penetration of mobile phones and the demand for basic healthcare services, remote health consultancy over mobile phones has become popular in developing countries. In this paper, we introduce two such representative initiatives from Bangladesh and discuss the technical challenges they face in serving remote patients. To address these issues, we have prototyped a box with the necessary diagnostic tools, which we call a "portable clinic", and a software tool, "GramHealth", for managing patient information. We carried out experiments in three villages in Bangladesh to observe the usability of the portable clinic and verify the functionality of "GramHealth". We present a qualitative analysis of the results obtained from the experiments. The GramHealth DB has a unique combination of structured, semi-structured and unstructured data. We are currently examining these data to see whether they can be treated as Big Data and, if so, how to analyze them and what to expect from them to support better clinical decisions.
U-Compare: share and compare text mining tools with UIMA.
Kano, Yoshinobu; Baumgartner, William A; McCrohon, Luke; Ananiadou, Sophia; Cohen, K Bretonnel; Hunter, Lawrence; Tsujii, Jun'ichi
2009-08-01
Due to the increasing number of text mining resources (tools and corpora) available to biologists, interoperability issues between these resources are becoming significant obstacles to using them effectively. UIMA, the Unstructured Information Management Architecture, is an open framework designed to aid in the construction of more interoperable tools. U-Compare is built on top of the UIMA framework, and provides both a concrete framework for out-of-the-box text mining and a sophisticated evaluation platform allowing users to run specific tools on any target text, generating both detailed statistics and instance-based visualizations of outputs. U-Compare is a joint project, providing the world's largest, and still growing, collection of UIMA-compatible resources. These resources, originally developed by different groups for a variety of domains, include many famous tools and corpora. U-Compare can be launched straight from the web, without needing to be manually installed. All U-Compare components are provided ready-to-use and can be combined easily via a drag-and-drop interface without any programming. External UIMA components can also simply be mixed with U-Compare components, without distinguishing between locally and remotely deployed resources. http://u-compare.org/
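The component model underlying this interoperability can be caricatured in a few lines. This is illustrative only: real UIMA components operate on a typed CAS (Common Analysis Structure) object rather than plain dicts, and the component names here are invented.

```python
# Sketch of UIMA-style composable analysis components (assumption:
# a dict stands in for the CAS). Each component reads shared state
# and adds stand-off annotations, so independently developed tools
# can be chained without knowing about each other.
def tokenizer(doc):
    start = 0
    for tok in doc["text"].split():
        begin = doc["text"].index(tok, start)
        doc["annotations"].append(("Token", begin, begin + len(tok)))
        start = begin + len(tok)
    return doc

def gene_tagger(doc):
    # Toy dictionary tagger standing in for a real NER component.
    for kind, b, e in list(doc["annotations"]):
        if kind == "Token" and doc["text"][b:e] == "BRCA1":
            doc["annotations"].append(("Gene", b, e))
    return doc

doc = {"text": "BRCA1 mutations increase risk", "annotations": []}
for component in (tokenizer, gene_tagger):   # the "pipeline"
    doc = component(doc)
```

Because every component consumes and produces the same shared structure, swapping one tokenizer for another, as U-Compare's drag-and-drop interface allows, requires no code changes elsewhere.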
Learning Long-Range Vision for an Offroad Robot
2008-09-01
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range; unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three-dimensional mixed-element unstructured meshes comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous test cases of the compressible Euler equations on several types of grids. Schemes of up to fifth order of spatial accuracy are considered.
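The building blocks of such schemes are easiest to see in the classical one-dimensional fifth-order WENO-JS reconstruction, sketched below for a scalar on a uniform grid. The paper's mixed-element 3-D formulation generalizes the same ingredients: candidate stencils, smoothness indicators, and nonlinear weights.

```python
# Classical 1-D WENO-JS reconstruction at the i+1/2 face (sketch,
# scalar, uniform grid; not the paper's 3-D mixed-element scheme).
def weno5(v, eps=1e-6):
    """v = [v(i-2), v(i-1), v(i), v(i+1), v(i+2)] cell averages."""
    v0, v1, v2, v3, v4 = v
    # Three third-order candidate reconstructions.
    p0 = (2 * v0 - 7 * v1 + 11 * v2) / 6.0
    p1 = (-v1 + 5 * v2 + 2 * v3) / 6.0
    p2 = (2 * v2 + 5 * v3 - v4) / 6.0
    # Smoothness indicators penalize oscillatory stencils.
    b0 = 13.0 / 12 * (v0 - 2 * v1 + v2) ** 2 + 0.25 * (v0 - 4 * v1 + 3 * v2) ** 2
    b1 = 13.0 / 12 * (v1 - 2 * v2 + v3) ** 2 + 0.25 * (v1 - v3) ** 2
    b2 = 13.0 / 12 * (v2 - 2 * v3 + v4) ** 2 + 0.25 * (3 * v2 - 4 * v3 + v4) ** 2
    # Nonlinear weights from the linear weights (0.1, 0.6, 0.3).
    a0, a1, a2 = (d / (eps + b) ** 2 for d, b in ((0.1, b0), (0.6, b1), (0.3, b2)))
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

On smooth data all three indicators agree and the weights revert to the linear ones, recovering fifth-order accuracy; near a discontinuity the crossing stencil's weight collapses, which is the non-oscillatory mechanism.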
NASA Technical Reports Server (NTRS)
McCloud, Peter L.
2010-01-01
Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.
Implementation of a parallel unstructured Euler solver on the CM-5
NASA Technical Reports Server (NTRS)
Morano, Eric; Mavriplis, D. J.
1995-01-01
An efficient unstructured 3D Euler solver is parallelized on a Thinking Machines Corporation Connection Machine 5 (CM-5), a distributed-memory computer with vector capability. In this paper, the single-instruction multiple-data (SIMD) strategy is employed through the use of the CM Fortran language and the CMSSL scientific library. The performance of the CMSSL mesh partitioner is evaluated, and the overall efficiency of the parallel flow solver is discussed.
Unstructured grids for sonic-boom analysis
NASA Technical Reports Server (NTRS)
Fouladi, Kamran
1993-01-01
A fast and efficient unstructured grid scheme is evaluated for sonic-boom applications. The scheme is used to predict the near-field pressure signatures of a body of revolution at several body lengths below the configuration, and those results are compared with experimental data. The introduction of the 'sonic-boom grid topology' to this scheme makes it well suited for sonic-boom applications, thus providing an alternative to conventional multiblock structured grid schemes.
A perspective on unstructured grid flow solvers
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
This survey paper assesses the status of compressible Euler and Navier-Stokes solvers on unstructured grids. Different spatial and temporal discretization options for steady and unsteady flows are discussed. The integration of these components into an overall framework to solve practical problems is addressed. Issues such as grid adaptation, higher order methods, hybrid discretizations and parallel computing are briefly discussed. Finally, some outstanding issues and future research directions are presented.
Control of nonlinear systems using terminal sliding modes
NASA Technical Reports Server (NTRS)
Venkataraman, S. T.; Gulati, S.
1992-01-01
The development of an approach to control synthesis for robust robot operations in unstructured environments is discussed. To enhance control performance with full model information, the authors introduce the notion of terminal convergence and develop control laws based on a class of sliding modes, denoted as terminal sliders. They demonstrate that terminal sliders provide robustness to parametric uncertainty without having to resort to high-frequency control switching, as in the case of conventional sliders. It is shown that the proposed method leads to greater guaranteed precision in all control cases discussed.
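The contrast between terminal and conventional sliders can be illustrated on the simplest possible system. The sketch below is a toy first-order integrator, not the authors' robot-control formulation: with the fractional-power feedback u = -beta * sign(x)|x|^(q/p), q/p in (0, 1), the state reaches zero in finite time, whereas the linear slider only decays exponentially.

```python
import math

# Toy illustration of terminal (finite-time) convergence (assumption:
# a first-order plant xdot = u, not the paper's robot dynamics).
def simulate(exponent, beta=1.0, x0=1.0, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        # sign(x)*|x|^exponent; exponent < 1 gives the terminal slider.
        u = -beta * math.copysign(abs(x) ** exponent, x)
        x += dt * u          # forward-Euler integration
    return x

x_terminal = simulate(1.0 / 3.0)   # terminal slider: settles in finite time
x_linear = simulate(1.0)           # conventional linear slider: ~ exp(-t)
```

For the terminal case the closed-form settling time is x0^(2/3)/(beta*(2/3)) = 1.5 time units here, so by t = 5 the state has essentially reached zero, while the linear case still carries e^(-5) of its initial error; this is the "terminal convergence" notion without high-frequency switching.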
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
With multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.
Managing knowledge business intelligence: A cognitive analytic approach
NASA Astrophysics Data System (ADS)
Surbakti, Herison; Ta'a, Azman
2017-10-01
The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) for achieving a competitive edge in the context of intellectual capital. The methodology includes a review of the literature, analysis of interview data from managers in the corporate sector, and models established by different authors. BI technologies are strongly associated with KM processes in attaining competitive advantage. KM is strongly influenced by human and social factors, and efficient systems run under BI tactics and technologies can turn this knowledge into the organization's most valuable asset. Predictive analytics, however, is grounded in the field of BI, and extracting tacit knowledge as a new source for BI analysis remains a major challenge. Advanced analytic methods that address the diversity of the data corpus, structured and unstructured, require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive and prescriptive results. This remains a significant challenge, and this paper elaborates the details of this initial work.
Software Vulnerability Taxonomy Consolidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polepeddi, Sriram S.
2004-12-07
In today's environment, computers and networks are increasingly exposed to a number of software vulnerabilities. Information about these vulnerabilities is collected and disseminated via various large publicly available databases such as BugTraq, OSVDB and ICAT. No one of these databases individually covers all aspects of a vulnerability, and they lack a standard format among them, making it difficult for end-users to easily compare vulnerabilities. A central database of vulnerabilities has not been available until today for a number of reasons, such as the non-uniform methods by which current vulnerability database providers receive information, disagreement over which features of a particular vulnerability are important and how best to present them, and the non-utility of the information presented in many databases. The goal of this software vulnerability taxonomy consolidation project is to address the need for a universally accepted vulnerability taxonomy that classifies vulnerabilities in an unambiguous manner. A consolidated vulnerability database (CVDB) was implemented that coalesces and organizes vulnerability data from disparate data sources. Based on the work done in this paper, there is strong evidence that a consolidated taxonomy encompassing and organizing all relevant data can be achieved. However, three primary obstacles remain: the lack of a common 'primary key' for referencing, unstructured and free-form descriptions of necessary vulnerability data, and the lack of data on all aspects of a vulnerability. This work has only considered data that can be unambiguously extracted from various data sources by straightforward parsers. It is felt that even with the use of more advanced information mining tools, which can wade through the sea of unstructured vulnerability data, this current integration methodology would still provide repeatable, unambiguous, and exhaustive results.
Though the goal of coalescing all available data, which would be of use to system administrators, software developers and vulnerability researchers, is not yet achieved, this work has resulted in the most exhaustive collection of vulnerability data to date.
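The consolidation step, coalescing records from disparate sources on a common primary key, might look like the following sketch. The source names and fields are invented for illustration; they are not the actual BugTraq/OSVDB/ICAT schemas.

```python
# Sketch of CVDB-style consolidation (assumption: each record carries
# a shared id usable as the 'primary key' the abstract says is missing
# in practice).
def consolidate(*sources):
    merged = {}
    for source in sources:
        for rec in source:
            key = rec["id"]
            entry = merged.setdefault(key, {"id": key, "sources": []})
            entry["sources"].append(rec["source"])
            for field, value in rec.items():
                if field not in ("id", "source"):
                    entry.setdefault(field, value)  # first source wins
    return merged

a = [{"id": "CVE-2004-0001", "source": "dbA", "severity": "high"}]
b = [{"id": "CVE-2004-0001", "source": "dbB", "summary": "buffer overflow"}]
db = consolidate(a, b)
```

The "first source wins" rule is the simplest conflict policy; the abstract's obstacles (no shared key, free-form descriptions) are precisely what make the real merge harder than this.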
Asselbergs, Folkert W; Visseren, Frank Lj; Bots, Michiel L; de Borst, Gert J; Buijsrogge, Marc P; Dieleman, Jan M; van Dinther, Baukje Gf; Doevendans, Pieter A; Hoefer, Imo E; Hollander, Monika; de Jong, Pim A; Koenen, Steven V; Pasterkamp, Gerard; Ruigrok, Ynte M; van der Schouw, Yvonne T; Verhaar, Marianne C; Grobbee, Diederick E
2017-05-01
Background: Cardiovascular disease remains the major contributor to morbidity and mortality. In routine care for patients with an elevated cardiovascular risk or with symptomatic cardiovascular disease, information is mostly collected in an unstructured manner, making the data of limited use for structural feedback, quality control, learning and scientific research. Objective: The Utrecht Cardiovascular Cohort (UCC) initiative aims to create an infrastructure for uniform registration of cardiovascular information in routine clinical practice for patients referred for cardiovascular care at the University Medical Center Utrecht, the Netherlands. This infrastructure will promote optimal care according to guidelines, continuous quality control in a learning healthcare system and creation of a research database. Methods: The UCC comprises three parts. UCC-1 comprises enrolment of all eligible cardiovascular patients, in whom the same information will be collected, based on the Dutch cardiovascular management guideline. A sample of UCC-1 will be invited for UCC-2. UCC-2 involves an enrichment through extensive clinical measurements with emphasis on heart failure, cerebral ischaemia, arterial aneurysms, diabetes mellitus and elevated blood pressure. UCC-3 comprises on-top studies, with in-depth measurements in smaller groups of participants, typically based on dedicated project grants. All participants are followed up for morbidity and mortality through linkage with national registries. Conclusion: In a multidisciplinary effort with physicians, patients and researchers, the UCC sets a benchmark for a learning cardiovascular healthcare system. UCC offers an invaluable resource for future high-quality care as well as for first-class research for investigators.
Transforming microbial genotyping: a robotic pipeline for genotyping bacterial strains.
O'Farrell, Brian; Haase, Jana K; Velayudhan, Vimalkumar; Murphy, Ronan A; Achtman, Mark
2012-01-01
Microbial genotyping increasingly deals with large numbers of samples, and data are commonly evaluated by unstructured approaches, such as spread-sheets. The efficiency, reliability and throughput of genotyping would benefit from the automation of manual manipulations within the context of sophisticated data storage. We developed a medium-throughput genotyping pipeline for MultiLocus Sequence Typing (MLST) of bacterial pathogens. This pipeline was implemented through a combination of four automated liquid handling systems, a Laboratory Information Management System (LIMS) consisting of a variety of dedicated commercial operating systems and programs, including a Sample Management System, plus numerous Python scripts. All tubes and microwell racks were bar-coded and their locations and status were recorded in the LIMS. We also created a hierarchical set of items that could be used to represent bacterial species, their products and experiments. The LIMS allowed reliable, semi-automated, traceable bacterial genotyping from initial single colony isolation and sub-cultivation through DNA extraction and normalization to PCRs, sequencing and MLST sequence trace evaluation. We also describe robotic sequencing to facilitate cherry-picking of sequence dropouts. This pipeline is user-friendly, with a throughput of 96 strains within 10 working days at a total cost of < €25 per strain. Since developing this pipeline, >200,000 items were processed by two to three people. Our sophisticated automated pipeline can be implemented by a small microbiology group without extensive external support, and provides a general framework for semi-automated bacterial genotyping of large numbers of samples at low cost.
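The traceability idea, every bar-coded item carrying its location and processing history, can be sketched as follows. This is a toy model, not the commercial LIMS described above; the step names mirror the pipeline stages in the abstract.

```python
# Toy sketch of LIMS-style traceability (assumption: a plain class
# stands in for the real Sample Management System records).
class Item:
    def __init__(self, barcode, location):
        self.barcode = barcode
        self.location = location
        self.history = []          # ordered, auditable processing log

    def process(self, step, new_location=None):
        self.history.append(step)
        if new_location:
            self.location = new_location

tube = Item("BC0001", "rack-A1")
for step in ["colony isolation", "DNA extraction", "PCR", "sequencing"]:
    tube.process(step)
```

Because the log is append-only and keyed by barcode, any sequence trace can be walked back to the original single-colony isolate, which is the traceability property the pipeline depends on.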
Non-Cooperative Group Decision Support Systems: Problems and Some Solutions.
1986-09-01
It appears that in these situations the content of the problem and the structure of the problem are "fuzzy," requiring active cooperation between the participants. Some unstructured parts will remain; this partial 'unstructurability' is due to uncertainty, fuzziness, ignorance, and an inability to structure all aspects of the problem. Alternatives can be ranked according to the Analytic Hierarchy Process (AHP) technique (Gui, 1985); the AHP algorithm begins by performing a pairwise comparison of the decision elements.
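The pairwise-comparison step of AHP mentioned above can be sketched as follows. The geometric-mean approximation is used here for brevity; the exact AHP prescription uses the principal eigenvector of the comparison matrix, and the two coincide for a perfectly consistent matrix like this one.

```python
# Sketch of AHP priority derivation (assumption: row geometric means
# approximate the principal eigenvector; exact for consistent matrices).
def geometric_mean(row):
    p = 1.0
    for x in row:
        p *= x
    return p ** (1.0 / len(row))

def ahp_priorities(matrix):
    gmeans = [geometric_mean(row) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Alternative A is judged 3x as important as B and 6x as important as C;
# the matrix is reciprocal (m[j][i] = 1/m[i][j]).
M = [[1.0, 3.0, 6.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 6.0, 1.0 / 2.0, 1.0]]
weights = ahp_priorities(M)   # approximately [2/3, 2/9, 1/9]
```

For inconsistent judgments, which is where the "fuzziness" above shows up, AHP additionally computes a consistency ratio to flag comparison matrices that should be revisited.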
A 3D Unstructured Mesh Euler Solver Based on the Fourth-Order CESE Method
2013-06-01
David L. Bilyeu. Similarly, the fluxes f_i^{x,y,z} and their derivatives inside a solution element (SE) are discretized by a Taylor series expansion, with coefficients given by the mixed partial derivatives ∂^{I+J+K+L} f_i^{x,y,z} / (∂x^I ∂y^J ∂z^K ∂t^L).
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Himansu, Ananda; Hultgren, Lennart S.
2003-01-01
A 3-D space-time CE/SE Navier-Stokes solver using an unstructured hexahedral grid is described and applied to a circular jet screech noise computation. The present numerical results for an underexpanded jet, corresponding to a fully expanded Mach number of 1.42, capture the dominant and nonaxisymmetric 'B' screech mode and are generally in good agreement with existing experiments.
A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.
1999-01-01
Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
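Once elements are linearized along such a walk, balanced partitioning reduces to cutting the 1-D ordering into contiguous, equal-weight segments; after adaptation, only the segment boundaries shift, which limits data migration. The sketch below shows that final cutting step only; constructing the self-avoiding walk itself, the hard part of the method, is omitted.

```python
# Sketch of segment-based partitioning of a linearized mesh
# (assumption: walk_weights[i] is the work associated with the i-th
# element along the self-avoiding walk).
def partition_walk(walk_weights, nparts):
    total = sum(walk_weights)
    target = total / nparts
    parts, current, acc = [], [], 0.0
    for i, w in enumerate(walk_weights):
        current.append(i)
        acc += w
        if acc >= target and len(parts) < nparts - 1:
            parts.append(current)       # close this partition
            current, acc = [], 0.0
    parts.append(current)               # last partition takes the rest
    return parts

# Eight unit-weight elements along the walk, two processors.
parts = partition_walk([1.0] * 8, 2)
```

Contiguity along the walk is what gives the partitions reasonable locality in the mesh, since consecutive walk elements are geometric neighbors.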
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations, extending Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed to refine and optimize unstructured meshes according to arbitrary mesh-quality measures. Several criteria, including Delaunay triangulations, are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
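In two dimensions, Lawson's criterion decides a swap with the empty-circumcircle test, sketched below; the paper's contribution is the 3-D generalization, which is substantially more involved.

```python
# 2-D Delaunay in-circle predicate driving Lawson edge swapping
# (sketch; the paper extends these local transformations to 3-D).
def in_circle(a, b, c, d):
    """> 0 iff d lies inside the circumcircle of ccw triangle (a, b, c)."""
    rows = [(p[0] - d[0], p[1] - d[1]) for p in (a, b, c)]
    m = [(x, y, x * x + y * y) for x, y in rows]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def should_swap(a, b, c, d):
    # Triangles (a, b, c) and (a, c, d) share an edge; swap when d
    # violates the Delaunay (empty-circumcircle) criterion.
    return in_circle(a, b, c, d) > 0
```

Note that floating-point evaluation of this determinant can misclassify nearly cocircular points; production mesh codes use exact or adaptive-precision predicates for robustness.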
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive- grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Tompa, P.; Bánki, P.; Bokor, M.; Kamasa, P.; Kovács, D.; Lasanda, G.; Tompa, K.
2006-01-01
Proton NMR intensity and differential scanning calorimetry measurements were carried out on an intrinsically unstructured late embryogenesis abundant protein, ERD10, the globular BSA, and various buffer solutions to characterize water and ion binding of proteins by this novel combination of experimental approaches. By quantifying the number of hydration water molecules, the results demonstrate the interaction between the protein and NaCl and between buffer and NaCl on a microscopic level. The findings overall provide direct evidence that the intrinsically unstructured ERD10 not only has a high hydration capacity but can also bind a large amount of charged solute ions. In accord, the dehydration stress function of this protein probably results from its simultaneous action of retaining water in the drying cells and preventing an adverse increase in ionic strength, thus countering deleterious effects such as protein denaturation. PMID:16798808
An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.
2003-01-01
An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.
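The key property claimed — an adjoint iteration exactly dual to the direct-differentiation (primal) one, and hence with the same asymptotic convergence rate — can be illustrated on a linear model problem. This is a minimal sketch, not the paper's Navier-Stokes implementation: the matrix, size, and Jacobi-style preconditioner are illustrative. Iterating the primal system with preconditioner M and the adjoint system with Mᵀ yields iteration matrices that are transposes of each other and therefore share eigenvalues, so both residuals decay at the same rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # diagonally dominant
b = rng.standard_normal(n)   # primal right-hand side
g = rng.standard_normal(n)   # adjoint right-hand side (objective sensitivity)

Minv = np.linalg.inv(np.diag(np.diag(A)))  # Jacobi-style preconditioner

x = np.zeros(n)
lam = np.zeros(n)
for _ in range(200):
    x = x + Minv @ (b - A @ x)              # primal iteration
    lam = lam + Minv.T @ (g - A.T @ lam)    # exactly dual adjoint iteration

r_primal = np.linalg.norm(b - A @ x)
r_adjoint = np.linalg.norm(g - A.T @ lam)
```

Because the adjoint step uses the exact transpose of the primal preconditioned operator, no separate tuning of the adjoint solver is needed — the duality guarantees matching convergence behavior.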
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of the domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Unstructured viscous grid generation by advancing-front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1993-01-01
A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making it extendable to tetrahedral viscous grid generation.
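The advancing-layers idea — marching highly stretched cells off the wall until the spacing approaches the isotropic cell size, where a conventional advancing-front pass takes over — can be sketched with a simple geometric growth law for the wall-normal spacing. The first height, growth ratio, and isotropic size below are illustrative values, not VGRID parameters:

```python
def layer_heights(first_height, ratio, isotropic_size):
    """Wall-normal spacings for an advancing-layers pass: grow the
    spacing geometrically until it reaches the isotropic cell size,
    where a conventional advancing-front pass would take over."""
    heights, h = [], first_height
    while h < isotropic_size:
        heights.append(h)
        h *= ratio
    return heights

# e.g. a 1e-5 first cell height, 30% growth, 1e-3 isotropic target
hs = layer_heights(1e-5, 1.3, 1e-3)
```

The resulting cells are highly anisotropic near the wall (aspect ratio roughly the surface edge length divided by the local layer height) and relax smoothly toward equilateral cells.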
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris
2012-01-01
A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
Schuler, Benjamin; Soranno, Andrea; Hofmann, Hagen; Nettels, Daniel
2016-07-05
The properties of unfolded proteins have long been of interest because of their importance to the protein folding process. Recently, the surprising prevalence of unstructured regions or entirely disordered proteins under physiological conditions has led to the realization that such intrinsically disordered proteins can be functional even in the absence of a folded structure. However, owing to their broad conformational distributions, many of the properties of unstructured proteins are difficult to describe with the established concepts of structural biology. We have thus seen a reemergence of polymer physics as a versatile framework for understanding their structure and dynamics. An important driving force for these developments has been single-molecule spectroscopy, as it allows structural heterogeneity, intramolecular distance distributions, and dynamics to be quantified over a wide range of timescales and solution conditions. Polymer concepts provide an important basis for relating the physical properties of unstructured proteins to folding and function.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image-compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using 2 to 128 processors show, on average, about 60% parallel efficiency.
Population-level mating patterns and fluctuating asymmetry in swordtail hybrids
NASA Astrophysics Data System (ADS)
Culumber, Zachary W.; Rosenthal, Gil G.
2013-08-01
Morphological symmetry is a correlate of fitness-related traits or even a direct target of mate choice in a variety of taxa. In these taxa, when females discriminate among potential mates, increased selection on males should reduce fluctuating asymmetry (FA). Hybrid populations of the swordtails Xiphophorus birchmanni and Xiphophorus malinche vary from panmictic (unstructured) to highly structured, in which reproductive isolation is maintained among hybrids and parental species. We predicted that FA in flanking vertical bars used in sexual signalling should be lower in structured populations, where non-random mating patterns are observed. FA in vertical bars was markedly lower in structured populations than in parental and unstructured hybrid populations. There was no difference in FA between parentals and hybrids, suggesting that hybridisation does not directly affect FA. Rather, variation in FA likely results from contrasting mating patterns in unstructured and structured populations.
An effective lattice Boltzmann flux solver on arbitrarily unstructured meshes
NASA Astrophysics Data System (ADS)
Wu, Qi-Feng; Shu, Chang; Wang, Yan; Yang, Li-Ming
2018-05-01
The recently proposed lattice Boltzmann flux solver (LBFS) is a new approach for the simulation of incompressible flow problems. It applies the finite volume method (FVM) to discretize the governing equations, and the flux at the cell interface is evaluated by local reconstruction of lattice Boltzmann solution from macroscopic flow variables at cell centers. In the previous application of the LBFS, the structured meshes have been commonly employed, which may cause inconvenience for problems with complex geometries. In this paper, the LBFS is extended to arbitrarily unstructured meshes for effective simulation of incompressible flows. Two test cases, the lid-driven flow in a triangular cavity and flow around a circular cylinder, are carried out for validation. The obtained results are compared with the data available in the literature. Good agreement has been achieved, which demonstrates the effectiveness and reliability of the LBFS in simulating flows on arbitrarily unstructured meshes.
Workspace Safe Operation of a Force- or Impedance-Controlled Robot
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Strawser, Philip A. (Inventor); Yamokoski, John D. (Inventor)
2013-01-01
A method of controlling a robotic manipulator of a force- or impedance-controlled robot within an unstructured workspace includes imposing a saturation limit on a static force applied by the manipulator to its surrounding environment, and may include determining a contact force between the manipulator and an object in the unstructured workspace, and executing a dynamic reflex when the contact force exceeds a threshold to thereby alleviate an inertial impulse not addressed by the saturation limited static force. The method may include calculating a required reflex torque to be imparted by a joint actuator to a robotic joint. A robotic system includes a robotic manipulator having an unstructured workspace and a controller that is electrically connected to the manipulator, and which controls the manipulator using force- or impedance-based commands. The controller, which is also disclosed herein, automatically imposes the saturation limit and may execute the dynamic reflex noted above.
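The two safeguards described — a saturation limit on the commanded static force, and a dynamic reflex when the measured contact force exceeds a threshold — can be sketched as follows. The numeric limits and the Jacobian-transpose mapping from Cartesian force to joint torque are illustrative assumptions for this sketch, not the patented control law:

```python
import numpy as np

F_SAT = 40.0           # saturation limit on commanded static force [N] (illustrative)
F_CONTACT_MAX = 60.0   # contact-force threshold that triggers the reflex [N]

def saturate_force(f_cmd):
    """Clamp the commanded static force so its magnitude never
    exceeds the saturation limit."""
    norm = np.linalg.norm(f_cmd)
    return f_cmd if norm <= F_SAT else f_cmd * (F_SAT / norm)

def reflex_torque(J, f_contact):
    """Joint torques opposing an excessive measured contact force,
    mapped through the manipulator Jacobian transpose; zero below
    the trigger threshold."""
    if np.linalg.norm(f_contact) <= F_CONTACT_MAX:
        return np.zeros(J.shape[1])
    return -J.T @ f_contact
```

The saturation bounds quasi-static loading, while the reflex handles the inertial impulse of an unexpected collision that the static limit alone cannot address.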
Numerical approach for unstructured quantum key distribution
Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert
2016-01-01
Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study 'unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739
Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2015-01-01
An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
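A user-defined mesh-sizing function of the kind mentioned can be as simple as a smooth blend between a fine target size near a region of interest and a coarse background size. The sizes, center, and Gaussian blend below are invented for illustration (a real sizing function would use proper ellipsoidal distances and refinement criteria):

```python
import numpy as np

def sizing(lat, lon, h_coarse=120e3, h_fine=25e3,
           lat0=0.0, lon0=160.0, radius=15.0):
    """Smoothly graded mesh-sizing function [m]: fine near a target
    region, blending back to the coarse background size."""
    d = np.hypot(lat - lat0, lon - lon0)   # crude angular distance [deg]
    w = np.exp(-(d / radius) ** 2)
    return h_fine * w + h_coarse * (1.0 - w)
```

A Frontal-Delaunay refinement pass would then insert points so that local element size tracks this function, giving the smooth grading the abstract describes.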
Assessment of the Unstructured Grid Software TetrUSS for Drag Prediction of the DLR-F4 Configuration
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Frink, Neal T.
2002-01-01
An application of the NASA unstructured grid software system TetrUSS is presented for the prediction of aerodynamic drag on a transport configuration. The paper briefly describes the underlying methodology and summarizes the results obtained on the DLR-F4 transport configuration recently presented in the first AIAA computational fluid dynamics (CFD) Drag Prediction Workshop. TetrUSS is a suite of loosely coupled unstructured grid CFD codes developed at the NASA Langley Research Center. The meshing approach is based on the advancing-front and the advancing-layers procedures. The flow solver employs a cell-centered, finite volume scheme for solving the Reynolds Averaged Navier-Stokes equations on tetrahedral grids. For the present computations, flow in the viscous sublayer has been modeled with an analytical wall function. The emphasis of the paper is placed on the practicality of the methodology for accurately predicting aerodynamic drag data.
NASA Astrophysics Data System (ADS)
Lv, X.; Zhao, Y.; Huang, X. Y.; Xia, G. H.; Su, X. H.
2007-07-01
A new three-dimensional (3D) matrix-free implicit unstructured multigrid finite volume (FV) solver for structural dynamics is presented in this paper. The solver is first validated using classical 2D and 3D cantilever problems. It is shown that very accurate predictions of the fundamental natural frequencies of the problems can be obtained by the solver, with fast convergence rates. This method has been integrated into our existing FV compressible solver [X. Lv, Y. Zhao, et al., An efficient parallel/unstructured-multigrid preconditioned implicit method for simulating 3D unsteady compressible flows with moving objects, Journal of Computational Physics 215(2) (2006) 661-690], which is based on the immersed membrane method (IMM) described in the same reference. Results for the interaction between the fluid and an immersed fixed-free cantilever are also presented to demonstrate the potential of this integrated fluid-structure interaction approach.
Electronic Health Records Data and Metadata: Challenges for Big Data in the United States.
Sweet, Lauren E; Moulaison, Heather Lea
2013-12-01
This article, written by researchers studying metadata and standards, represents a fresh perspective on the challenges of electronic health records (EHRs) and serves as a primer for big data researchers new to health-related issues. Primarily, we argue for the importance of the systematic adoption of standards in EHR data and metadata as a way of promoting big data research and benefiting patients. EHRs have the potential to include a vast amount of longitudinal health data, and metadata provides the formal structures to govern that data. In the United States, electronic medical records (EMRs) are part of the larger EHR. EHR data is submitted by a variety of clinical data providers and potentially by the patients themselves. Because data input practices are not necessarily standardized, and because of the multiplicity of current standards, basic interoperability in EHRs is hindered. Some of the issues with EHR interoperability stem from the complexities of the data they include, which can be both structured and unstructured. A number of controlled vocabularies are available to data providers. The continuity of care document standard will provide interoperability in the United States between the EMR and the larger EHR, potentially making data input by providers directly available to other providers. The data involved is nonetheless messy. In particular, the use of competing vocabularies such as the Systematized Nomenclature of Medicine-Clinical Terms, MEDCIN, and locally created vocabularies inhibits large-scale interoperability for structured portions of the records, and unstructured portions, although potentially not machine readable, remain essential. Once EMRs for patients are brought together as EHRs, the EHRs must be managed and stored. Adequate documentation should be created and maintained to assure the secure and accurate use of EHR data. There are currently a few notable international standards initiatives for EHRs. 
Organizations such as Health Level Seven International and Clinical Data Interchange Standards Consortium are developing and overseeing implementation of interoperability standards. Denmark and Singapore are two countries that have successfully implemented national EHR systems. Future work in electronic health information initiatives should underscore the importance of standards and reinforce interoperability of EHRs for big data research and for the sake of patients.
Abdellah, A M; Balla, Q I
2013-11-15
Due to rapid urbanization in Khartoum State, Domestic Solid Waste (DSW) management remains the biggest concern, one that recurrently attracts the attention of the responsible authorities and stakeholders. Sharg El Neel, one of the seven localities comprising the state, was chosen for a study of DSW management efficiency. Data were collected using a package of techniques, one of which was interviews based on structured and unstructured questions directed mainly at relevant persons, i.e., householders and government employees directly engaged in DSW management operations. The main finding of this study was that local authorities lack the capacities necessary to handle the immense problems of DSW management. Shortages of funds, an inadequate number of workers, a lack of transport and facilities, and weak attitudes among respondents were found to be among the factors hindering DSW management. Accordingly, proper scheduling and timing, well-trained public health officers and sanitary overseers, and a strict, sustainable program for controlling flies, rodents, cockroaches and other disease vectors are essential to managing DSW properly. Otherwise, the problems resulting from solid waste generation in the study area will be magnified and the surrounding environment will certainly deteriorate.
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
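The three WME stages — WEF filtering of engine results, DSP segmentation from HTML layout cues, and HIES heuristic extraction — can be mimicked in miniature. The regexes, scoring, and sample page below are invented for illustration; the actual WME components are far richer:

```python
import re

def filter_results(results, domain_terms):
    """WEF-like stage: rank results by domain-term hits, drop the rest."""
    scored = [(sum(t in r.lower() for t in domain_terms), r) for r in results]
    return [r for s, r in sorted(scored, reverse=True) if s > 0]

def segment(html):
    """DSP-like stage: use layout cues (here, <p> tags) to collect segments."""
    return re.findall(r"<p>(.*?)</p>", html, flags=re.S)

def extract(segments, pattern):
    """HIES-like stage: heuristic extraction via a domain pattern."""
    return [m.group(1) for s in segments for m in re.finditer(pattern, s)]

page = "<p>Price: $42</p><p>Unrelated text</p><p>Price: $7</p>"
prices = extract(segment(page), r"\$(\d+)")
```

A generator in the spirit of WMEG would let a user supply only the domain terms, layout cues, and extraction patterns, then assemble a pipeline like this automatically.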
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. The Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. The unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
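The property that drives the savings — the low-fidelity model must predict high-fidelity improvement trends — is typically enforced with a first-order additive correction, so that the corrected model matches the high-fidelity value and gradient at the current iterate. A one-variable sketch, in which a quartic/quadratic pair stands in for the RANS/Euler evaluations:

```python
def corrected_lofi(f_hi, g_hi, f_lo, g_lo, x0):
    """Additive first-order correction: the corrected low-fidelity
    model matches the high-fidelity value and gradient at x0."""
    fh, gh = f_hi(x0), g_hi(x0)
    fl, gl = f_lo(x0), g_lo(x0)
    return lambda x: f_lo(x) + (fh - fl) + (gh - gl) * (x - x0)

f_hi = lambda x: x**4          # stand-in for the RANS evaluation
g_hi = lambda x: 4 * x**3
f_lo = lambda x: x**2          # stand-in for the Euler evaluation
g_lo = lambda x: 2 * x

m = corrected_lofi(f_hi, g_hi, f_lo, g_lo, x0=1.0)
```

An optimizer can then take trust-region steps on the cheap corrected model `m`, re-anchoring it with a fresh high-fidelity evaluation only when the trust region moves, which is where the fivefold savings come from.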
Operational tsunami modeling with TsunAWI - Examples for Indonesia and Chile
NASA Astrophysics Data System (ADS)
Rakowsky, Natalja; Androsov, Alexey; Harig, Sven; Immerz, Antonia; Fuchs, Annika; Behrens, Jörn; Danilov, Sergey; Hiller, Wolfgang; Schröter, Jens
2014-05-01
The numerical simulation code TsunAWI was developed in the framework of the German-Indonesian Tsunami Early Warning System (GITEWS). The numerical simulation of prototypical tsunami scenarios plays a decisive role in the a priori risk assessment for coastal regions and in the early warning process itself. TsunAWI is based on a finite element discretization, employs unstructured grids with high resolution along the coast, and includes inundation. This contribution gives an overview of the model itself and presents two applications. For GITEWS, the existing scenario database covering 528 epicenters / 3450 scenarios from Sumatra to Bali was extended by 187 epicenters / 1100 scenarios in the Eastern Sunda Arc. Furthermore, about 1100 scenarios for the Western Sunda Arc were recomputed on the new model domain covering the whole Indonesian Seas. These computations would not have been feasible at the beginning of the project. The unstructured computational grid contains 7 million nodes and resolves all coastal regions with 150m, some project regions and the surrounding of tide gauges with 50m, and the deep ocean with 12km edge length. While in the Western Sunda Arc, the large islands of Sumatra and Java shield the Northern Indonesian Archipelago, tsunamis in the Eastern Sunda Arc can propagate to the North. The unstructured grid approach allows TsunAWI to easily simulate the complex propagation patterns with the self-interactions and the reflections at the coastal regions of myriad islands. For the Hydrographic and Oceanographic Service of the Chilean Navy (SHOA), we calculated a small scenario database of 100 scenarios (sources by Universidad de Chile) to provide data for a lightweight decision support system prototype (built by DLR). This work is part of the initiation project "Multi hazard information and early warning system in cooperation with Chile" and aims at sharing our experience from GITEWS with the Chilean partners.
A Hybrid P2P Overlay Network for Non-strictly Hierarchically Categorized Content
NASA Astrophysics Data System (ADS)
Wan, Yi; Asaka, Takuya; Takahashi, Tatsuro
In P2P content distribution systems, there are many cases in which the content can be classified into hierarchically organized categories. In this paper, we propose a hybrid overlay network design suitable for such content, called Pastry/NSHCC (Pastry for Non-Strictly Hierarchically Categorized Content). The semantic information in the content's classification hierarchies can be utilized regardless of whether they form a strict tree structure. By doing so, the search scope can be restricted to any granularity, and the number of query messages decreases while keyword-search availability is maintained. Through simulation, we show that the proposed method provides better performance and lower overhead than unstructured overlays exploiting the same semantic information.
Semantic Technologies for Re-Use of Clinical Routine Data.
Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan
2017-01-01
Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.
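The mapping step — free-text mentions onto a standardized vocabulary within an information model — can be caricatured with a tiny lexicon lookup. The surface forms and concept codes below are hypothetical placeholders, not the SNOMED-style vocabulary or the SAP HANA pipeline used in the paper:

```python
import re

# illustrative mini-lexicon; surface forms and concept codes are made up
LEXICON = {
    "myocardial infarction": "C001",
    "heart attack": "C001",      # synonym normalized to the same concept
    "diabetes mellitus": "C002",
}

def annotate(note):
    """Map free-text mentions onto standardized concept codes."""
    note_lc = note.lower()
    return [(term, code) for term, code in LEXICON.items()
            if re.search(r"\b" + re.escape(term) + r"\b", note_lc)]

annotate("Pt with prior heart attack; diabetes mellitus type 2.")
```

Normalizing synonyms to one concept code is what makes the resulting patient profiles queryable: "heart attack" and "myocardial infarction" become the same searchable fact.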
Corazza, Ornella; Bersani, Francesco Saverio; Brunoro, Roberto; Valeriani, Giuseppe; Martinotti, Giovanni; Schifano, Fabrizio
2014-12-01
Performance and image-enhancing drugs (PIEDs), also known as "lifestyle drugs," are increasingly sold on the Internet to enhance cognitive as well as sexual, muscular, attentive, and other natural capacities. Our analysis focuses on the misuse of the cognitive enhancer piracetam. A literature review was carried out in the PsycINFO and PubMed databases. Given the absence of peer-reviewed data, a review of additional sources of unstructured information from the Internet was carried out between February 2012 and July 2013. Additional searches were conducted using the Global Public Health Intelligence Network (GPHIN), a secure Internet-based early warning system developed by Health Canada and the World Health Organization (WHO), which monitors media reports in six languages: Arabic, Chinese, English, French, Russian, and Spanish. Piracetam is sold via illicit online pharmacies at low prices with no need for a prescription. Buyers, mainly healthy individuals, purchase the product to enhance study- and work-related performance as well as for recreational purposes. Its nonmedical use is often associated with side effects such as hallucinations, psychomotor agitation, dysphoria, tiredness, dizziness, memory loss, headache, and severe diarrhoea; moreover, several users reported feeling neither cognitive improvement nor psychedelic effects. This is a new and fast-growing trend of abuse that needs to be extensively monitored and studied, including through near-real-time and unstructured sources of information such as Internet news and online reports, in order to acquire rapid knowledge and understanding. Products sold online might be counterfeit, which further increases the related health risks.
Brinker, Titus Josef; Rudolph, Stefanie; Richter, Daniela; von Kalle, Christof
2018-05-11
This article describes the DataBox project which offers a perspective of a new health data management solution in Germany. DataBox was initially conceptualized as a repository of individual lung cancer patient data (structured and unstructured). The patient is the owner of the data and is able to share his or her data with different stakeholders. Data is transferred, displayed, and stored online, but not archived. In the long run, the project aims at replacing the conventional method of paper- and storage-device-based handling of data for all patients in Germany, leading to better organization and availability of data which reduces duplicate diagnostic procedures, treatment errors, and enables the training as well as usage of artificial intelligence algorithms on large datasets. ©Titus Josef Brinker, Stefanie Rudolph, Daniela Richter, Christof von Kalle. Originally published in JMIR Cancer (http://cancer.jmir.org), 11.05.2018.
Third-order accurate conservative method on unstructured meshes for gasdynamic simulations
NASA Astrophysics Data System (ADS)
Shirobokov, D. A.
2017-04-01
A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-07-27
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
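In one dimension, a point-centered diffusion discretization reduces to the familiar three-point stencil; the following sketch solves a steady diffusion problem that way. This is a 1D stand-in for intuition only, not the KULL polyhedral-mesh discretization:

```python
import numpy as np

def diffusion_1d(n=50, L=1.0, D=1.0, q=1.0):
    """Solve -D u'' = q on (0, L) with u(0) = u(L) = 0, using the
    standard point-centered 3-point stencil on n interior nodes."""
    h = L / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) * D / h**2
    return np.linalg.solve(A, np.full(n, q))

u = diffusion_1d()
```

The exact solution is the parabola u(x) = q x (L - x) / (2D); the 3-point stencil reproduces it at the nodes, which makes this a convenient sanity test before moving to unstructured polyhedral meshes where the stencil construction is the hard part.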
Unstructured 3D Delaunay mesh generation applied to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Blake, Kenneth R.; Spragle, Gregory S.
1993-01-01
Technical issues associated with domain-tessellation production, including initial boundary node triangulation and volume mesh refinement, are presented for the 'TGrid' 3D Delaunay unstructured grid generation program. The approach employed is noted to be capable of preserving predefined triangular surface facets in the final tessellation. The capabilities of the approach are demonstrated by generating grids about an entire fighter aircraft configuration, a train, and a wind tunnel model of an automobile.
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
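As an illustration of the monotone linear reconstruction mentioned above, the following sketch applies a minmod-limited linear reconstruction in one dimension. The function names and the test case are hypothetical; the paper's procedures are multidimensional and operate on unstructured meshes.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller-magnitude slope when
    the signs agree, zero otherwise, preserving monotonicity."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(u):
    """Limited linear reconstruction from cell averages u: for each
    interior cell, extrapolate to its right and left faces."""
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited slope per cell
    right_face = u[1:-1] + 0.5 * slope
    left_face = u[1:-1] - 0.5 * slope
    return right_face, left_face

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # step: the limiter must not overshoot
right_face, left_face = reconstruct(u)
print(right_face, left_face)
```

At the discontinuity the limiter returns zero slope, so the reconstructed states stay within the original data range: that is the monotonicity property that makes the second-order scheme non-oscillatory.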
Jonnagaddala, Jitendra; Liaw, Siaw-Teng; Ray, Pradeep; Kumar, Manish; Chang, Nai-Wen; Dai, Hong-Jie
2015-12-01
Coronary artery disease (CAD) often leads to myocardial infarction, which may be fatal. Risk factors can be used to predict CAD, which may subsequently lead to prevention or early intervention. Patient data such as co-morbidities, medication history, social history and family history are required to determine the risk factors for a disease. However, risk factor data are usually embedded in unstructured clinical narratives if the data is not collected specifically for risk assessment purposes. Clinical text mining can be used to extract data related to risk factors from unstructured clinical notes. This study presents methods to extract Framingham risk factors from unstructured electronic health records using clinical text mining and to calculate 10-year coronary artery disease risk scores in a cohort of diabetic patients. We developed a rule-based system to extract risk factors: age, gender, total cholesterol, HDL-C, blood pressure, diabetes history and smoking history. The results showed that the output from the text mining system was reliable, but there was a significant amount of missing data to calculate the Framingham risk score. A systematic approach for understanding missing data was followed by implementation of imputation strategies. An analysis of the 10-year Framingham risk scores for coronary artery disease in this cohort has shown that the majority of the diabetic patients are at moderate risk of CAD. Copyright © 2015 Elsevier Inc. All rights reserved.
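A minimal sketch of rule-based risk-factor extraction in the spirit of the study; the regular expressions, variable names, and sample note below are illustrative assumptions, not the authors' actual rules or note formats.

```python
import re

# Hypothetical extraction rules; the real system's patterns are not given
# in the abstract.
RULES = {
    "smoking": re.compile(r"\b(current|ex|former)[- ]?smoker|\bsmokes\b", re.I),
    "diabetes": re.compile(r"\b(type\s*[12]\s*)?diabet(es|ic)\b", re.I),
    "total_cholesterol": re.compile(r"total cholesterol[:\s]+(\d+(?:\.\d+)?)", re.I),
    "blood_pressure": re.compile(r"\bBP[:\s]+(\d{2,3})/(\d{2,3})", re.I),
}

def extract_risk_factors(note):
    """Return the Framingham-style risk factors found in one clinical note."""
    found = {}
    for name, pattern in RULES.items():
        m = pattern.search(note)
        if m:
            found[name] = m.groups()
    return found

note = "68yo male, ex-smoker, type 2 diabetic. BP: 142/88, total cholesterol: 210."
print(extract_risk_factors(note))
```

Any risk factor absent from the dictionary of matches would be exactly the kind of missing data the study then handles with imputation strategies.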
Simulating hydrodynamics and ice cover in Lake Erie using an unstructured grid model
NASA Astrophysics Data System (ADS)
Fujisaki-Manome, A.; Wang, J.
2016-02-01
An unstructured-grid Finite-Volume Coastal Ocean Model (FVCOM) is applied to Lake Erie to simulate seasonal ice cover. The model is coupled with an unstructured-grid, finite-volume version of the Los Alamos Sea Ice Model (UG-CICE). We replaced the original two-time-step forward Euler time-integration scheme with the central-difference (i.e., leapfrog) scheme to ensure neutral inertial stability. The modified version of FVCOM coupled with the ice model is applied to this shallow freshwater lake using unstructured grids to represent the complicated coastline of the Laurentian Great Lakes and to refine the spatial resolution locally. We conducted multi-year simulations of Lake Erie from 2002 to 2013. The results were compared with the observed ice extent, water surface temperature, ice thickness, currents, and water temperature profiles. Seasonal and interannual variations of ice extent and water temperature were captured reasonably well, while the modeled thermocline was somewhat diffusive. The modeled ice thickness tended to be systematically thinner than the observed values. The modeled lake currents compared well with measurements obtained from an Acoustic Doppler Current Profiler located in the deep part of the lake, whereas the simulated currents deviated from measurements near the surface, possibly due to the model's inability to reproduce the sharp thermocline during the summer and the lack of detailed representation of offshore wind fields in the interpolated meteorological forcing.
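The switch from forward Euler to leapfrog matters because the leapfrog (central-difference) scheme is neutrally stable for oscillatory dynamics while forward Euler amplifies them. This toy comparison on the test equation dz/dt = i*omega*z illustrates the point; it is a sketch, not the FVCOM implementation.

```python
import numpy as np

def integrate(omega, dt, steps, scheme):
    """Integrate dz/dt = i*omega*z and return |z|; the exact solution
    keeps |z| = 1, so amplitude growth exposes an unstable scheme."""
    z = 1.0 + 0.0j
    if scheme == "euler":
        for _ in range(steps):
            z = z + dt * 1j * omega * z          # forward Euler: amplifying
    else:  # leapfrog (central difference in time): neutrally stable
        z_prev = z
        z = z * np.exp(1j * omega * dt)          # exact value for the first step
        for _ in range(steps - 1):
            z, z_prev = z_prev + 2.0 * dt * 1j * omega * z, z
    return abs(z)

dt, omega, steps = 0.01, 1.0, 5000
amp_euler = integrate(omega, dt, steps, "euler")
amp_leap = integrate(omega, dt, steps, "leapfrog")
print(amp_euler, amp_leap)
```

After 5000 steps the forward Euler amplitude has grown noticeably while the leapfrog amplitude stays at unity, which is why the leapfrog scheme preserves inertial oscillations in the lake model.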
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence to the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One is to use unstructured grids; the other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are invariably used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative, and dispersion and dissipation errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and stability can be improved by adopting an upwinding strategy.
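The dispersion error that DRP schemes are designed to minimize can be seen by comparing a stencil's modified (numerical) wavenumber with the exact one in wave number space. The sketch below does this for standard central differences; the helper function and coefficients are illustrative, not the project's actual DRP scheme.

```python
import numpy as np

def modified_wavenumber(k_dx, coeffs):
    """Effective (scaled) wavenumber of an antisymmetric finite-difference
    stencil du/dx ~ (1/dx) * sum_j a_j (u_{i+j} - u_{i-j}); coeffs maps the
    positive offset j to its coefficient a_j. A perfect scheme returns k_dx."""
    return 2.0 * sum(a * np.sin(j * k_dx) for j, a in coeffs.items())

# standard 2nd- and 4th-order central differences
cd2 = {1: 0.5}
cd4 = {1: 2.0 / 3.0, 2: -1.0 / 12.0}
k_dx = 1.0  # a moderately resolved wave (~6 points per wavelength)
err2 = abs(modified_wavenumber(k_dx, cd2) - k_dx)
err4 = abs(modified_wavenumber(k_dx, cd4) - k_dx)
print(err2, err4)
```

A DRP scheme chooses the coefficients to minimize this mismatch over a band of wavenumbers rather than to maximize formal order, which is why it propagates acoustic waves with less dispersion error than a conventional low order scheme.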
Transonic Drag Prediction Using an Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Levy, David W.
2001-01-01
This paper summarizes the results obtained with the NSU-3D unstructured multigrid solver for the AIAA Drag Prediction Workshop held in Anaheim, CA, June 2001. The test case for the workshop consists of a wing-body configuration at transonic flow conditions. Flow analyses for a complete test matrix of lift coefficient values and Mach numbers at a constant Reynolds number are performed, thus producing a set of drag polars and drag rise curves which are compared with experimental data. Results were obtained independently by both authors using an identical baseline grid and different refined grids. Most cases were run in parallel on commodity cluster-type machines while the largest cases were run on an SGI Origin machine using 128 processors. The objective of this paper is to study the accuracy of the subject unstructured grid solver for predicting drag in the transonic cruise regime, to assess the efficiency of the method in terms of convergence, CPU time, and memory, and to determine the effects of grid resolution on this predictive ability and its computational efficiency. A good predictive ability is demonstrated over a wide range of conditions, although accuracy was found to degrade for cases at higher Mach numbers and lift values where increasing amounts of flow separation occur. The ability to rapidly compute large numbers of cases at varying flow conditions using an unstructured solver on inexpensive clusters of commodity computers is also demonstrated.
The Tera Multithreaded Architecture and Unstructured Meshes
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Mavriplis, Dimitri J.
1998-01-01
The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design, and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized, and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2-processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine) running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to data partitioning or placement issues, which would be of paramount importance in other parallel architectures.
Recognition and characterization of unstructured environmental sounds
NASA Astrophysics Data System (ADS)
Chu, Selina
2011-12-01
Environmental sounds are what we hear every day, or more generally the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research focuses on investigating the challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features suited to such audio data. We begin with a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information.
In my initial investigation into the feasibility of designing an automatic environment recognition system using audio information, I found that traditional recognition and feature extraction techniques for audio were not suitable for environmental sound: such sounds lack the formantic and harmonic structures found in speech and music, dispelling the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environment sounds span a large variety of sounds, which are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g., energy or zero-crossing rate. Due to this lack of features appropriate for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound; we call these MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and they can be advantageous when combined with MFCCs to improve overall performance. The third component is an investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I wanted a way to obtain a general model for any particular environment: once we have an idea of the background, we can identify foreground events even if we haven't seen those events before. Therefore, the next step was to investigate learning the audio background model for each environment type, despite the occurrences of different foreground events.
In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and persistent characteristics of the environment. This approach can model the background and detect foreground events, and it can also verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data. The final components of my thesis involve learning sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy and contains relatively large amounts of overlapping events between different environments. Environmental sounds exhibit large variances even within a single environment type, and frequently there are no clear boundaries between some types. Traditional methods of classification are generally not robust enough to handle classes with such overlaps; this audio therefore requires representation by complex models. Deep learning architectures provide a way to obtain a generative, model-based method for classification. Specifically, I considered the use of Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability to noisy data for improved robustness and generalization. A framework was proposed using composite DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy across a large number of environmental sound types.
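The matching pursuit step behind the MP-features can be sketched as a greedy decomposition of a signal against a dictionary of unit-norm atoms. The random dictionary below stands in for the Gabor-style atoms actually used in such work; the function and variable names are assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the (unit-norm) dictionary
    atom most correlated with the residual and subtract its projection.
    Returns (atom_index, coefficient) pairs plus the final residual."""
    residual = signal.astype(float).copy()
    features = []
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlation with each atom
        idx = int(np.argmax(np.abs(corr)))
        coef = float(corr[idx])
        residual -= coef * dictionary[idx]    # remove that atom's contribution
        features.append((idx, coef))
    return features, residual

# toy dictionary of unit-norm atoms (rows) in place of real Gabor atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
sig = 3.0 * D[5] - 2.0 * D[17]                # signal built from two atoms
feats, res = matching_pursuit(sig, D, n_atoms=4)
print(feats)
```

The selected (index, coefficient) pairs are the kind of sparse, structure-revealing description that the thesis turns into MP-features.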
NMR relaxation studies on the hydrate layer of intrinsically unstructured proteins.
Bokor, Mónika; Csizmók, Veronika; Kovács, Dénes; Bánki, Péter; Friedrich, Peter; Tompa, Peter; Tompa, Kálmán
2005-03-01
Intrinsically unstructured/disordered proteins (IUPs) exist in a disordered and largely solvent-exposed, still functional, structural state under physiological conditions. As their function is often directly linked with structural disorder, understanding their structure-function relationship in detail is a great challenge to structural biology. In particular, their hydration and residual structure, both closely linked with their mechanism of action, require close attention. Here we demonstrate that the hydration of IUPs can be adequately approached by a technique so far unexplored with respect to IUPs, solid-state NMR relaxation measurements. This technique provides quantitative information on various features of hydrate water bound to these proteins. By freezing nonhydrate (bulk) water out, we have been able to measure free induction decays pertaining to protons of bound water from which the amount of hydrate water, its activation energy, and correlation times could be calculated. Thus, for three IUPs, the first inhibitory domain of calpastatin, microtubule-associated protein 2c, and plant dehydrin early responsive to dehydration 10, we demonstrate that they bind a significantly larger amount of water than globular proteins, whereas their suboptimal hydration and relaxation parameters are correlated with their differing modes of function. The theoretical treatment and experimental approach presented in this article may have general utility in characterizing proteins that belong to this novel structural class.
Lin, Frank Po-Yen; Pokorny, Adrian; Teng, Christina; Epstein, Richard J
2017-07-31
Vast amounts of clinically relevant text-based variables lie undiscovered and unexploited in electronic medical records (EMR). To exploit this untapped resource, and thus facilitate the discovery of informative covariates from unstructured clinical narratives, we have built a novel computational pipeline termed Text-based Exploratory Pattern Analyser for Prognosticator and Associator discovery (TEPAPA). This pipeline combines semantic-free natural language processing (NLP), regular expression induction, and statistical association testing to identify conserved text patterns associated with outcome variables of clinical interest. When we applied TEPAPA to a cohort of head and neck squamous cell carcinoma patients, plausible concepts known to be correlated with human papilloma virus (HPV) status were identified from the EMR text, including site of primary disease, tumour stage, pathologic characteristics, and treatment modalities. Similarly, correlates of other variables (including gender, nodal status, recurrent disease, smoking and alcohol status) were also reliably recovered. Using highly-associated patterns as covariates, a patient's HPV status was classifiable using a bootstrap analysis with a mean area under the ROC curve of 0.861, suggesting its predictive utility in supporting EMR-based phenotyping tasks. These data support using this integrative approach to efficiently identify disease-associated factors from unstructured EMR narratives, and thus to efficiently generate testable hypotheses.
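The association-testing step can be illustrated with a toy sketch: tabulate a candidate text pattern against a binary outcome and compute an odds ratio. The pattern, documents, and labels below are hypothetical; TEPAPA itself induces regular-expression patterns automatically and applies formal significance tests and bootstrap analysis.

```python
import re
from math import inf

def pattern_association(pattern, documents, outcomes):
    """Build the 2x2 contingency table of a text pattern vs. a binary
    outcome and return it with the odds ratio (inf if a cell is zero)."""
    rx = re.compile(pattern, re.I)
    a = b = c = d = 0
    for doc, y in zip(documents, outcomes):
        hit = bool(rx.search(doc))
        if hit and y:
            a += 1          # pattern present, outcome positive
        elif hit:
            b += 1          # pattern present, outcome negative
        elif y:
            c += 1          # pattern absent, outcome positive
        else:
            d += 1          # pattern absent, outcome negative
    odds_ratio = (a * d) / (b * c) if b * c else inf
    return (a, b, c, d), odds_ratio

# hypothetical mini-corpus with HPV status labels
docs = ["tonsil primary, p16 positive", "oral tongue primary",
        "base of tongue, p16 positive", "larynx primary, heavy smoker"]
hpv = [1, 0, 1, 0]
table, odds = pattern_association(r"p16 positive", docs, hpv)
print(table, odds)
```

Patterns whose tables show strong, significant skew become the covariates used for the downstream classification step.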
A positional estimation technique for an autonomous land vehicle in an unstructured environment
NASA Technical Reports Server (NTRS)
Talluri, Raj; Aggarwal, J. K.
1990-01-01
This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
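The search strategy above can be sketched as follows: prune DEM cells using the measured robot elevation, then score the surviving candidates by the mismatch between a predicted and the observed horizon profile. The toy horizon model below is an assumption standing in for the paper's perspective-projection camera geometry.

```python
import numpy as np

def predict_horizon(dem, r, c):
    """Toy horizon model: highest elevation visible along each of the
    four cardinal directions from cell (r, c)."""
    return np.array([dem[:r + 1, c].max(), dem[r:, c].max(),
                     dem[r, :c + 1].max(), dem[r, c:].max()])

def locate(dem, observed_horizon, robot_elev, tol=5.0):
    """Heuristic search over DEM cells for the robot position: prune cells
    whose elevation disagrees with the measured robot elevation, then pick
    the cell whose predicted horizon best matches the observed one."""
    best, best_score = None, np.inf
    rows, cols = dem.shape
    for r in range(rows):
        for c in range(cols):
            if abs(dem[r, c] - robot_elev) > tol:   # elevation pruning heuristic
                continue
            predicted = predict_horizon(dem, r, c)
            score = float(np.sum((predicted - observed_horizon) ** 2))
            if score < best_score:
                best, best_score = (r, c), score
    return best

dem = np.array([[10., 12., 30.],
                [11., 10., 28.],
                [ 9., 11., 27.]])
obs = predict_horizon(dem, 1, 1)          # "observed" horizon at the true position
print(locate(dem, obs, robot_elev=10.0))
```

The elevation check discards most of the DEM before any horizon comparison is done, which is the role the paper's geometric pruning heuristics play at much larger scale.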
Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text
NASA Astrophysics Data System (ADS)
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
2015-12-01
We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and in intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, plain text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records.
Our investigation led us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We will describe our experience and the implementation of our system and share lessons learned from our development. We will also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
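A minimal sketch of the highlighting step described above: dictionary-based term matching that wraps recognized concepts in labeled tags. The term list and tag format are illustrative assumptions; the actual system resolves concepts through cTAKES and UMLS rather than a flat dictionary.

```python
import re

# Toy term dictionary; the real system maps spans to UMLS concepts via cTAKES.
TERMS = {
    "myocardial infarction": "Disease",
    "aspirin": "Drug",
    "left ventricle": "Anatomy",
    "dyspnea": "Symptom",
}

def highlight(text):
    """Wrap each known term in a <mark> tag labeled with its concept type.
    Longer terms are substituted first so multi-word concepts win."""
    for term, kind in sorted(TERMS.items(), key=lambda t: -len(t[0])):
        text = re.sub(rf"\b{re.escape(term)}\b",
                      lambda m: f'<mark class="{kind}">{m.group(0)}</mark>',
                      text, flags=re.I)
    return text

doc = "Patient with dyspnea and prior myocardial infarction, on aspirin."
print(highlight(doc))
```

In the browser, such tags are what let the reader style symptoms, diseases, drugs, and anatomical sites differently and link them out to public and private databases.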
NASA Technical Reports Server (NTRS)
Newman, James C., III
1995-01-01
The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamic (CFD) methodologies that are based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated. These are namely the structured-overlapped and the unstructured grid schemes. Both methods use a cell centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles of attack for the pitching airfoil.
From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU efficient than the unstructured approach.
Panday, Sorab; Langevin, Christian D.; Niswonger, Richard G.; Ibaraki, Motomu; Hughes, Joseph D.
2013-01-01
A new version of MODFLOW, called MODFLOW–USG (for UnStructured Grid), was developed to support a wide variety of structured and unstructured grid types, including nested grids and grids based on prismatic triangles, rectangles, hexagons, and other cell shapes. Flexibility in grid design can be used to focus resolution along rivers and around wells, for example, or to subdiscretize individual layers to better represent hydrostratigraphic units. MODFLOW–USG is based on an underlying control volume finite difference (CVFD) formulation in which a cell can be connected to an arbitrary number of adjacent cells. To improve accuracy of the CVFD formulation for irregular grid-cell geometries or nested grids, a generalized Ghost Node Correction (GNC) Package was developed, which uses interpolated heads in the flow calculation between adjacent connected cells. MODFLOW–USG includes a Groundwater Flow (GWF) Process, based on the GWF Process in MODFLOW–2005, as well as a new Connected Linear Network (CLN) Process to simulate the effects of multi-node wells, karst conduits, and tile drains, for example. The CLN Process is tightly coupled with the GWF Process in that the equations from both processes are formulated into one matrix equation and solved simultaneously. This robustness results from using an unstructured grid with unstructured matrix storage and solution schemes. MODFLOW–USG also contains an optional Newton-Raphson formulation, based on the formulation in MODFLOW–NWT, for improving solution convergence and avoiding problems with the drying and rewetting of cells. Because the existing MODFLOW solvers were developed for structured and symmetric matrices, they were replaced with a new Sparse Matrix Solver (SMS) Package developed specifically for MODFLOW–USG. 
The SMS Package provides several methods for resolving nonlinearities and multiple symmetric and asymmetric linear solution schemes to solve the matrix arising from the flow equations and the Newton-Raphson formulation, respectively.
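The CVFD idea described above, in which each cell balances conductance-weighted head differences with an arbitrary number of connected neighbors, can be sketched on a tiny example. The function and the 4-cell network below are illustrative only, not MODFLOW-USG code.

```python
import numpy as np

def solve_heads(n, connections, fixed):
    """Assemble and solve a control-volume finite-difference system: each
    cell's conductance-weighted head differences to its neighbors sum to
    zero, with some heads fixed (Dirichlet). connections is a list of
    (cell_a, cell_b, conductance); a cell may have any number of neighbors,
    as on an unstructured grid."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, cond in connections:
        A[i, i] += cond
        A[j, j] += cond
        A[i, j] -= cond
        A[j, i] -= cond
    for cell, head in fixed.items():     # overwrite rows for fixed-head cells
        A[cell, :] = 0.0
        A[cell, cell] = 1.0
        b[cell] = head
    return np.linalg.solve(A, b)

# four cells in a line with unit conductances; heads fixed at the two ends
conns = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
heads = solve_heads(4, conns, fixed={0: 10.0, 3: 7.0})
print(heads)
```

Because the assembly loop only ever sees (cell, cell, conductance) triples, nothing in it assumes a rectangular grid: nested grids, triangles, or hexagons just change the connection list, which is the flexibility the CVFD formulation provides.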
The key role of supply chain actors in groundwater irrigation development in North Africa
NASA Astrophysics Data System (ADS)
Lejars, Caroline; Daoudi, Ali; Amichi, Hichem
2017-09-01
The role played by supply chain actors in the rapid development of groundwater-based irrigated agriculture is analyzed. Agricultural groundwater use has increased tremendously in the past 50 years, leading to the decline of water tables. Groundwater use has enabled intensification of existing farming systems and ensured economic growth. This "groundwater economy" has been growing rapidly due to the initiative of farmers and the involvement of a wide range of supply chain actors, including equipment suppliers, input retailers, and distributors of irrigated agricultural products. In North Africa, the actors in irrigated production chains often operate at the margin of public policies and are usually described as "informal", "unstructured", and as participating in "groundwater anarchy". This paper underlines the crucial role of supply chain actors in the development of groundwater irrigation, a role largely ignored by public policies and rarely studied. The analysis is based on three case studies in Morocco, Tunisia and Algeria, and focuses on the horticultural sub-sector, in particular on onions and tomatoes, which are irrigated high value crops. The study demonstrates that although supply chain actors are catalyzers of the expansion of groundwater irrigation, they could also become actors in adaptation to the declining water tables. Through their informal activities, they help reduce market risks, facilitate credit and access to subsidies, and disseminate innovation. The interest associated with making these actors visible to agricultural institutions is discussed, along with methods of getting them involved in the management of the resource on which they depend.
Computation of UH-60A Airloads Using CFD/CSD Coupling on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Lee-Rausch, Elizabeth M.
2011-01-01
An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to compute the rotor airloads on the UH-60A helicopter at high-speed and high thrust conditions. The flow solver is coupled to a rotorcraft comprehensive code in order to account for trim and aeroelastic deflections. Simulations are performed both with and without the fuselage, and the effects of grid resolution, temporal resolution and turbulence model are examined. Computed airloads are compared to flight data.
Multiple chiral topological states in liquid crystals from unstructured light beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loussert, Charles; Brasselet, Etienne, E-mail: e.brasselet@loma.u-bordeaux1.fr
2014-02-03
It is shown experimentally that unstructured light beams can generate a wealth of distinct metastable defect structures in thin films of chiral liquid crystals. Various kinds of individual chiral topological states are obtained as well as dimers and trimers, which correspond to the entanglement of several topological unit cells. Self-assembled nested assemblies of several metastable particle-like topological states can also be formed. Finally, we propose and experimentally demonstrate an opto-electrical approach to generate tailor-made architectures.
Exchanging Peers to Establish P2P Networks
NASA Astrophysics Data System (ADS)
Akon, Mursalin; Islam, Mohammad Towhidul; Shen, Xuemin(Sherman); Singh, Ajit
Structure-wise, P2P networks can be divided into two major categories: (1) structured and (2) unstructured. In this chapter, we survey a group of unstructured P2P networks. This group of networks employs a gossip or epidemic protocol to maintain the members of the network, and during a gossip exchange, peers swap a subset of their neighbors with each other. Networks of this kind are reported to be scalable, robust, and resilient to severe network failures, while remaining very inexpensive to operate.
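A gossip-based peer exchange of the kind surveyed can be sketched as follows; the view size, fanout, and initial ring topology below are arbitrary illustrative choices, not a specific protocol from the chapter.

```python
import random

def gossip_round(views, fanout=2, max_view=4):
    """One gossip round over all peers: each peer picks a random neighbor
    from its view, the pair exchange up to `fanout` entries of their
    neighbor lists, and each view is trimmed to `max_view` entries.
    Membership information spreads epidemically, with no central index."""
    for peer in list(views):
        view = views[peer]
        if not view:
            continue
        partner = random.choice(sorted(view))
        shared = random.sample(sorted(view), min(fanout, len(view)))
        pview = views[partner]
        back = random.sample(sorted(pview), min(fanout, len(pview))) if pview else []
        views[partner].update(p for p in shared if p != partner)
        views[peer].update(p for p in back if p != peer)
        for node in (peer, partner):          # trim both views to max_view
            v = views[node]
            views[node] = set(random.sample(sorted(v), min(max_view, len(v))))
    return views

random.seed(1)
views = {i: {(i + 1) % 6} for i in range(6)}  # sparse ring of 6 peers
for _ in range(10):
    views = gossip_round(views)
print(views)
```

After a few rounds every peer knows a random-looking sample of the network, which is what makes gossip-maintained unstructured overlays robust to node churn and severe failures.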
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
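A minimal sketch of the region-extraction idea: flood-fill the connected components of cells whose scalar value exceeds a threshold. For clarity it is shown on a small 2D structured grid; the thesis applies analogous connectivity-based extraction to unstructured CFD and FEA grids, where neighbor relations come from the mesh itself.

```python
from collections import deque

def segment_regions(grid, threshold):
    """Extract connected regions where the scalar field exceeds `threshold`,
    via breadth-first flood fill with 4-connectivity."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or grid[r][c] <= threshold:
                continue
            region, queue = [], deque([(r, c)])   # start a new region
            seen[r][c] = True
            while queue:
                i, j = queue.popleft()
                region.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni][nj] and grid[ni][nj] > threshold):
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            regions.append(region)
    return regions

field = [[0.1, 0.9, 0.8, 0.1],
         [0.1, 0.7, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.6]]
regions = segment_regions(field, threshold=0.5)
print(len(regions), [len(reg) for reg in regions])
```

Each returned region can then be quantified (size, extent, mean value), which is the step that turns a massive dataset into a handful of regions of interest.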
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Eric M.
2004-05-20
The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.
An unstructured shock-fitting solver for hypersonic plasma flows in chemical non-equilibrium
NASA Astrophysics Data System (ADS)
Pepe, R.; Bonfiglioli, A.; D'Angola, A.; Colonna, G.; Paciorri, R.
2015-11-01
A CFD solver, using Residual Distribution Schemes on unstructured grids, has been extended to deal with inviscid chemical non-equilibrium flows. The conservative equations have been coupled with a kinetic model for argon plasma which includes the argon metastable state as an independent species, taking into account electron-atom and atom-atom processes. Results for a hypersonic flow around an infinite cylinder, obtained using both shock-capturing and shock-fitting approaches, show the higher accuracy of the shock-fitting approach.
CFD in the 1980's from one point of view
NASA Technical Reports Server (NTRS)
Lomax, Harvard
1991-01-01
The present interpretive treatment of the development history of CFD in the 1980s gives attention to advancements in such algorithmic techniques as flux Jacobian-based upwind differencing, total variation-diminishing and essentially nonoscillatory schemes, multigrid methods, unstructured grids, and nonrectangular structured grids. At the same time, computational turbulence research gave attention to turbulence modeling on the basis of increasingly powerful supercomputers and meticulously constructed databases. The major future developments in CFD will encompass such capabilities as structured and unstructured three-dimensional grids.
Work-Facilitating Information Visualization Techniques for Complex Wastewater Systems
NASA Astrophysics Data System (ADS)
Ebert, Achim; Einsfeld, Katja
The design and the operation of urban drainage systems and wastewater treatment plants (WWTP) have become increasingly complex. This complexity is due to increased requirements concerning process technology as well as technical, environmental, economical, and occupational safety aspects. The plant operator has access not only to some timeworn files and measured parameters but also to numerous on-line and off-line parameters that characterize the current state of the plant in detail. Moreover, expert databases and specific support pages of plant manufacturers are accessible through the World Wide Web. Thus, the operator is overwhelmed with predominantly unstructured data.
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to transform their sensory information into appropriate action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
Will the future of knowledge work automation transform personalized medicine?
Naik, Gauri; Bhide, Sanika S
2014-09-01
Today, we live in a world of 'information overload' which demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that can learn 'on the fly', is a significant step towards automation of knowledge work. The applications of this technology for high-throughput genomic analysis, database updating, reporting clinically significant variants, and diagnostic imaging purposes are explored using case studies.
Hively, Lee M.
2014-09-16
Data collected from devices and from the human condition may be used to forewarn of critical events, such as machine or structural failure, or, from brain/heart wave data, stroke. By monitoring the data and determining which values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data) for extraction of condition change.
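The pipeline the disclosure describes (numeric series → discrete phase-space states → graph) can be sketched in a few lines. The embedding dimension, delay, and equiprobable binning below are illustrative choices, not the patented method's parameters:

```python
import numpy as np

def phase_space_graph(x, dim=2, delay=1, bins=4):
    """Turn a numeric time series into discrete phase-space states and a
    directed transition graph (adjacency dict with transition counts)."""
    x = np.asarray(x, dtype=float)
    # Discretize amplitudes into `bins` roughly equiprobable symbols
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    symbols = np.digitize(x, edges)
    # Time-delay embedding: each state is a tuple of `dim` symbols
    n = len(symbols) - (dim - 1) * delay
    states = [tuple(symbols[i + j * delay] for j in range(dim)) for i in range(n)]
    # Count transitions between successive states to form the graph
    graph = {}
    for a, b in zip(states[:-1], states[1:]):
        d = graph.setdefault(a, {})
        d[b] = d.get(b, 0) + 1
    return states, graph
```

Condition change could then be quantified as a dissimilarity between graphs built from successive data windows, e.g. by comparing node or edge counts.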
Preserved figure-ground segregation and symmetry perception in visual neglect.
Driver, J; Baylis, G C; Rafal, R D
1992-11-05
A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.
Canary: An NLP Platform for Clinicians and Researchers.
Malmasi, Shervin; Sandor, Nicolae L; Hosomura, Naoshi; Goldberg, Matt; Skentzos, Stephen; Turchin, Alexander
2017-05-03
Information Extraction methods can help discover critical knowledge buried in the vast repositories of unstructured clinical data. However, these methods are underutilized in clinical research, potentially due to the absence of free software geared towards clinicians with little technical expertise. The skills required for developing/using such software constitute a major barrier for medical researchers wishing to employ these methods. To address this, we have developed Canary, a free and open-source solution designed for users without natural language processing (NLP) or software engineering experience. It was designed to be fast and work out of the box via a user-friendly graphical interface.
2017-01-01
This paper presents a method for formation flight and collision avoidance of multiple UAVs. Because collision avoidance is difficult at the UAVs' high speeds and in unstructured environments, this paper proposes a modified tentacle algorithm to ensure high collision-avoidance performance. Different from the conventional tentacle algorithm, which uses inverse derivation, the modified tentacle algorithm rapidly matches the radius of each tentacle to the steering command, solving the heavy-computation problem of the conventional tentacle algorithm. Meanwhile, both the speed sets and the tentacles in each speed set are reduced and reconstructed so as to be applicable to multiple UAVs. Instead of iterative path optimization, the method selects the best tentacle to obtain the UAV collision avoidance path quickly. The simulation results show that the method presented in the paper effectively enhances the performance of formation flight and collision avoidance for multiple high-speed UAVs in unstructured environments. PMID:28763498
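The core selection step of a tentacle algorithm — integrate each candidate constant-curvature arc ("tentacle"), discard those that pass too close to obstacles, and keep the one best aligned with the goal — can be sketched as below. This is a generic illustration, not the paper's modified matching scheme; all names and parameters are invented, and `steer_cmds` here are yaw rates:

```python
import math

def select_tentacle(speed, steer_cmds, obstacles, goal_heading,
                    horizon=2.0, safety=0.5, steps=20):
    """Pick the steering command whose arc ('tentacle') stays clear of
    obstacles and ends closest to the goal heading. The vehicle starts
    at the origin heading along +x; obstacles are (x, y) points."""
    best, best_cost = None, float("inf")
    dt = horizon / steps
    for omega in steer_cmds:
        ok, heading = True, 0.0
        x = y = 0.0
        for _ in range(steps):  # Euler-integrate the constant-curvature arc
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            heading += omega * dt
            if any(math.hypot(x - ox, y - oy) < safety for ox, oy in obstacles):
                ok = False
                break
        if ok:
            cost = abs(heading - goal_heading)  # prefer tentacles toward the goal
            if cost < best_cost:
                best, best_cost = omega, cost
    return best  # None if every tentacle collides
```

With an obstacle straight ahead, the straight tentacle is rejected and the curve toward the goal heading wins.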
Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1999-01-01
The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao V; Berndt, Markus; Coon, Ethan
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is based on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support of on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
NASA Technical Reports Server (NTRS)
Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald
1990-01-01
A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method. In this method, the point and element generation is accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step, Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprised of the space-shuttle orbiter, the solid-rocket boosters, and the external tank. Flow solutions were obtained on various configurations in subsonic, transonic, and supersonic flow regimes.
On the application of Chimera/unstructured hybrid grids for conjugate heat transfer
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing
1995-01-01
A hybrid grid system that combines the Chimera overset grid scheme and an unstructured grid method is developed to study fluid flow and heat transfer problems. With the proposed method, the solid structural region, in which only the heat conduction is considered, can be easily represented using an unstructured grid method. As for the fluid flow region external to the solid material, the Chimera overset grid scheme has been shown to be very flexible and efficient in resolving complex configurations. The numerical analyses require the flow field solution and material thermal response to be obtained simultaneously. A continuous transfer of temperature and heat flux is specified at the interface, which connects the solid structure and the fluid flow as an integral system. Numerical results are compared with analytical and experimental data for a flat plate and a C3X cooled turbine cascade. A simplified drum-disk system is also simulated to show the effectiveness of this hybrid grid system.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
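The two constructions the paper compares can be condensed into one routine: an unweighted least-squares fit of the gradient to neighbor differences, and the inverse-distance-weighted variant that recovers accuracy on stretched, curved meshes. The sketch below (using NumPy; names and interface are illustrative, not the paper's code) fits a gradient at a point from its edge neighbors:

```python
import numpy as np

def lsq_gradient(xc, uc, xnbrs, unbrs, power=0.0):
    """Least-squares gradient of u at point xc from neighbor values.
    power=0 gives the unweighted fit; power=1 applies inverse-distance
    weighting w_i = 1/|dx_i|."""
    dx = np.asarray(xnbrs, dtype=float) - np.asarray(xc, dtype=float)
    du = np.asarray(unbrs, dtype=float) - float(uc)
    w = 1.0 / np.linalg.norm(dx, axis=1) ** power
    # Solve min_g sum_i w_i^2 (g . dx_i - du_i)^2 as a weighted lstsq system
    A = w[:, None] * dx
    b = w * du
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad
```

For an exactly linear field both weightings reproduce the gradient; the differences the paper reports appear only on highly stretched, curved stencils.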
Unstructured Grids for Sonic Boom Analysis and Design
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Nayani, Sudheer N.
2015-01-01
An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.
Lewinski, Allison A; Anderson, Ruth A; Vorderstrasse, Allison A; Fisher, Edwin B; Pan, Wei; Johnson, Constance M
2017-04-24
Individuals with type 2 diabetes have an increased risk for comorbidities such as heart disease, lower limb amputations, stroke, and renal failure. Multiple factors influence development of complications in a person living with type 2 diabetes; however, an individual's self-management behaviors may delay the onset of, or lessen the severity of, these complications. Social support provides personal, informal advice and knowledge that helps individuals initiate and sustain self-management and adherence. Our aim was to gain an understanding of type 2 diabetes social interaction in a virtual environment, one type of computer-mediated environment (CME), and the social support characteristics that increase and sustain self-management in adults living with chronic illness. This study is a secondary analysis of longitudinal data collected in a CME study, Second Life Impacts Diabetes Education & Self-Management (1R21-LM010727-01). This virtual environment replicated a real-life community where 6 months of naturalistic synchronous voice conversations, emails, and text chats were recorded among participants and providers. This analysis uses a mixed-methods approach to explore and compare qualitative and quantitative findings. This analysis is guided by two theories: Strong/Weak Ties Theory and Social Penetration Theory. Qualitative data will be analyzed using content analysis, and we will complete descriptive statistics on the quantified variables (eg, average number of ties). Institutional review board approval was obtained in June 2016. This study is in progress. Interventions provided through virtual environments are a promising solution to increasing self-management practices. However, little is known of the depth, breadth, and quality of social support that is exchanged and how interaction supports self-management and relates to health outcomes. 
This study will provide knowledge that will help guide clinical practice and policy to enhance social support for chronic illness via the Internet. ©Allison A Lewinski, Ruth A Anderson, Allison A Vorderstrasse, Edwin B Fisher, Wei Pan, Constance M Johnson. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 24.04.2017.
Sentiment Analysis Using Common-Sense and Context Information
Agarwal, Basant; Mittal, Namita; Bansal, Pooja; Garg, Sonal
2015-01-01
Sentiment analysis research has been increasing tremendously in recent times due to the wide range of business and social applications. Sentiment analysis from unstructured natural language text has recently received considerable attention from the research community. In this paper, we propose a novel sentiment analysis model based on common-sense knowledge extracted from ConceptNet based ontology and context information. ConceptNet based ontology is used to determine the domain specific concepts which in turn produced the domain specific important features. Further, the polarities of the extracted concepts are determined using the contextual polarity lexicon which we developed by considering the context information of a word. Finally, semantic orientations of domain specific features of the review document are aggregated based on the importance of a feature with respect to the domain. The importance of the feature is determined by the depth of the feature in the ontology. Experimental results show the effectiveness of the proposed methods. PMID:25866505
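The final aggregation step — weighting each extracted concept's contextual polarity by an importance score derived from its depth in the ontology — reduces to a weighted average. The sketch below is an illustrative reading of that step, not the authors' exact formula; all names are invented:

```python
def review_orientation(features, polarity, depth, max_depth=None):
    """Aggregate domain-feature polarities into one review score,
    weighting each feature by its depth in the domain ontology
    (deeper = more specific = more important)."""
    if max_depth is None:
        max_depth = max(depth.values())
    score = 0.0
    total = 0.0
    for f in features:
        w = depth.get(f, 0) / max_depth       # importance from ontology depth
        score += w * polarity.get(f, 0.0)     # contextual polarity in [-1, 1]
        total += w
    return score / total if total else 0.0
```

A deep, specific feature thus pulls the review's semantic orientation more strongly than a shallow, generic one.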
Leveraging Collaborative Filtering to Accelerate Rare Disease Diagnosis
Shen, Feichen; Liu, Sijia; Wang, Yanshan; Wang, Liwei; Afzal, Naveed; Liu, Hongfang
2017-01-01
In the USA, rare diseases are defined as those affecting fewer than 200,000 patients at any given time. Patients with rare diseases are frequently misdiagnosed or undiagnosed, which may be due to the lack of knowledge and experience of care providers. We hypothesize that patients’ phenotypic information available in electronic medical records (EMR) can be leveraged to accelerate disease diagnosis based on the intuition that providers need to document associated phenotypic information to support the diagnosis decision, especially for rare diseases. In this study, we proposed a collaborative filtering system enriched with natural language processing and semantic techniques to assist rare disease diagnosis based on phenotypic characterization. Specifically, we leveraged four similarity measurements with two neighborhood algorithms on a large 2010-2015 Mayo Clinic unstructured patient cohort and evaluated different approaches. Preliminary results demonstrated that the use of collaborative filtering with phenotypic information is able to stratify patients with relatively similar rare diseases. PMID:29854225
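The approach — represent each patient as a phenotype vector, find the k most similar documented patients, and let their diagnoses vote with similarity weights — is a standard user-based collaborative filter. A minimal sketch with cosine similarity (one of several possible similarity measurements; the data layout and names here are invented, not the authors' system):

```python
import math

def cosine(a, b):
    """Cosine similarity between two numeric vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def suggest_diseases(query, patients, k=2):
    """Rank candidate diagnoses for `query` (a binary phenotype vector)
    by summing similarity-weighted votes from the k most similar
    documented patients -- a user-based collaborative filter."""
    nbrs = sorted(patients, key=lambda p: cosine(query, p["phenotypes"]),
                  reverse=True)[:k]
    scores = {}
    for p in nbrs:
        s = cosine(query, p["phenotypes"])
        scores[p["diagnosis"]] = scores.get(p["diagnosis"], 0.0) + s
    return sorted(scores, key=scores.get, reverse=True)
```

In the real system the phenotype vectors would come from NLP over the unstructured EMR notes rather than being hand-coded.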
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus in this work is to develop object models and estimation techniques which are specific to requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision - in essence, asking not only how perception is in general modeled, but also what is the functional purpose of its underlying representations. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer range goal of developing a fused sensing strategy based on these sources and others.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
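The MLMC estimator the sampler plugs into is the telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], estimated level by level from correlated fine/coarse pairs so that most samples fall on the cheap coarse levels. A generic sketch of that estimator (not the paper's PDE sampler; the `sampler` interface is invented):

```python
import random

def mlmc_estimate(sampler, levels, n_samples):
    """Generic multilevel Monte Carlo estimator. E[P_L] is written as the
    telescoping sum E[P_0] + sum_l E[P_l - P_{l-1}]; `sampler(level, rng)`
    must return the pair (P_l, P_{l-1}) computed from the SAME random
    input, so that the level differences have small variance."""
    rng = random.Random(0)  # fixed seed for reproducibility of the sketch
    total = 0.0
    for level, n in zip(levels, n_samples):
        acc = 0.0
        for _ in range(n):
            fine, coarse = sampler(level, rng)
            acc += fine - (coarse if level > 0 else 0.0)
        total += acc / n   # Monte Carlo mean of the level-l difference
    return total
```

In the paper's setting, `sampler` would solve the forward PDE with the same Gaussian random field realization on the fine and coarse meshes of a level pair.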
Cellular abundance of Mps1 and the role of its carboxyl terminal tail in substrate recruitment.
Sun, Tingting; Yang, Xiaomei; Wang, Wei; Zhang, Xiaojuan; Xu, Quanbin; Zhu, Songcheng; Kuchta, Robert; Chen, Guanjun; Liu, Xuedong
2010-12-03
Mps1 is a protein kinase that regulates normal mitotic progression and the spindle checkpoint in response to spindle damage. The levels of Mps1 are relatively low in cells during interphase but elevated in mitosis or upon activation of the spindle checkpoint, although the dynamic range of Mps1 expression and the Mps1 catalytic mechanism have not been carefully characterized. Our recent structural studies of the Mps1 kinase domain revealed that the carboxyl-terminal tail region of Mps1 is unstructured, raising the question of whether this region has any functional role in Mps1 catalysis. Here we first determined the cellular abundance of Mps1 during cell cycle progression and found that Mps1 levels vary between 60,000 copies per cell in early G1 and 110,000 copies per cell during mitosis. We studied phosphorylation of a number of Mps1 substrates in vitro and in cultured cells. Unexpectedly, we found that the unstructured carboxyl-terminal region of Mps1 plays an essential role in substrate recruitment. Kinetic studies using the purified recombinant wild-type and mutant kinases indicate that the carboxyl-terminal tail is largely dispensable for autophosphorylation of Mps1 but critical for trans-phosphorylation of substrates in vitro and in cultured cells. An Mps1 mutant lacking the unstructured tail region is defective in mediating spindle assembly checkpoint activation. Our results underscore the importance of the unstructured tail region of Mps1 in kinase activation.
Sznitman, Sharon; Engel-Yeger, Batya
2017-05-01
Researchers have theorized that adolescents high in sensation seeking are particularly sensitive to positive reinforcement and the rewarding outcomes of alcohol use, and thus that the personality vulnerability is a direct causal risk factor for alcohol use. In contrast, the routine activity perspective theorizes that part of the effect of sensation seeking on alcohol use goes through the propensity that sensation seekers have towards unstructured socializing with peers. The study tests a model with indirect and direct paths from sensation seeking and participation in unstructured peer socialization to adolescent alcohol use. Cross-sectional data were collected from 360 students in a state-secular Jewish high school (10th to 12th grade) in the center region of Israel. The sample was equally divided between boys (51.9%) and girls (48.1%), respondents' age ranged from 15 to 17 years (mean = 16.02 ± 0.85). Structural equation modeling was used to test the direct and indirect paths. While sensation seeking had a significant direct path to adolescent alcohol use, part of the association was mediated by unstructured socializing with peers. The mediated paths were similar for boys and girls alike. Sensation seeking is primarily biologically determined and prevention efforts are unlikely to modify this personality vulnerability. The results of this study suggest that a promising prevention avenue is to modify extracurricular participation patterns of vulnerable adolescents. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.
NASA Astrophysics Data System (ADS)
KIM, Jong Woon; LEE, Young-Ouk
2017-09-01
As computing power gets better and better, computer codes that use a deterministic method seem to be less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capabilities to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we describe a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, in a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
Metadata management for high content screening in OMERO
Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.
2016-01-01
High content screening (HCS) experiments create a classic data management challenge—multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Wang, Taiping; Copping, Andrea E.
Understanding and providing proactive information on the potential for tidal energy projects to cause changes to the physical system and to key water quality constituents in tidal waters is a necessary and cost-effective means to avoid costly regulatory involvement and late-stage surprises in the permitting process. This paper presents a modeling study for evaluating tidal energy extraction and its potential impacts on the marine environment at a real-world site: Tacoma Narrows of Puget Sound, Washington State, USA. An unstructured-grid coastal ocean model, fitted with a module that simulates tidal energy devices, was applied to simulate the tidal energy extracted by different turbine array configurations and the potential effects of the extraction at local and system-wide scales in Tacoma Narrows and South Puget Sound. Model results demonstrated the advantage of an unstructured-grid model for simulating the far-field effects of tidal energy extraction in a large model domain, as well as assessing the near-field effect using a fine grid resolution near the tidal turbines. The outcome shows that a realistic near-term deployment scenario extracts a very small fraction of the total tidal energy in the system and that system-wide environmental effects are not likely; however, near-field effects on the flow field and bed shear stress in the area of the tidal turbine farm are more likely. Model results also indicate that from a practical standpoint, hydrodynamic or water quality effects are not likely to be the limiting factor for development of large commercial-scale tidal farms. Results indicate that very high numbers of turbines are required to significantly alter the tidal system; limitations on marine space or other environmental concerns are likely to be reached before reaching these deployment levels. These findings show that important information obtained from numerical modeling can be used to inform regulatory and policy processes for tidal energy development.
Building Community: A 2005 Conference for Education and Public Outreach Professionals
NASA Astrophysics Data System (ADS)
Slater, T. F.; Bennett, M.; Garmany, K.
2004-12-01
In support of the Astronomical Society of the Pacific's (ASP) mission to increase the understanding and appreciation of astronomy, the ASP will host an international meeting in September 14-16, 2005 in Tucson focused on building and supporting a vibrant and connected community of individuals and groups engaged in educational and public outreach (EPO) in the disciplines of astronomy, astrobiology, space, and earth science. This conference is specially designed for individuals who are bringing the excitement of astronomy to non-astronomers. This community of science communicators includes: NASA and NSF-funded EPO program managers, developers, evaluators, PIOs, and others who support outreach efforts by government agencies and commercial industries; Scientists working with or assigned to EPO programs or efforts; Individuals working in formal science education: K-14 schools/colleges and minority-serving institutions as faculty or curriculum developers; Informal educators working in widely diverse settings including science centers, planetariums, museums, parks, and youth programs; Amateur astronomers involved in or interested in engaging children and adults in the excitement of astronomy; Public outreach specialists working in observatories, visitor centers, public information offices, and in multimedia broadcasting and journalism. The conference goals are to improve the quality and increase the effective dissemination of EPO materials, products, and programs through a multi-tiered professional development conference utilizing: Visionary plenary talks; Highly interactive panel discussions; Small group workshops and clinics focused on a wide range of EPO topics including evaluation and dissemination, with separate sessions for varying experience levels; Poster and project exhibition segments; Opportunities to increase program leveraging through structured and unstructured networking sessions; and Individual program action planning sessions. 
There will be both separate and combined sessions for individuals working in formal, informal, public outreach, and scientific communication settings, as well as specific professional development sessions.
Osborne, John D; Wyatt, Matthew; Westfall, Andrew O; Willig, James; Bethard, Steven; Gordon, Geoff
2016-11-01
To help cancer registrars efficiently and accurately identify reportable cancer cases. The Cancer Registry Control Panel (CRCP) was developed to detect mentions of reportable cancer cases using a pipeline built on the Unstructured Information Management Architecture - Asynchronous Scaleout (UIMA-AS) architecture containing the National Library of Medicine's UIMA MetaMap annotator as well as a variety of rule-based UIMA annotators that primarily act to filter out concepts referring to nonreportable cancers. CRCP inspects pathology reports nightly to identify pathology records containing relevant cancer concepts and combines this with diagnosis codes from the Clinical Electronic Data Warehouse to identify candidate cancer patients using supervised machine learning. Cancer mentions are highlighted in all candidate clinical notes and then sorted in CRCP's web interface for faster validation by cancer registrars. CRCP achieved an accuracy of 0.872 and detected reportable cancer cases with a precision of 0.843 and a recall of 0.848. CRCP increases throughput by 22.6% over a baseline (manual review) pathology report inspection system while achieving a higher precision and recall. Depending on registrar time constraints, CRCP can increase recall to 0.939 at the expense of precision by incorporating a data source information feature. CRCP demonstrates accurate results when applying natural language processing features to the problem of detecting patients with cases of reportable cancer from clinical notes. We show that implementing only a portion of cancer reporting rules in the form of regular expressions is sufficient to increase the precision, recall, and speed of the detection of reportable cancer cases when combined with off-the-shelf information extraction software and machine learning. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 
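The abstract's point that a small set of rule-based filters can lift precision is easy to illustrate. The sketch below uses made-up regular expressions and notes; the real CRCP rule set, MetaMap concept annotations, and UIMA-AS pipeline are far richer.

```python
import re

# Hypothetical reportability rules: a few regular expressions that flag
# cancer mentions and filter out concepts that are not reportable.
CANCER_RX = re.compile(r"\b(carcinoma|melanoma|lymphoma|sarcoma)\b", re.I)
NONREPORTABLE_RX = re.compile(r"\b(basal cell carcinoma|history of|rule out)\b", re.I)

def is_reportable(note: str) -> bool:
    return bool(CANCER_RX.search(note)) and not NONREPORTABLE_RX.search(note)

notes = [
    ("Invasive ductal carcinoma of the left breast.", True),
    ("Basal cell carcinoma, shave biopsy.", False),     # often nonreportable
    ("Rule out lymphoma; no malignancy identified.", False),
    ("No significant findings.", False),
]
preds = [is_reportable(text) for text, _ in notes]
tp = sum(p and g for p, (_, g) in zip(preds, notes))
fp = sum(p and not g for p, (_, g) in zip(preds, notes))
fn = sum((not p) and g for p, (_, g) in zip(preds, notes))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(precision, recall)  # 1.0 1.0 on this toy sample
```

In the study this filtering is combined with diagnosis codes and supervised machine learning; the regexes only remove obvious nonreportable mentions before ranking candidates for registrar review.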
Utilization and management of maternal and child health funds in rural Nepal.
Morrison, Joanna; Thapa, Rita; Sen, Aman; Neupane, Rishi; Borghi, Jo; Tumbahangphe, Kirti Man; Osrin, David; Manandhar, Dharma; Costello, Anthony
2010-01-01
Maternal and neonatal mortality rates are highest in the poorest countries, and financial barriers impede access to health care. Community loan funds can increase access to cash in rural areas, thereby reducing delays in care seeking. As part of a participatory intervention in rural Nepal, community women's groups initiated and managed local funds. We explore the factors affecting utilization and management of these funds and the role of the funds in the success of the women's group intervention. We conducted a qualitative study using focus group discussions, group interviews and unstructured observations. Funds may increase access to care for members of trusted 'insider' families adjudged as able to repay loans. Sustainability and sufficiency of funds was a concern but funds increased women's independence and enabled timely care seeking. Conversely, the perceived necessity to contribute may have deterred poorer women. While funds were integral to group success and increased women's autonomy, they may not be the most effective way of supporting the poorest, as the risk pool is too small to allow for repayment default.
Mining free-text medical records for companion animal enteric syndrome surveillance.
Anholt, R M; Berezowski, J; Jamal, I; Ribble, C; Stephen, C
2014-03-01
Large amounts of animal health care data are present in veterinary electronic medical records (EMR) and they present an opportunity for companion animal disease surveillance. Veterinary patient records are largely in free-text without clinical coding or fixed vocabulary. Text-mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study EMR's were extracted from veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text-miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text-miner retrieved cases with enteric signs with a sensitivity of 87.6% (95%CI, 80.4-92.9%) and a specificity of 99.3% (95%CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals. Copyright © 2014 Elsevier B.V. All rights reserved.
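A WordStat-style categorization dictionary reduces, at its core, to matching a term list against free text and scoring the result against reviewer labels. A minimal sketch with invented terms and records (not the study's actual dictionary):

```python
# Hypothetical categorization dictionary for enteric signs; these terms are
# illustrative only.
ENTERIC_TERMS = {"diarrhea", "diarrhoea", "vomiting", "loose stool", "hematochezia"}

def has_enteric_signs(record: str) -> bool:
    text = record.lower()
    return any(term in text for term in ENTERIC_TERMS)

# toy records with reviewer gold labels (enteric syndrome yes/no)
records = [
    ("3-day history of vomiting and loose stool", True),
    ("annual wellness exam, vaccines updated", False),
    ("acute diarrhoea, suspect dietary indiscretion", True),
    ("left forelimb lameness after a fall", False),
]
preds = [has_enteric_signs(t) for t, _ in records]
tp = sum(p and g for p, (_, g) in zip(preds, records))
tn = sum((not p) and (not g) for p, (_, g) in zip(preds, records))
sensitivity = tp / sum(g for _, g in records)
specificity = tn / sum(not g for _, g in records)
print(sensitivity, specificity)  # 1.0 1.0 on this toy sample
```

The study's reported 87.6% sensitivity and 99.3% specificity come from exactly this kind of comparison, scaled up to 2500 independently reviewed records and a much larger dictionary with exclusion rules.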
Women and health consequences of natural disasters: Challenge or opportunity?
Sohrabizadeh, Sanaz; Tourani PhD, Sogand; Khankeh, Hamid Reza
2016-01-01
Disasters do not affect people equally; the impact of disasters on the lives of women is different from other groups of a community. Women's fundamental rights to health and safety are violated after disasters. The authors of this study aimed to explore various factors of women's health with reference to previous natural disasters in Iran. A qualitative approach using in-depth unstructured interviews and field observations was employed to explore women's health factors in the affected regions. A total of 22 participants affected by disasters, as well as key informants, were interviewed applying the purposeful sampling method. Data were collected in 2014 in three provinces, including East Azerbaijan, Bushehr, and Mazandaran. A content analysis using the Graneheim approach was performed for analyzing the transcribed interviews. Two themes and four categories were extracted from the data. The themes that emerged included psycho-physical effects and women's health status. Physical and psycho-emotional effects and reproductive and environmental health effects were the four emergent categories. The findings implied that managing women's health challenges may result in reducing the distressing effects of disaster. These findings support identification and application of the mechanisms by which women's well-being in physical, mental, reproductive, and environmental aspects can be protected after disasters.
U-Compare: share and compare text mining tools with UIMA
Kano, Yoshinobu; Baumgartner, William A.; McCrohon, Luke; Ananiadou, Sophia; Cohen, K. Bretonnel; Hunter, Lawrence; Tsujii, Jun'ichi
2009-01-01
Summary: Due to the increasing number of text mining resources (tools and corpora) available to biologists, interoperability issues between these resources are becoming significant obstacles to using them effectively. UIMA, the Unstructured Information Management Architecture, is an open framework designed to aid in the construction of more interoperable tools. U-Compare is built on top of the UIMA framework, and provides both a concrete framework for out-of-the-box text mining and a sophisticated evaluation platform allowing users to run specific tools on any target text, generating both detailed statistics and instance-based visualizations of outputs. U-Compare is a joint project, providing the world's largest, and still growing, collection of UIMA-compatible resources. These resources, originally developed by different groups for a variety of domains, include many famous tools and corpora. U-Compare can be launched straight from the web, without needing to be manually installed. All U-Compare components are provided ready-to-use and can be combined easily via a drag-and-drop interface without any programming. External UIMA components can also simply be mixed with U-Compare components, without distinguishing between locally and remotely deployed resources. Availability: http://u-compare.org/ Contact: kano@is.s.u-tokyo.ac.jp PMID:19414535
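The interoperability model UIMA provides (and U-Compare builds on) is that every component reads and writes a shared analysis structure, so components from different groups can be chained freely. A toy Python analogue; UIMA itself is a Java framework, and the class and field names here are illustrative, not UIMA's API:

```python
# A plain dict stands in for UIMA's CAS (common analysis structure):
# each annotator reads what earlier components wrote and adds its own
# annotations, which is what makes arbitrary chaining possible.
class Tokenizer:
    def process(self, cas):
        cas["tokens"] = cas["text"].split()

class GeneTagger:
    LEXICON = {"BRCA1", "TP53"}  # hypothetical gene lexicon
    def process(self, cas):
        cas["genes"] = [t for t in cas.get("tokens", []) if t in self.LEXICON]

def run_pipeline(text, annotators):
    cas = {"text": text}
    for annotator in annotators:
        annotator.process(cas)  # every component shares the same structure
    return cas

cas = run_pipeline("TP53 regulates apoptosis", [Tokenizer(), GeneTagger()])
print(cas["genes"])  # ['TP53']
```

U-Compare adds a common type system on top of this model, which is what lets tools from different groups be swapped in and compared on the same target text.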
Sabus, Carla; Spake, Ellen
2018-01-01
The ability to innovate and adapt practice is a requirement of the progressive healthcare provider. Innovative practice by rehabilitation providers has largely been approached as personal professional development; this study extends that perspective by examining innovation uptake from the organizational level. The varied professions can be expected to have distinct qualities of innovation adoption that reflect professional norms, values, and expectations. The purpose of this qualitative study was to describe the organizational processes of innovation uptake in outpatient physical therapy practice. Through nomination, two outpatient, privately owned physical therapy clinics were identified as innovation practices. Eighteen physical therapists, three owners, and a manager participated in the study. The two clinics served as case studies within a grounded theory approach. Data were collected through observation, unstructured questioning, work flow analysis, focus group sessions, and artifact analysis. Data were analyzed and coded among the investigators. A theoretical model of the innovation adoption process in outpatient physical therapy practice was developed. Elements of the model included (1) change grounded in relationship-centered care, (2) clinic readiness to accept change, and (3) clinic adaptability and resilience. A social paradigm of innovation adoption informed through this research complements the concentration on personal professional development.
Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B
2015-12-01
In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case finding algorithm is required to process both structured and unstructured electronic medical records (EMR) to allow complementary and cost-efficient identification of CHF patients. We set to identify CHF cases from both EMR codified and natural language processing (NLP) found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE associated facilities, and blind-tested with the remaining facilities. The NLP based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). Total of 18,295 codified CHF patients were included in Maine HIE. Among the 253,803 subjects without CHF codings, our case finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) is 0.914, and 70.1% of these 2411 cases were found to be with CHF histories in the clinical notes. A CHF case finding algorithm was developed, tested and prospectively validated. The successful integration of the CHF case findings algorithm into the Maine HIE live system is expected to improve the Maine CHF care. Copyright © 2015. Published by Elsevier Ireland Ltd.
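The complementary case-finding logic reduces to a set union plus a chart-review estimate of positive predictive value (PPV) for the NLP-only cases. A sketch with hypothetical patient IDs and labels:

```python
# Hypothetical patient sets: cases found via codified EMR data vs. via NLP
# over clinical notes (IDs and review labels are made up).
codified = {"p1", "p2", "p3"}
nlp_found = {"p3", "p4", "p5", "p6"}

nlp_only = nlp_found - codified                 # uncodified cases surfaced by NLP
chart_review = {"p4": True, "p5": True, "p6": False}  # hypothetical validation
ppv = sum(chart_review[p] for p in nlp_only) / len(nlp_only)

all_cases = codified | nlp_found                # complementary identification
print(sorted(nlp_only), round(ppv, 3), len(all_cases))  # ['p4', 'p5', 'p6'] 0.667 6
```

In the study the same arithmetic is done at scale: 2411 uncodified cases surfaced by NLP on top of 18,295 codified ones, with a prospectively validated PPV of 0.914.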
Numerical comparisons of ground motion predictions with kinematic rupture modeling
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.
2017-12-01
Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
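Of the discretizations compared, the finite-difference method is the simplest to sketch. Below is a minimal second-order leapfrog scheme for the 1-D wave equation, verified against a known standing-wave solution; it is only an illustration of the method family, not the 3-D simulators used in the study.

```python
import math

# u_tt = c^2 u_xx on [0, 1] with fixed ends, exact solution
# u(x, t) = sin(pi x) cos(pi c t); second-order leapfrog in space and time.
c, nx, L = 1.0, 200, 1.0
dx = L / nx
dt = 0.5 * dx / c                      # CFL number 0.5, stable for leapfrog
x = [i * dx for i in range(nx + 1)]
u_prev = [math.sin(math.pi * xi) for xi in x]
r2 = (c * dt / dx) ** 2
# first step via Taylor expansion (initial velocity is zero)
u = [0.0] * (nx + 1)
for i in range(1, nx):
    u[i] = u_prev[i] + 0.5 * r2 * (u_prev[i+1] - 2*u_prev[i] + u_prev[i-1])
t = dt
while t < 0.5 - 1e-12:
    u_next = [0.0] * (nx + 1)          # boundaries stay fixed at zero
    for i in range(1, nx):
        u_next[i] = 2*u[i] - u_prev[i] + r2 * (u[i+1] - 2*u[i] + u[i-1])
    u_prev, u = u, u_next
    t += dt
exact = [math.sin(math.pi * xi) * math.cos(math.pi * c * t) for xi in x]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err < 1e-3)
```

Regular-grid schemes like this are cheap per node; the trade-off the study explores is against unstructured-mesh methods (discontinuous Galerkin, spectral element) that conform to topography and material interfaces at higher per-element cost.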
Antecedents and consequences of workplace violence against nurses: A qualitative study.
Najafi, Fereshteh; Fallahi-Khoshknab, Masoud; Ahmadi, Fazlollah; Dalvandi, Asghar; Rahgozar, Mehdi
2018-01-01
To explore Iranian nurses' perceptions of and experiences with the antecedents and consequences of workplace violence perpetrated by patients, patients' relatives, colleagues and superiors. Workplace violence against nurses is a common problem worldwide, including in Iran. Although many studies have reviewed the antecedents and consequences of workplace violence, limited information is available on this topic. An understanding of the predisposing factors for violence and the consequences of violence is essential to developing programs to prevent and manage workplace violence. Qualitative descriptive design. In this qualitative study, 22 unstructured, in-depth interviews were conducted with registered nurses who had experienced workplace violence and who were selected using purposive sampling in nine hospitals. Inductive content analysis was used to analyse the data. Five categories emerged as predisposing factors: unmet expectations of patients/relatives, inefficient organisational management, inappropriate professional communication, factors related to nurses and factors related to patients, patients' relatives and colleagues. Individual, familial and professional consequences were identified as outcomes of workplace violence against nurses. Workplace violence by patients/their relatives and colleagues/superiors is affected by various complicated factors at the individual and organisational levels. In addition to negatively affecting nurses' individual and family lives, workplace violence may lead to a lower quality of patient care and negative attitudes towards the nursing profession. Identifying factors that lead to workplace violence could help facilitate documenting and reporting such incidents as well as developing the necessary interventions to reduce them. Furthermore, native instruments must be developed to predict and monitor violence. © 2017 John Wiley & Sons Ltd.
An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations
NASA Technical Reports Server (NTRS)
Singh, Jatinder; Taylor, Stephen
1997-01-01
This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization of the inviscid convective term is accomplished using an upwind scheme, and a localized, second-order-accurate reconstruction is performed for the flow variables. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second-order temporal accuracy. This is adapted for concurrent execution using a proven methodology based on concurrent graph abstraction. The solver operates on heterogeneous network architectures, which may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors, and distributed-memory multicomputers. The unstructured grid is generated using commercial grid generation tools and is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques, also based on heat diffusion, both to balance load and communication requirements and to deal with differing memory constraints. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant-section wing at subsonic, transonic, and supersonic conditions. These results are compared with experimental data and numerical results of other researchers. Performance studies are under way for a variety of network topologies.
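The three-stage, second-order Runge-Kutta time integration described above is easy to demonstrate on a scalar model problem. The stage coefficients (1/3, 1/2, 1) below are a common CFD choice and are an assumption here, not necessarily the report's exact scheme; the convergence order is measured numerically.

```python
import math

# Three-stage Runge-Kutta step with second-order temporal accuracy
# (Jameson-style coefficients 1/3, 1/2, 1; assumed, for illustration).
def rk3_step(f, u, dt):
    u1 = u + (dt / 3.0) * f(u)
    u2 = u + (dt / 2.0) * f(u1)
    return u + dt * f(u2)

def integrate(f, u0, t_end, n):
    dt, u = t_end / n, u0
    for _ in range(n):
        u = rk3_step(f, u, dt)
    return u

f = lambda u: -u * u     # nonlinear model problem u' = -u^2, u(0) = 1 -> u(t) = 1/(1+t)
err = lambda n: abs(integrate(f, 1.0, 1.0, n) - 0.5)
# halving dt should reduce the error by about 4x for a second-order scheme
order = math.log(err(50) / err(100)) / math.log(2.0)
print(order)  # close to 2
```

Multistage schemes of this form are popular in explicit CFD solvers because they need little storage per stage and pair well with local time stepping.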
Learning from Demonstration for Autonomous Navigation in Complex Unstructured Terrain
2010-06-24
... unstructured terrain [Seraji and Howard, 2002; Huertas et al., 2005; Biesiadecki and Maimone, 2006]. The common thread amongst these approaches is that they ... can be gathered either offline [Seraji and Howard, 2002; Howard et al., 2007] or online [Thrun et al., 2006; Sun et al., 2007] by observing where ... Algorithm fragment: F_e = F_* = 0; foreach P_e^i do: P_*^i = planLossAugPath(start(P_e^i), goal(P_e^i), M); foreach x ∈ P_e^i do F_e = F_e + F_x; foreach x ∈ P_*^i do F_* = F_* + F_x.
NASA Technical Reports Server (NTRS)
Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.
1992-01-01
Quality assessment procedures are described for two-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study demonstrate accuracy of an implicit upwind Euler solution algorithm.
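The quality measures listed (minimum angle, aspect ratio) are cheap per-element computations. A sketch for a single 2-D triangle, using the law of cosines for the angles and an edge-length ratio for the aspect ratio; the latter is one of several common definitions, and the paper's exact formulas may differ:

```python
import math

def triangle_quality(a, b, c):
    """Minimum interior angle (degrees) and edge-ratio aspect of triangle abc."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)  # side lengths
    # law of cosines for the angle opposite side `opp`
    ang = lambda opp, s1, s2: math.acos((s1*s1 + s2*s2 - opp*opp) / (2*s1*s2))
    angles = [ang(la, lb, lc), ang(lb, la, lc), ang(lc, la, lb)]
    min_angle = math.degrees(min(angles))
    aspect = max(la, lb, lc) / min(la, lb, lc)   # simple edge-ratio measure
    return min_angle, aspect

# equilateral triangle: the ideal element for these metrics
min_angle, aspect = triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
print(round(min_angle), round(aspect, 3))  # 60 1.0
```

Sweeping such metrics over every element of an advancing-front mesh gives the kind of quality histograms the assessment procedures above are built on.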
Unstructured-grid methods development: Lessons learned
NASA Technical Reports Server (NTRS)
Batina, John T.
1991-01-01
The development of unstructured grid methods for the solution of the equations of fluid flow is summarized, and some of the lessons learned along the way are shared. The 3-D Euler equations are solved, with attention to spatial discretizations, temporal discretizations, and boundary conditions. An example calculation with an upwind implicit method using a CFL (Courant-Friedrichs-Lewy) number of infinity is presented for the Boeing 747 aircraft. The results were obtained in less than one hour of CPU time on a Cray-2 computer, demonstrating the speed and robustness of the present capability.
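The claim that an implicit method can run at a CFL number of infinity is easy to demonstrate on a model problem: for 1-D linear advection, explicit upwind diverges once CFL > 1, while backward-Euler implicit upwind remains bounded at any CFL. A periodic-domain sketch (illustrative only, not the 3-D Euler solver):

```python
def explicit_upwind(u, cfl):
    # u[i-1] with i = 0 wraps to u[-1]: periodic domain
    return [u[i] - cfl * (u[i] - u[i-1]) for i in range(len(u))]

def implicit_upwind(u, cfl, sweeps=200):
    # solve (1 + cfl) v[i] - cfl * v[i-1] = u[i] by Gauss-Seidel sweeps;
    # each update is a convex combination, so |v| never exceeds max|u|
    v = list(u)
    for _ in range(sweeps):
        for i in range(len(u)):
            v[i] = (u[i] + cfl * v[i-1]) / (1.0 + cfl)
    return v

u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]   # square pulse
ue, ui = list(u0), list(u0)
for _ in range(20):
    ue = explicit_upwind(ue, cfl=2.0)    # CFL > 1: explicit scheme diverges
    ui = implicit_upwind(ui, cfl=50.0)   # huge CFL: implicit stays bounded
print(max(abs(x) for x in ue) > 1e3, max(abs(x) for x in ui) <= 1.0)  # True True
```

The price of the implicit step is a linear solve per time step (here a toy Gauss-Seidel loop); the payoff, as in the abstract, is reaching steady state in far fewer, much larger steps.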
Parametric robust control and system identification: Unified approach
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1994-01-01
Despite significant advances in the area of robust parametric control, the synthesis of such controllers remains a wide open problem. Thus, we attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H∞ unstructured uncertainty so that H∞ synthesis techniques are applicable. Although these techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline for modeling the unstructured uncertainty that contains the parametric uncertainty. An additional loop-shaping technique is also introduced to relax its conservatism.
3D Feature Extraction for Unstructured Grids
NASA Technical Reports Server (NTRS)
Silver, Deborah
1996-01-01
Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.
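The core of the object-segmentation step is extracting connected regions above a threshold. On an unstructured grid the neighbor lookup comes from mesh connectivity; the sketch below uses a regular 2-D grid to keep the flood fill short:

```python
from collections import deque

def extract_regions(field, threshold):
    """Return connected regions of cells with field value >= threshold."""
    rows, cols = len(field), len(field[0])
    seen, regions = set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if field[r0][c0] < threshold or (r0, c0) in seen:
                continue
            region, queue = [], deque([(r0, c0)])
            seen.add((r0, c0))
            while queue:                        # BFS flood fill
                r, c = queue.popleft()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen
                            and field[nr][nc] >= threshold):
                        seen.add((nr, nc))
                        queue.append((nr, nc))
            regions.append(region)
    return regions

field = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.2, 0.6, 0.9],
    [0.0, 0.1, 0.8, 0.9],
]
regions = extract_regions(field, threshold=0.5)
print([len(r) for r in regions])  # two coherent regions: [3, 4]
```

Once extracted, each region can be quantified (volume, centroid, extrema) and matched between time steps, which is how evolving features are tracked in the article's framework.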
Sensor-based architecture for medical imaging workflow analysis.
Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis
2014-08-01
The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in the clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory, through a network of intelligent sensors. The proposed integration framework follows a SOA hybrid architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that will then be semantically indexed in the framework engine. The platform was deployed in the Cardiology department of a central hospital, allowing identification of processes' characteristics and users' behaviours that were unknown before the utilization of this solution.
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis
2016-11-01
A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the utilization of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimension Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with the matrix free Newton-Krylov method and the Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows. 
Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.
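The accuracy-verification step mentioned above (solving the Poisson equation with a known analytical solution) can be reproduced in miniature in 1-D: solve -u'' = f with a manufactured solution and confirm that the observed order of convergence is two. The tridiagonal (Thomas) solve below is standard; the paper's solver is 3-D, unstructured, and Krylov-based.

```python
import math

def solve_poisson(n):
    """-u'' = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0; exact u = sin(pi x).
    Returns the max-norm error of the central-difference solution."""
    h = 1.0 / n
    f = [math.pi**2 * math.sin(math.pi * (i * h)) for i in range(1, n)]
    a, b, c = -1.0, 2.0, -1.0          # stencil; rhs scaled by h^2 below
    d = [fi * h * h for fi in f]
    # Thomas algorithm: forward elimination, then back substitution
    cp, dp = [0.0] * (n - 1), [0.0] * (n - 1)
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n - 1):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * (n - 1)
    u[-1] = dp[-1]
    for i in range(n - 3, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(u[i - 1] - math.sin(math.pi * i * h)) for i in range(1, n))

# halving h should reduce the error by ~4x for a second-order discretization
order = math.log(solve_poisson(32) / solve_poisson(64)) / math.log(2.0)
print(round(order, 1))  # ~2.0
```

The same observed-order test generalizes directly to the paper's setting: run the unstructured solver on a sequence of refined grids and fit the error against the mesh spacing.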
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational times. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axi-symmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of modified code due to the implicit algorithm was demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. 
The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. Third- and fourth-order spatially accurate schemes have been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
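The efficiency gain the abstract attributes to the implicit algorithm comes from its unconditional stability: an implicit update tolerates time steps far beyond the explicit stability limit, at the cost of solving a linear system each step. As a hedged illustration (a model 1D diffusion problem on a uniform grid, not the solver described above), the sketch below contrasts a forward-Euler explicit update with a backward-Euler implicit update at a step size twice the explicit stability bound:

```python
import numpy as np

def explicit_step(u, r):
    """Forward-Euler update for 1D diffusion.

    r = nu*dt/dx**2; stable only for r <= 0.5.
    Boundary values are held fixed (Dirichlet).
    """
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def implicit_step(u, r):
    """Backward-Euler update: solve (I - r*L) u_new = u.

    Unconditionally stable for any r > 0, but requires a
    (here dense, for clarity) linear solve every step.
    """
    n = len(u)
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i + 1] = -r
    # Dirichlet boundaries: keep end values unchanged.
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, u)

if __name__ == "__main__":
    u0 = np.zeros(21)
    u0[10] = 1.0          # sharp initial profile excites all modes
    ue, ui = u0.copy(), u0.copy()
    for _ in range(10):
        ue = explicit_step(ue, 2.0)   # r = 2 violates explicit limit
        ui = implicit_step(ui, 2.0)   # implicit remains bounded
    print(np.max(np.abs(ue)), np.max(np.abs(ui)))
```

Running this shows the explicit solution diverging while the implicit one decays smoothly; a production unstructured solver would of course use a sparse or matrix-free linear solver rather than a dense solve.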
NASA Astrophysics Data System (ADS)
Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong
2018-02-01
We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive-order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation is greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third orders, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient-variable information for the solution-variable reconstruction yields one additional order of accuracy, i.e., second, third, and fourth orders, for the solution variable with less computational work than needed for uniform schemes. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for discontinuous diffusion coefficients, which calls for further investigation into monotonicity-preserving hyperbolic diffusion methods.
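The hyperbolic reformulation the abstract refers to can be sketched as follows (one common form of Nishikawa's first-order system in two dimensions; the relaxation length L_r below is a tunable parameter, not specified by this abstract). The steady diffusion equation \(\nabla\cdot(\nu\nabla u) + f = 0\) is augmented with gradient variables \(p \approx u_x\) and \(q \approx u_y\) and a pseudo-time \(\tau\):

\[
\frac{\partial u}{\partial \tau} = \nu\left(\frac{\partial p}{\partial x} + \frac{\partial q}{\partial y}\right) + f, \qquad
\frac{\partial p}{\partial \tau} = \frac{1}{T_r}\left(\frac{\partial u}{\partial x} - p\right), \qquad
\frac{\partial q}{\partial \tau} = \frac{1}{T_r}\left(\frac{\partial u}{\partial y} - q\right),
\]

with relaxation time \(T_r = L_r^2/\nu\). At the pseudo-steady state, \(p = u_x\) and \(q = u_y\), so the original diffusion equation is recovered, while during pseudo-time integration the system is fully hyperbolic and can be discretized with upwind finite-volume machinery, delivering the solution and its gradient to the same order of accuracy.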
Lessons Learned from Numerical Simulations of the F-16XL Aircraft at Flight Conditions
NASA Technical Reports Server (NTRS)
Rizzi, Arthur; Jirasek, Adam; Lamar, John; Crippa, Simone; Badcock, Kenneth; Boelens, Okko
2009-01-01
Nine groups participating in the Cranked Arrow Wing Aerodynamics Project International (CAWAPI) project have contributed steady and unsteady viscous simulations of a full-scale, semi-span model of the F-16XL aircraft. Three different categories of flight Reynolds/Mach number combinations were computed and compared with flight-test measurements for the purpose of code validation and improved understanding of the flight physics. Steady-state simulations were done with several turbulence models of different complexity that require no topology information and that overcome Boussinesq-assumption problems in vortical flows. Detached-eddy simulation (DES) and its successor, delayed detached-eddy simulation (DDES), have been used to compute the time-accurate flow development. Common structured and unstructured grids as well as individually adapted unstructured grids were used. Although discrepancies are observed in the comparisons, overall reasonable agreement is demonstrated for surface pressure distribution, local skin friction and boundary-layer velocity profiles at subsonic speeds. The physical modeling, steady or unsteady, and the grid resolution both contribute to the discrepancies observed in the comparisons with flight data, but at this time it cannot be determined how much each part contributes to the whole. Overall it can be said that the technology readiness of CFD-simulation technology for the study of vehicle performance has matured since 2001 such that it can be used today with a reasonable level of confidence for complex configurations.
Maheri, Aghbabak; Tol, Azar; Sadeghi, Roya
2017-01-01
INTRODUCTION: Internet addiction refers to excessive use of the internet that causes mental, social, and physical problems. Given the high prevalence of internet addiction among university students, this study aimed to determine the effect of an educational intervention on preventive behaviors against internet addiction among Tehran University of Medical Sciences students. MATERIALS AND METHODS: This was a quasi-experimental study conducted among female college students living in the dormitories of Tehran University of Medical Sciences. Two-stage cluster sampling was used to select eighty participants in each study group; data were collected using the “Young's Internet Addiction” test and an unstructured questionnaire. The validity of the unstructured questionnaire was evaluated by an expert panel, and its reliability was reported as Cronbach's alpha. Data from the study groups, collected before and 4 months after the intervention, were compared using statistical methods in SPSS 16. RESULTS: After the intervention, the mean scores of internet addiction and of the perceived-barriers construct, as well as the prevalence of internet addiction, decreased significantly in the intervention group compared with the control group, while the mean scores of knowledge and of the Health Belief Model (HBM) constructs (susceptibility, severity, benefits, self-efficacy) increased significantly. CONCLUSIONS: Education based on the HBM was effective in reducing and preventing internet addiction among female college students, and educational interventions in this field are highly recommended. PMID:28852654