Sample records for information extraction technology

  1. Translations on USSR Science and Technology, Physical Sciences and Technology, Number 45

    DTIC Science & Technology

    1978-08-14

    ...hundred pages. [Question] In the public's perception a cosmonaut seems to be, in terms of physical and mental fitness, something like a superman. How... indicate how the original information was processed. Where no processing indicator is given, the information was summarized or extracted. Unfamiliar names...

  2. Research on Crowdsourcing Emergency Information Extraction Based on Events' Frame

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither approach offers an accurate assessment of its results. This paper therefore proposes an emergency information collection technique based on an event framework, designed to solve the problem of emergency information extraction. It mainly comprises an emergency information extraction model (EIEM), a complete address recognition method (CARM), and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts emergency information in structured form and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and a shortest-path algorithm to join toponym pieces into a full address. AEMEI analyzes the results for an emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that the event-frame technique can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on past emergencies and make arrangements for defense and disaster reduction ahead of schedule. The technology reduces casualties and property damage, which is of great significance to the state and society.
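
    As an illustration of the CARM step above, here is a minimal sketch (not the authors' code; the administrative hierarchy, place names and weights are invented) of joining loose toponym pieces into a full address by chaining shortest paths through a hierarchy graph:

```python
# Illustrative only: join toponym fragments into one full address by
# walking shortest paths through an assumed administrative hierarchy.
import networkx as nx

# Hypothetical hierarchy: province -> city -> district -> street
G = nx.Graph()
for parent, child in [("Sichuan", "Chengdu"),
                      ("Chengdu", "Wuhou District"),
                      ("Wuhou District", "Renmin South Road")]:
    G.add_edge(parent, child, weight=1.0)

def join_address(fragments, graph):
    """Order loose toponym fragments by chaining shortest paths."""
    path = []
    for a, b in zip(fragments, fragments[1:]):
        hop = nx.shortest_path(graph, a, b, weight="weight")
        path.extend(hop if not path else hop[1:])
    return ", ".join(path)

print(join_address(["Sichuan", "Renmin South Road"], G))
# -> Sichuan, Chengdu, Wuhou District, Renmin South Road
```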

  3. Technology Demonstration Summary: CF Systems Organics Extraction System, New Bedford Harbor, Massachusetts

    EPA Science Inventory

    The SITE Program demonstration of CF Systems' organics extraction technology was conducted to obtain specific operating and cost information that could be used in evaluating the potential applicability of the technology to Superfund sites. The demonstration was conducted concurr...

  4. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent transportation. Compared with traditional traffic information collection methods, vehicle information extraction from high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great guiding significance for urban planning, transportation management, travel route choice and so on. Firstly, this paper preprocesses the acquired high-resolution multispectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement are applied to the preprocessing results to obtain the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) are used to suppress water and vegetation information in the preprocessing results. The two processing results are then combined, and geometric characteristics are finally used to complete road information extraction. The extracted road vector is used to limit the target vehicle area. Target vehicle extraction is divided into bright-vehicle and dark-vehicle extraction, and the results for the two kinds of vehicles are combined to obtain the final results. The experimental results demonstrate that the proposed algorithm achieves high precision in vehicle information extraction for different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual (missed detection) rate was about 13.60% and the average accuracy was approximately 91.26%.
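
    The NDVI/NDWI suppression step lends itself to a short sketch. The following is illustrative only, assuming float-valued green, red and near-infrared bands; the thresholds are placeholders, not values from the paper:

```python
# Minimal sketch of vegetation/water suppression via NDVI and NDWI.
import numpy as np

def suppress_vegetation_water(green, red, nir, ndvi_t=0.3, ndwi_t=0.0):
    eps = 1e-9                                  # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)      # vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # water index
    # keep pixels that are neither vegetation nor water
    return (ndvi < ndvi_t) & (ndwi < ndwi_t)

g = np.random.rand(64, 64); r = np.random.rand(64, 64); n = np.random.rand(64, 64)
road_candidates = suppress_vegetation_water(g, r, n)  # mask before geometry step
```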

  5. Information Extraction from Unstructured Text for the Biodefense Knowledge Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samatova, N F; Park, B; Krishnamurthy, R

    2005-04-29

    The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. We are also enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free-text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.

  6. The Agent of extracting Internet Information with Lead Order

    NASA Astrophysics Data System (ADS)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to better carry out e-commerce, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting Internet information caused by the non-standard and heterogeneous structure of Chinese websites. The agent comprises three modules, each responsible for one stage of the extraction process. An HTTP-tree method and a Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to transform the extracted information into structured form using natural language processing is also discussed.

  7. IPAT: a freely accessible software tool for analyzing multiple patent documents with inbuilt landscape visualizer.

    PubMed

    Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T

    2015-01-01

    Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool based on a text mining algorithm that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and to generate various patent landscape graphs and charts. It is coded in C# in Visual Studio 2010, extracts publicly available patent information from web pages such as Google Patents, and simultaneously studies technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization acts as an excellent technology assessment tool in competitive intelligence and due diligence for predicting future R&D.

  8. Natural language processing-based COTS software and related technologies survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.

    Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.

  9. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points at a fixed scale; however, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perception metric Just-Noticeable-Difference to measure the degradation at each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  10. Methods for Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.

    2018-04-01

    LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, implemented on several platforms. On the other hand, the exploitation of geometric information has been coupled with the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working at different wavelengths, able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multispectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has also been developed. This paper gives an overview of the state of the art of these techniques and presents modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on 'Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  11. Extraction of indirectly captured information for use in a comparison of offline pH measurement technologies.

    PubMed

    Ritchie, Elspeth K; Martin, Elaine B; Racher, Andy; Jaques, Colin

    2017-06-10

    Understanding the causes of discrepancies in the pH readings of a sample allows more robust pH control strategies to be implemented. It was found that 59.4% of the differences between two offline pH measurement technologies for a historical dataset lay outside the expected instrument error range of ±0.02 pH. A new variable, Osmo_Res, was created using multiple linear regression (MLR) to extract information indirectly captured in the recorded osmolality measurements. Principal component analysis and time series analysis were used to validate the expansion of the historical dataset with the new variable Osmo_Res. MLR was used to identify variables strongly correlated (p < 0.05) with differences in the pH readings of the two offline pH measurement technologies. These included the concentrations of specific chemicals (e.g. glucose) and Osmo_Res, indicating culture medium and bolus feed additions as possible causes of discrepancies between the offline pH measurement technologies. Temperature was also identified as statistically significant; it is suggested that this was a result of differences in the pH-temperature compensations employed by the pH measurement technologies. In summary, a method for extracting indirectly captured information has been demonstrated, and it has been shown that competing pH measurement technologies were not necessarily interchangeable at the desired level of control (±0.02 pH).
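
    A minimal sketch of how such a residual variable can be derived, assuming (hypothetically) that Osmo_Res is the part of measured osmolality not explained by the other recorded process variables; the data and coefficients below are synthetic:

```python
# Regress osmolality on other process variables; keep the residual
# as a new feature carrying the indirectly captured information.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # e.g. glucose, lactate, feed additions (synthetic)
osmolality = X @ np.array([1.5, -0.7, 0.4]) + rng.normal(scale=0.1, size=200)

A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, osmolality, rcond=None)
osmo_res = osmolality - A @ coef                 # residual = unexplained part
```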

  12. Image Analysis and Modeling

    DTIC Science & Technology

    1975-08-01

    ...image analysis and processing tasks such as information extraction, image enhancement and restoration, coding, etc. The ultimate objective of this research is to form a basis for the development of technology relevant to military applications of machine extraction of information from aircraft and satellite imagery of the earth's surface. This report discusses research activities during the three-month period February 1 - April 30...

  13. The Learning Edge: What Technology Can Do to Educate All Children. Technology, Education--Connections (TEC) Series

    ERIC Educational Resources Information Center

    Bain, Alan; Weston, Mark E.

    2011-01-01

    After billions of dollars, thousands of studies, and immeasurable effort by educators at all levels, why is the performance of students and teachers so unaffected by technology? Moreover, what should be done to extract genuine benefit from the information and communication technology (ICT) revolution? In this groundbreaking book, technology and…

  14. Reusable Software Technology

    NASA Technical Reports Server (NTRS)

    Morgan, Timothy E.

    1995-01-01

    The objective of the Reusable Software System (RSS) is to provide NASA Langley Research Center and its contractor personnel with reusable software technology through the Internet. The RSS is easily accessible, provides information that is extractable, and offers the capability to submit information or data for the purpose of scientific research at NASA Langley Research Center within the Atmospheric Science Division.

  15. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method achieves good results.
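
    The modified Otsu algorithm mentioned above builds on the standard Otsu criterion; as a reference point only, here is a plain-numpy sketch of the standard method, not the authors' modified variant:

```python
# Standard Otsu threshold: maximize between-class variance over all cuts.
import numpy as np

def otsu_threshold(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean up to t
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))          # best threshold t

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t
```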

  16. [Application of hyper-spectral remote sensing technology in environmental protection].

    PubMed

    Zhao, Shao-Hua; Zhang, Feng; Wang, Qiao; Yao, Yun-Jun; Wang, Zhong-Ting; You, Dai-An

    2013-12-01

    Hyper-spectral remote sensing (RS) technology has been widely used in environmental protection. The present work introduces its recent application to the RS monitoring of polluting gases, greenhouse gases, algal blooms, water quality of catchment water environments, the safety of drinking water sources, biodiversity, vegetation classification, soil pollution, and so on. Finally, issues such as the scarcity of hyper-spectral satellites and the limits of data processing and information extraction are discussed. Some proposals are also presented, including developing follow-on satellites to the HJ-1 satellite with differential optical absorption spectroscopy, greenhouse gas spectroscopy and hyper-spectral imagers; strengthening the study of hyper-spectral data processing and information extraction; and promoting the construction of an environmental application system.

  17. Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection

    NASA Astrophysics Data System (ADS)

    Safi’ie, M. A.; Utami, E.; Fatta, H. A.

    2018-03-01

    Universitas Sebelas Maret has a teaching staff of more than 1,500 people, one of whose tasks is to carry out research. On the other hand, funding support for research and community service is limited, so submissions must be evaluated to determine which research and community service (P2M) proposals to fund. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in the documents therefore requires text mining technology, which is applied to gain knowledge from the documents by automating information extraction. In this article we use Latent Dirichlet Allocation (LDA) as a model in the feature extraction process, to obtain terms that represent the documents. Thereafter we use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
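
    A compact sketch of the described pipeline using scikit-learn, with a toy corpus; the topic count and neighbour count are placeholder assumptions, not the authors' settings:

```python
# LDA topic proportions as document features, then kNN classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

docs = ["fuzzy control of robots", "gene expression in rice",
        "deep learning for images", "soil nutrients and rice yield"]
labels = ["engineering", "agriculture", "engineering", "agriculture"]

tf = CountVectorizer().fit_transform(docs)          # term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(tf)                       # per-document topic mixtures

knn = KNeighborsClassifier(n_neighbors=1).fit(theta, labels)
print(knn.predict(lda.transform(tf[:1])))           # classify a proposal
```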

  18. Associating Human-Centered Concepts with Social Networks Using Fuzzy Sets

    NASA Astrophysics Data System (ADS)

    Yager, Ronald R.

    The rapidly growing global interconnectivity, brought about to a large extent by the Internet, has dramatically increased the importance and diversity of social networks. Modern social networks cut across a spectrum from benign recreation-focused websites such as Facebook, to occupationally oriented websites such as LinkedIn, to criminally focused groups such as drug cartels, to devastation- and terror-focused groups such as Al-Qaeda. Many organizations are interested in analyzing and extracting information related to these social networks, among them governmental police and security agencies as well as marketing and sales organizations. To aid these organizations there is a need for technologies to model social networks and intelligently extract information from these models. While established technologies exist for the modeling of relational networks [1-7], few technologies exist to extract information from these in a manner compatible with human perception and understanding. Databases are an example of a technology in which we have tools for representing our information as well as tools for querying and extracting the information contained. Our goal is in some sense analogous: we want to use the relational network model to represent information, in this case about relationships and interconnections, and then be able to query the social network using intelligent human-centered concepts. To extend our capabilities to interact with social relational networks we need to associate human concepts and ideas with these networks. Since human beings predominantly reason and understand in linguistic terms, we need to build bridges between human conceptualization and the formal mathematical representation of the social network. Consider for example a concept such as "leader". An analyst may be able to express, in linguistic terms, using a network-relevant vocabulary, the properties of a leader. Our task is to translate this linguistic description into a mathematical formalism that allows us to determine how true it is that a particular node is a leader. In this work we look at the use of fuzzy set methodologies [8-10] to provide a bridge between the human analyst and the formal model of the network.
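
    A small sketch of the fuzzy-set bridge argued for here: the linguistic concept "leader" expressed as fuzzy memberships over standard network centralities, combined with a fuzzy AND. The membership shapes and cut-offs are invented for illustration:

```python
# "leader" as a fuzzy predicate over degree and betweenness centrality.
import networkx as nx

def high(x, lo, hi):
    """Piecewise-linear fuzzy membership for 'high'."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

G = nx.karate_club_graph()
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)

# leader(n) = min(high degree, high betweenness)  -- fuzzy AND via min
leader = {n: min(high(deg[n], 0.1, 0.5), high(btw[n], 0.05, 0.4)) for n in G}
top = max(leader, key=leader.get)
print(top, round(leader[top], 2))   # node with the highest leadership truth value
```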

  19. Apache Clinical Text and Knowledge Extraction System (cTAKES) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The tool extracts deep phenotypic information from the clinical narrative at the document, episode, and patient level. The final output is a FHIR-compliant patient-level phenotypic summary that can be consumed by research warehouses or the DeepPhe native visualization tool.

  20. A study on building data warehouse of hospital information system.

    PubMed

    Li, Ping; Wu, Tao; Chen, Mu; Zhou, Bin; Xu, Wei-guo

    2011-08-01

    Existing hospital information systems with simple statistical functions cannot meet current management needs. It is well known that hospital resources are distributed with private property rights among hospitals, as in the case of the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system; the method can also be employed for a distributed-hospital medical service system. To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: a data-center layer, a system-function layer, and a user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data theme, to the design of a data model, to the establishment of the data warehouse. Online analytical processing tools assist user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. A hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and summarizes clinical and hospital information for decision making. This paper proposes the use of a data warehouse for a hospital information system, specifically a data warehouse built around hospital information themes, dimensions, modeling and so on. The processing of patient information is given as an example that demonstrates the usefulness of this method for hospital information management. Data warehouse technology is an evolving technology, and more and more decision support information extracted by data mining and decision-making technology will be required in further research.
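
    A toy pandas sketch of the star-schema idea described above: a fact table joined to a dimension table, then an OLAP-style rollup. The table and column names are invented for the example:

```python
# Star-schema style rollup: facts joined to a dimension, then aggregated.
import pandas as pd

facts = pd.DataFrame({"patient_id": [1, 2, 1, 3],
                      "dept_id": [10, 10, 20, 20],
                      "cost": [120.0, 80.0, 200.0, 50.0]})
dim_dept = pd.DataFrame({"dept_id": [10, 20],
                         "dept": ["cardiology", "oncology"]})

cube = (facts.merge(dim_dept, on="dept_id")
             .pivot_table(values="cost", index="dept",
                          aggfunc=["sum", "mean", "count"]))
print(cube)   # multidimensional summary for decision support
```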

  1. Conception of Self-Construction Production Scheduling System

    NASA Astrophysics Data System (ADS)

    Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru

    With the rapid innovation of information technology, many production scheduling systems have been developed. However, a great deal of customization to the individual production environment is required, and a large investment for development and maintenance is then indispensable. The direction in which scheduling systems are constructed should therefore change. The final objective of this research is to develop a system that builds itself by extracting scheduling techniques automatically from the daily production scheduling work, so that the investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported in carrying out the scheduling work easily and accurately, without any restriction on scheduling operations. With this extraction mechanism installed, it is easy to introduce a scheduling system without great expense for customization. In this paper, a model for expressing a scheduling problem is first proposed. Then a guideline for extracting and using the scheduling information is shown, and some applied functions based on it are also proposed.

  2. Too Much Information--Too Much Apprehension

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2004-01-01

    The information age, along with the exponential increase in information technology, has brought an unexpected amount of information. The endeavor to sort and extract meaning from this massive amount of data has become a challenging task for many educators and managers. This research is an attempt to collect the most common suggestions to reduce the…

  3. Design and application of PDF model for extracting

    NASA Astrophysics Data System (ADS)

    Xiong, Lei

    2013-07-01

    In order to reduce the contribution process in an editorial department system from two steps to one, this paper advocates that the technology for extracting information from PDF files be transplanted from the PDF reader into the IEEE Xplore contribution system and combined with batch uploading, enabling editors to upload about 1 GB of PDF files in a single batch. Computers then automatically extract the title, authors, addresses, email addresses, abstract and keywords of each paper for later retrieval, saving the editorial department a great deal of labor, material and money.

  4. 77 FR 42548 - Privacy Act of 1974; Department of Transportation, Federal Motor Carrier Safety Administration...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-19

    ... crash and inspection records. Data extract from the FMCSA Motor Carrier Management Information System... Transaction Records: Pursuant to GRS 24, ``Information Technology Operations and Management Records,'' Item 6... information that is created and used by the Department's Pre-Employment Screening program to provide...

  5. Activities of the Remote Sensing Information Sciences Research Group

    NASA Technical Reports Server (NTRS)

    Estes, J. E.; Botkin, D.; Peuquet, D.; Smith, T.; Star, J. L. (Principal Investigator)

    1984-01-01

    Topics in the analysis and processing of remotely sensed data in the areas of vegetation analysis and modelling, georeferenced information systems, machine-assisted information extraction from image data, and artificial intelligence are investigated. Discussions of supporting field data and specific applications of the proposed technologies are also included.

  6. Quantum-entanglement storage and extraction in quantum network node

    NASA Astrophysics Data System (ADS)

    Shan, Zhuoyu; Zhang, Yong

    Quantum computing and quantum communication have become highly popular research topics. Nitrogen-vacancy (NV) centers in diamond have shown great advantages for implementing quantum information processing. The generation of entanglement between NV centers represents a fundamental prerequisite for all quantum information technologies. In this paper, we propose a scheme to realize high-fidelity storage and extraction of quantum entanglement information based on NV centers at room temperature. We store the entanglement information of a pair of entangled photons in a Bell state in the nuclear spins of two NV centers, which entangles the two NV centers. We then illustrate how to extract the entanglement information from the NV centers to prepare on-demand entangled states for optical quantum information processing. The strategy of engineering entanglement demonstrated here may pave the way towards an NV-center-based quantum network.
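
    Schematically, and only as an assumed ideal mapping (the paper's actual protocol is more involved), the storage step can be written as the photonic Bell state being swapped onto the two nuclear spins n1 and n2:

```latex
% Schematic only: ideal transfer of a photonic Bell state onto two
% NV nuclear spins (n1, n2); not the paper's full protocol.
\[
\lvert \Phi^{+} \rangle_{\text{photons}}
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)
\;\xrightarrow{\ \text{storage}\ }\;
\tfrac{1}{\sqrt{2}}\bigl(\lvert 0\rangle_{n_1}\lvert 0\rangle_{n_2}
  + \lvert 1\rangle_{n_1}\lvert 1\rangle_{n_2}\bigr)
\]
```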

  7. The utility of an automated electronic system to monitor and audit transfusion practice.

    PubMed

    Grey, D E; Smith, V; Villanueva, G; Richards, B; Augustson, B; Erber, W N

    2006-05-01

    Transfusion laboratories with transfusion committees have a responsibility to monitor transfusion practice and generate improvements in clinical decision-making and red cell usage. However, this can be problematic and expensive because data cannot be readily extracted from most laboratory information systems. To overcome this problem, we developed and introduced a system to electronically extract and collate extensive amounts of data from two laboratory information systems and to link it with ICD10 clinical codes in a new database using standard information technology. Three data files were generated from the two laboratory information systems, ULTRA (version 3.2) and TM, using standard information technology scripts: patient pre- and post-transfusion haemoglobin; blood group and antibody screen; and cross-match and transfusion data. These data, together with ICD10 codes for surgical cases, were imported into an MS ACCESS database and linked by means of a unique laboratory number. Queries were then run to extract the relevant information, which was processed in Microsoft Excel for graphical presentation. We assessed the utility of this data extraction system to audit transfusion practice in a 600-bed adult tertiary hospital over an 18-month period. A total of 52 MB of data was extracted from the two laboratory information systems for the 18-month period, which, together with 2.0 MB of theatre ICD10 data, enabled case-specific transfusion information to be generated. The audit evaluated 15,992 blood group and antibody screens, 25,344 cross-matched red cell units and 15,455 transfused red cell units. The data evaluated included cross-matched-to-transfused ratios and pre- and post-transfusion haemoglobin levels for a range of clinical diagnoses, and showed significant differences between clinical units and by ICD10 code. This method of electronically extracting large amounts of data and linking them with clinical databases has provided a powerful and sustainable tool for monitoring transfusion practice. It has been successfully used to identify areas requiring education, training and clinical guidance, and allows for comparison with national haemoglobin-based transfusion guidelines.
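
    The linkage the audit relies on can be sketched with pandas: join laboratory extracts to theatre ICD10 data on the unique laboratory number, then compute cross-matched-to-transfused (C:T) ratios per code. The data and column names here are invented:

```python
# Link lab extracts by lab number and roll up C:T ratios per ICD10 code.
import pandas as pd

xmatch = pd.DataFrame({"lab_no": [1, 1, 2, 3], "crossmatched": [2, 1, 4, 2]})
transfused = pd.DataFrame({"lab_no": [1, 2], "transfused": [2, 2]})
icd10 = pd.DataFrame({"lab_no": [1, 2, 3], "icd10": ["I25", "I25", "C18"]})

linked = (xmatch.groupby("lab_no", as_index=False).sum()
                .merge(transfused, on="lab_no", how="left")
                .merge(icd10, on="lab_no"))
linked["transfused"] = linked["transfused"].fillna(0)

ct = linked.groupby("icd10")[["crossmatched", "transfused"]].sum()
ct["C:T"] = ct["crossmatched"] / ct["transfused"].replace(0, float("nan"))
print(ct)   # per-code cross-match to transfusion ratio
```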

  8. Making Semantic Information Work Effectively for Degraded Environments

    DTIC Science & Technology

    2013-06-01

    ...Control Research & Technology Symposium (ICCRTS) held 19-21 June, 2013 in Alexandria, VA. The challenges of effectively managing semantic technologies over disadvantaged or degraded environments are numerous and complex. One of the greatest challenges is the size of raw data. Large... approach mitigates this challenge by performing data reduction through the adoption of format recognition technologies, semantic data extraction, and the...

  9. A Review of Vapor Extraction Technology for Contaminated Soil Remediation

    DTIC Science & Technology

    1993-05-01

  10. Proactive Response to Potential Material Shortages Arising from Environmental Restrictions Using Automatic Discovery and Extraction of Information from Technical Documents

    DTIC Science & Technology

    2012-12-21

    ...material data and other key information in a UIMA environment. In the course of this project, the tools and methods developed were used to extract and... Architecture (UIMA) library from the Apache Software Foundation. Using this architecture, a given document is run through several "annotators" to... material taxonomy developed for the XSB, Inc. Coherent View™ database. In order to integrate this technology into the Java-based UIMA annotation...

  11. Information sciences experiment system

    NASA Technical Reports Server (NTRS)

    Katzberg, Stephen J.; Murray, Nicholas D.; Benz, Harry F.; Bowker, David E.; Hendricks, Herbert D.

    1990-01-01

    The rapid expansion of remote sensing capability over the last two decades will take another major leap forward with the advent of the Earth Observing System (Eos). An approach is presented that will permit experiments and demonstrations in onboard information extraction. The approach is a non-intrusive, eavesdropping mode in which a small amount of spacecraft real estate is allocated to an onboard computation resource. How such an approach allows the evaluation of advanced technology in the space environment, advanced techniques in information extraction for both Earth science and information science studies, direct to user data products, and real-time response to events, all without affecting other on-board instrumentation is discussed.

  12. Data Processing and Text Mining Technologies on Electronic Medical Records: A Review

    PubMed Central

    Sun, Wencheng; Li, Yangyang; Liu, Fang; Fang, Shengqun; Wang, Guoyan

    2018-01-01

    Currently, medical institutes generally use EMRs to record patients' conditions, including diagnostic information, procedures performed, and treatment results. EMRs have been recognized as a valuable resource for large-scale analysis. However, EMR data are diverse, incomplete, redundant, and privacy-sensitive, which makes it difficult to carry out data mining and analysis directly. Therefore, it is necessary to preprocess the source data in order to improve data quality and hence the data mining results. Different types of data require different processing technologies. Most structured data commonly need classic preprocessing technologies, including data cleansing, data integration, data transformation, and data reduction. Semi-structured or unstructured data, such as medical text, contain more health information and require more complex and challenging processing methods. The task of information extraction for medical texts mainly includes NER (named-entity recognition) and RE (relation extraction). This paper focuses on the process of EMR processing and emphatically analyzes the key techniques. In addition, we make an in-depth study of applications developed on the basis of text mining, together with the open challenges and research issues for future work. PMID:29849998
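
    A toy illustration of the two medical-text tasks named above, dictionary-based NER followed by naive co-occurrence relation extraction; the lexicons and relation label are invented:

```python
# Dictionary NER plus same-sentence co-occurrence as a toy RE step.
import re

DRUGS = {"metformin", "aspirin"}
DISEASES = {"diabetes", "stroke"}

def ner(sentence):
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return ([t for t in tokens if t in DRUGS],
            [t for t in tokens if t in DISEASES])

def relations(sentence):
    drugs, diseases = ner(sentence)
    # naive RE: any drug-disease pair in one sentence is a candidate relation
    return [(d, "treats?", s) for d in drugs for s in diseases]

print(relations("Patient with diabetes was started on metformin."))
# [('metformin', 'treats?', 'diabetes')]
```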

  13. Applying high resolution remote sensing image and DEM to falling boulder hazard assessment

    NASA Astrophysics Data System (ADS)

    Huang, Changqing; Shi, Wenzhong; Ng, K. C.

    2005-10-01

    Assessing boulder fall hazard generally requires obtaining boulder information. The conventional method, extensive mapping and surveying fieldwork, is time-consuming, laborious and dangerous. This paper therefore proposes applying image processing technology to extract boulders and assess boulder fall hazard from high resolution remote sensing images. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape and height and the slope and aspect of its position. With this boulder information, the requirements of assessing, preventing and mitigating boulder fall hazards can be satisfied.
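
    The slope and aspect attributes mentioned above can be derived from a DEM grid; a numpy sketch with synthetic elevations and an assumed cell size (the aspect formula follows one common convention):

```python
# Slope and aspect from a DEM via finite differences.
import numpy as np

dem = np.random.rand(50, 50) * 100.0   # synthetic elevations (m)
cell = 10.0                            # grid spacing (m), an assumption

dz_dy, dz_dx = np.gradient(dem, cell)                   # elevation gradients
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))   # steepest descent angle
aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0  # downslope direction
```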

  14. Beyond Information Retrieval: Ways To Provide Content in Context.

    ERIC Educational Resources Information Center

    Wiley, Deborah Lynne

    1998-01-01

    Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…

  15. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

    Image fusion extracts information from multiple images and is more accurate and reliable than using just a single image, since the various images contain different aspects of the measured parts and comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper mainly presents the contents and research methods of image fusion and its status quo at home and abroad, and analyzes its development trend.
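
    At its simplest, pixel-level fusion of two registered source images is a weighted average; a minimal sketch with assumed weights:

```python
# Pixel-wise weighted-average fusion of two aligned images.
import numpy as np

def fuse(img_a, img_b, w=0.5):
    """Blend two registered images; w is the weight of img_a."""
    return w * img_a + (1.0 - w) * img_b

a = np.random.rand(128, 128)   # e.g. visible band
b = np.random.rand(128, 128)   # e.g. infrared band
fused = fuse(a, b, w=0.6)
```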

  16. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review.

    PubMed

    Cresswell, Kathrin; Sheikh, Aziz

    2013-05-01

    Implementations of health information technologies are notoriously difficult, which is due to a range of inter-related technical, social and organizational factors that need to be considered. In the light of an apparent lack of empirically based integrated accounts surrounding these issues, this interpretative review aims to provide an overview and extract potentially generalizable findings across settings. We conducted a systematic search and critique of the empirical literature published between 1997 and 2010. In doing so, we searched a range of medical databases to identify review papers that related to the implementation and adoption of eHealth applications in organizational settings. We qualitatively synthesized this literature extracting data relating to technologies, contexts, stakeholders, and their inter-relationships. From a total body of 121 systematic reviews, we identified 13 systematic reviews encompassing organizational issues surrounding health information technology implementations. By and large, the evidence indicates that there are a range of technical, social and organizational considerations that need to be deliberated when attempting to ensure that technological innovations are useful for both individuals and organizational processes. However, these dimensions are inter-related, requiring a careful balancing act of strategic implementation decisions in order to ensure that unintended consequences resulting from technology introduction do not pose a threat to patients. Organizational issues surrounding technology implementations in healthcare settings are crucially important, but have as yet not received adequate research attention. This may in part be due to the subjective nature of factors, but also due to a lack of coordinated efforts toward more theoretically-informed work. Our findings may be used as the basis for the development of best practice guidelines in this area.

  17. PASTE: patient-centered SMS text tagging in a medication management system.

    PubMed

    Stenner, Shane P; Johnson, Kevin B; Denny, Joshua C

    2012-01-01

    To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The objective of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular-expression-based approaches, or a combination of both. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages.

  18. Program review presentation to Level 1, Interagency Coordination Committee

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Progress in the development of crop inventory technology is reported. Specific topics include the results of a thematic mapper analysis, variable selection studies/early season estimator improvements, the agricultural information system simulator, large unit proportion estimation, and development of common features for multi-satellite information extraction.

  19. A preliminary approach to creating an overview of lactoferrin multi-functionality utilizing a text mining method.

    PubMed

    Shimazaki, Kei-ichi; Kushida, Tatsuya

    2010-06-01

    Lactoferrin is a multi-functional metal-binding glycoprotein that exhibits many biological functions of interest to many researchers from the fields of clinical medicine, dentistry, pharmacology, veterinary medicine, nutrition and milk science. To date, a number of academic reports concerning the biological activities of lactoferrin have been published and are easily accessible through public data repositories. However, as the literature is expanding daily, this presents challenges in understanding the larger picture of lactoferrin function and mechanisms. In order to overcome the "analysis paralysis" associated with lactoferrin information, we attempted to apply a text mining method to the accumulated lactoferrin literature. To this end, we used the information extraction system GENPAC (provided by Nalapro Technologies Inc., Tokyo). This information extraction system uses natural language processing and text mining technology. This system analyzes the sentences and titles from abstracts stored in the PubMed database, and can automatically extract binary relations that consist of interactions between genes/proteins, chemicals and diseases/functions. We expect that such information visualization analysis will be useful in determining novel relationships among a multitude of lactoferrin functions and mechanisms. We have demonstrated the utilization of this method to find pathways of lactoferrin participation in neovascularization, Helicobacter pylori attack on gastric mucosa, atopic dermatitis and lipid metabolism.

  20. Information Retrieval and Text Mining Technologies for Chemistry.

    PubMed

    Krallinger, Martin; Rabal, Obdulia; Lourenço, Anália; Oyarzabal, Julen; Valencia, Alfonso

    2017-06-28

    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, more particularly CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping the extracted chemical names into chemical structures and their subsequent annotation together with text mining applications for linking chemistry with biological information are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.

  1. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to its advantages such as short revisit period, large scale and rich information. Meanwhile, road extraction is an important field in the applications of high resolution remote sensing images. An intelligent and automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average completeness of the centerline extraction algorithm is 97.98%, the average correctness is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
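
    The morphological post-processing steps can be sketched with scikit-image in place of the authors' own implementation; the mask and parameters below are synthetic:

```python
# Clean a binary road mask, then thin it to a one-pixel centerline.
import numpy as np
from skimage.morphology import binary_closing, remove_small_objects, skeletonize

road_mask = np.zeros((100, 100), dtype=bool)   # stand-in for the SFCM output
road_mask[48:53, 5:95] = True                  # a synthetic road stripe

cleaned = binary_closing(road_mask)                    # fill small gaps
cleaned = remove_small_objects(cleaned, min_size=20)   # drop noise blobs
centerline = skeletonize(cleaned)                      # one-pixel-wide centerline
```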

  2. Rural hospital information technology implementation for safety and quality improvement: lessons learned.

    PubMed

    Tietze, Mari F; Williams, Josie; Galimbertti, Marisa

    2009-01-01

    This grant involved a hospital collaborative for excellence using information technology over a 3-year period. The project activities focused on improving patient care safety and quality in Southern rural and small community hospitals through the use of technology and education. The technology component of the design involved the implementation of a Web-based business analytic tool that allows hospitals to view data, create reports, and analyze their safety and quality data. Through a preimplementation and postimplementation comparative design, the focus of the implementation team was twofold: to recruit participant hospitals and to implement the technology at each of the 66 hospital sites. Rural hospitals were defined as acute care hospitals located in a county with a population of less than 100,000 or as state-administered Critical Access Hospitals, making the total study population target 188 hospitals. Lessons learned during the information technology implementation at these hospitals reflect the unique culture, financial characteristics, organizational structure, and technology architecture of rural hospitals. Specific steps such as recruitment, information technology assessment, conference calls for project planning, data file extraction and transfer, technology training, use of e-mail, use of telephones, personnel management, and engaging information technology vendors were found to vary greatly among hospitals.

  3. Data base management system configuration specification. [computer storage devices

    NASA Technical Reports Server (NTRS)

    Neiers, J. W.

    1979-01-01

    The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.

  4. Automating concept identification in the electronic medical record: an experiment in extracting dosage information.

    PubMed Central

    Evans, D. A.; Brownlow, N. D.; Hersh, W. R.; Campbell, E. M.

    1996-01-01

    We discuss the development and evaluation of an automated procedure for extracting drug-dosage information from clinical narratives. The process was developed rapidly using existing technology and resources, including categories of terms from UMLS96. Evaluations over a large training set and a smaller test set of medical records demonstrate an approximately 80% rate of exact and partial matches on target phrases, with few false positives and a modest rate of false negatives. The results suggest a strategy for automating general concept identification in electronic medical records. PMID:8947694
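
    A hedged illustration of the kind of pattern-plus-lexicon rule such a procedure might use (not the original system's rules): match drug, dose, route and frequency in one pass:

```python
# Toy drug-dosage extractor built from a single named-group pattern.
import re

PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+"                       # drug name
    r"(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g)\s+"  # dose and unit
    r"(?P<route>po|iv|im)\s+"                       # route
    r"(?P<freq>qd|bid|tid|qid)", re.IGNORECASE)     # frequency

m = PATTERN.search("Start lisinopril 10 mg po qd for hypertension.")
if m:
    print(m.groupdict())
# {'drug': 'lisinopril', 'dose': '10', 'unit': 'mg', 'route': 'po', 'freq': 'qd'}
```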

  5. The value of information technology in healthcare.

    PubMed

    Skinner, Richard I

    2003-01-01

    Not only will healthcare investments in information technology (IT) continue, they are sure to increase. Just as other industries learned over time how to extract more value from IT investments, so too will the healthcare industry, and for the same reason: because they must. This article explores the types of business value IT has generated in other industries, what value it can generate in healthcare, and some of the barriers encountered in achieving that value. The article ends with management principles for IT investment.

  6. Definition of information technology architectures for continuous data management and medical device integration in diabetes.

    PubMed

    Hernando, M Elena; Pascual, Mario; Salvador, Carlos H; García-Sáez, Gema; Rodríguez-Herrero, Agustín; Martínez-Sarriegui, Iñaki; Gómez, Enrique J

    2008-09-01

    The growing availability of continuous data from medical devices in diabetes management makes it crucial to define novel information technology architectures for efficient data storage, data transmission, and data visualization. The new paradigm of care demands the sharing of information in interoperable systems as the only way to support patient care in a continuum of care scenario. The technological platforms should support all the services required by the actors involved in the care process, located in different scenarios and managing diverse information for different purposes. This article presents basic criteria for defining flexible and adaptive architectures that are capable of interoperating with external systems, and integrating medical devices and decision support tools to extract all the relevant knowledge to support diabetes care.

  7. An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into SDI

    NASA Astrophysics Data System (ADS)

    Deng, D.-P.; Lemmens, R.

    2011-08-01

    The Web is changing the way people share and communicate information because of the emergence of various Web technologies, which enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information, but due to the different production methods, UGGC often cannot fit into geographic information models; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study conducts an ontology-based process to bridge this semantic gap. The process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study implements the process on Twitter messages relevant to the Japan earthquake disaster. Using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries on disaster relief information.
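
    The Formalization/Deployment steps can be sketched with rdflib: turning one extracted tweet fact into RDF triples. The vocabulary URIs below are placeholders, not an actual GeoSPARQL ontology binding:

```python
# Formalize one extracted tweet fact as RDF triples (placeholder vocabulary).
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/disaster#")
g = Graph()

tweet = URIRef(EX["tweet42"])
g.add((tweet, EX.reports, EX.ShelterRequest))
g.add((tweet, EX.placeName, Literal("Sendai")))
g.add((tweet, EX.lat, Literal(38.2682)))
g.add((tweet, EX.long, Literal(140.8694)))

print(g.serialize(format="turtle"))   # ready for loading into a triple store
```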

  8. Concept recognition for extracting protein interaction relations from biomedical text

    PubMed Central

    Baumgartner, William A; Lu, Zhiyong; Johnson, Helen L; Caporaso, J Gregory; Paquette, Jesse; Lindemann, Anna; White, Elizabeth K; Medvedeva, Olga; Cohen, K Bretonnel; Hunter, Lawrence

    2008-01-01

    Background: Reliable information extraction applications have been a long-sought goal of the biomedical text mining community, a goal that, if reached, would provide valuable tools to benchside biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. We present an integrated approach to concept recognition in biomedical text. Concept recognition provides key information that has been largely missing from previous biomedical information extraction efforts, namely direct links to well-defined knowledge resources that explicitly cement the concept's semantics. The BioCreative II tasks discussed in this special issue have provided a unique opportunity to demonstrate the effectiveness of concept recognition in the field of biomedical language processing. Results: Through the modular construction of a protein interaction relation extraction system, we present several use cases of concept recognition in biomedical text, and relate these use cases to potential uses by the benchside biologist. Conclusion: Current information extraction technologies are approaching performance standards at which concept recognition can begin to deliver high-quality data to the benchside biologist. Our system is available as part of the BioCreative Meta-Server project and on the internet. PMID:18834500

  9. Using Process Redesign and Information Technology to Improve Procurement

    DTIC Science & Technology

    1994-04-01

    ...contractor. Many large-volume contractors have automated order processing tied to accounting, manufacturing, and shipping subsystems. Currently... the contractor must receive the mailed order, analyze it, extract pertinent information, and enter that information into the automated order processing system. Almost all orders for small purchases are unilateral documents that do not require acceptance or acknowledgment by the contractor. For...

  10. Sugarcane Crop Extraction Using Object-Oriented Method from ZY-3 High Resolution Satellite TLC Image

    NASA Astrophysics Data System (ADS)

    Luo, H.; Ling, Z. Y.; Shao, G. Z.; Huang, Y.; He, Y. Q.; Ning, W. Y.; Zhong, Z.

    2018-04-01

    Sugarcane is one of the most important crops in Guangxi, China. With the development of satellite remote sensing technology, more remotely sensed images can be used for monitoring the sugarcane crop. With its Three Line Camera (TLC) images, wide coverage and stereoscopic mapping ability, the Chinese ZY-3 high resolution stereoscopic mapping satellite is useful for attaining more information for sugarcane crop monitoring, such as the spectral, shape and texture differences between the forward, nadir and backward images. A digital surface model (DSM) derived from ZY-3 TLC images is also able to provide height information for the sugarcane crop. In this study, we attempt to extract the sugarcane crop from ZY-3 images acquired in the harvest period. Ortho-rectified TLC images, a fused image and the DSM are processed for the extraction, and an object-oriented method is used for image segmentation, example collection, and feature extraction. The results of our study show that, with the help of ZY-3 TLC images, sugarcane crop information at harvest time can be automatically extracted, with an overall accuracy of about 85.3%.

  11. End-User Evaluations of Semantic Web Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCool, Rob; Cowell, Andrew J.; Thurman, David A.

    Stanford University's Knowledge Systems Laboratory (KSL) is working in partnership with Battelle Memorial Institute and IBM Watson Research Center to develop a suite of technologies for information extraction, knowledge representation & reasoning, and human-information interaction, collectively entitled 'Knowledge Associates for Novel Intelligence' (KANI). We have developed an integrated analytic environment composed of a collection of analyst associates, software components that aid the user at different stages of the information analysis process. An important part of our participatory design process has been to ensure that our technologies and designs are tightly integrated with the needs and requirements of our end users. To this end, we perform a sequence of evaluations towards the end of the development process to ensure that the technologies are both functional and usable. This paper reports on that process.

  12. Extracting ballistic forensic intelligence: microstamped firearms deliver data for illegal firearm traffic mapping: technology, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Ohar, Orest P.; Lizotte, Todd E.

    2009-08-01

    Over the years law enforcement has become increasingly complex, driving a need for a better level of organization of knowledge within policing. The use of COMPSTAT and other Geospatial Information Systems (GIS) for crime mapping and analysis has provided opportunities for careful analysis of crime trends. By identifying hotspots within communities, data collected and entered into these systems can be analyzed to determine how, when and where law enforcement assets can be deployed efficiently. This paper will introduce in detail a powerful new law enforcement and forensic investigative technology called Intentional Firearm Microstamping (IFM). Once embedded and deployed in firearms, IFM will provide data for identifying and tracking the sources of illegally trafficked firearms within the borders of the United States and across the border with Mexico. Intentional Firearm Microstamping is a micro code technology that leverages a laser-based micromachining process to form optimally located, microscopic "intentional structures and marks" on components within a firearm. When the firearm is fired, these IFM structures transfer an identifying tracking code onto the expended cartridge that is ejected from the firearm. Intentional Firearm Microstamped structures are laser-micromachined alphanumeric and encoded geometric tracking numbers, linked to the serial number of the firearm. IFM codes can be extracted quickly and used without the need to recover the firearm. Furthermore, through the process of extraction, IFM codes can be quantitatively verified to a higher level of certainty than traditional forensic matching techniques. IFM provides critical intelligence capable of identifying straw purchasers, trafficking routes and networks across state borders, and can be used on firearms illegally exported across international borders. This paper will outline IFM applications for supporting intelligence-led policing initiatives and IFM implementation strategies, describe how IFM overcomes a firearm's stochastic properties, explain the code extraction technologies that can be used by forensic investigators, and discuss the applications where the extracted data will benefit geospatial information systems for forensic intelligence.

  13. PASTE: patient-centered SMS text tagging in a medication management system

    PubMed Central

    Johnson, Kevin B; Denny, Joshua C

    2011-01-01

    Objective To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Design Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The object of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. Measurements A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Results Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Conclusion Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both approaches. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages. PMID:21984605
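
    To give a feel for the lexicon-based tagging step described above, here is a deliberately simplified Python sketch. The two small lexicons are invented stand-ins for PASTE's custom lexicons and knowledge sources, and the tokenization is far cruder than the real system's NLP pipeline.

        # Toy medication-message tagger; lexicons are illustrative only.
        MEDICATIONS = {"metformin", "lisinopril", "ibuprofen"}
        ACTION_TERMS = {"took", "take", "taken", "missed", "skipped", "refill"}

        def tag_message(message):
            """Tag medication names and action terms in one SMS message."""
            tokens = [t.strip(".,!?").lower() for t in message.split()]
            return {
                "medications": [t for t in tokens if t in MEDICATIONS],
                "actions": [t for t in tokens if t in ACTION_TERMS],
            }

        print(tag_message("Took metformin at 8am, missed lisinopril"))
        # -> {'medications': ['metformin', 'lisinopril'], 'actions': ['took', 'missed']}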

  14. NASA Earth Resources Survey Symposium. Volume 1-B: Geology, Information Systems and Services

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A symposium was conducted on the practical applications of earth resources survey technology including utilization and results of data from programs involving LANDSAT, the Skylab earth resources experiment package, and aircraft. Topics discussed include geological structure, landform surveys, energy and extractive resources, and information systems and services.

  15. Training Students to Extract Value from Big Data: Summary of a Workshop

    ERIC Educational Resources Information Center

    Mellody, Maureen

    2014-01-01

    As the availability of high-throughput data-collection technologies, such as information-sensing mobile devices, remote sensing, internet log records, and wireless sensor networks has grown, science, engineering, and business have rapidly transitioned from striving to develop information from scant data to a situation in which the challenge is now…

  16. Semantic extraction and processing of medical records for patient-oriented visual index

    NASA Astrophysics Data System (ADS)

    Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo

    2012-02-01

    To gain a comprehensive and complete understanding of a patient's healthcare status, doctors need to search for the patient's medical records in different healthcare information systems, such as PACS, RIS, HIS and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. To solve this kind of problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve the patient's examination information stored in each system through a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. The approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to the related organs/parts of the 3D human model to create the visual index. We tested the procedures on 559 radiological reports containing 853 focuses and successfully extracted 828 of them, a focus extraction success rate of about 97.1%.

  17. Government Information Locator Service (GILS). Draft report to the Information Infrastructure Task Force

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a draft report on the Government Information Locator Service (GILS) to the National Information Infrastructure (NII) task force. GILS is designed to take advantage of internetworking technology known as client-server architecture which allows information to be distributed among multiple independent information servers. Two appendices are provided -- (1) A glossary of related terminology and (2) extracts from a draft GILS profile for the use of the American National Standard Information Retrieval Application Service Definition and Protocol Specification for Library Applications.

  18. Building a diabetes screening population data repository using electronic medical records.

    PubMed

    Tuan, Wen-Jan; Sheehy, Ann M; Smith, Maureen A

    2011-05-01

    There has been a rapid advancement of information technology in the area of clinical and population health data management since 2000. However, with the fast growth of electronic medical records (EMRs) and the increasing complexity of information systems, it has become challenging for researchers to effectively access, locate, extract, and analyze information critical to their research. This article introduces an outpatient encounter data framework designed to construct an EMR-based population data repository for diabetes screening research. The outpatient encounter data framework is developed on a hybrid data structure of entity-attribute-value models, dimensional models, and relational models. This design preserves a small number of subject-specific tables essential to key clinical constructs in the data repository. It enables atomic information to be maintained in a transparent and meaningful way for researchers and health care practitioners who need to access the data, while still achieving the same performance level as conventional data warehouse models. A six-layer information processing strategy is developed to extract and transform EMRs into the research data repository. The data structure also complies with both Health Insurance Portability and Accountability Act regulations and the institutional review board's requirements. Although developed for diabetes screening research, the design of the outpatient encounter data framework is suitable for other types of health services research. It may also provide organizations a tool to improve health care quality and efficiency, consistent with the "meaningful use" objectives of the Health Information Technology for Economic and Clinical Health Act. © 2011 Diabetes Technology Society.
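
    A minimal sketch of the entity-attribute-value (EAV) side of such a hybrid design, using Python's built-in sqlite3. The table and column names are invented for illustration and do not reflect the article's actual schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE observation (
            patient_id INTEGER,   -- entity
            attribute  TEXT,      -- e.g. lab test or vital sign name
            value      TEXT,      -- stored atomically, typed on read
            obs_date   TEXT)""")

        # New kinds of clinical facts need no schema change, only new rows.
        rows = [(1, "hba1c", "6.9", "2010-03-01"),
                (1, "fasting_glucose", "118", "2010-03-01"),
                (2, "hba1c", "5.4", "2010-04-12")]
        conn.executemany("INSERT INTO observation VALUES (?, ?, ?, ?)", rows)

        # Pivot back to a researcher-friendly view on demand.
        for row in conn.execute(
                "SELECT patient_id, value FROM observation WHERE attribute = 'hba1c'"):
            print(row)   # (1, '6.9') then (2, '5.4')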

  19. Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques

    NASA Technical Reports Server (NTRS)

    Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val

    2001-01-01

    Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential for ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly areas undergoing rapid expansion. Currently, the US Department of Transportation (USDOT) expends great effort on field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will assist the USDOT in assessing remote sensing as a new resource in infrastructure studies.

  20. NASA's Advanced Information Systems Technology (AIST) Program: Advanced Concepts and Disruptive Technologies

    NASA Astrophysics Data System (ADS)

    Little, M. M.; Moe, K.; Komar, G.

    2014-12-01

    NASA's Earth Science Technology Office (ESTO) manages a wide range of information technology projects under the Advanced Information Systems Technology (AIST) Program. The AIST Program aims to support all phases of NASA's Earth Science program with the goal of enabling new observations and information products, increasing the accessibility and use of Earth observations, and reducing the risk and cost of satellite- and ground-based information systems. Recent initiatives feature computational technologies to improve the information extracted from data streams or model outputs, and researchers' tools for Big Data analytics. Data-centric technologies enable research communities to facilitate collaboration and increase the speed with which results are produced and published. In the future, NASA anticipates that more small satellites (e.g., CubeSats), mobile drones and ground-based in-situ sensors will advance the state of the art in how scientific observations are performed, given the flexibility, cost and deployment advantages of new operations technologies. This paper reviews the success of the program and the lessons learned. Infusion of these technologies is challenging, and the paper discusses the obstacles and strategies for adoption by earth science research and application efforts. It also describes alternative perspectives on the future program direction and on realizing the value in the steps that transform observations from sensors to data, to information, and to knowledge, namely: sensor measurement concepts development; data acquisition and management; data product generation; and data exploitation for science and applications.

  1. Research on the use of data fusion technology to evaluate the state of electromechanical equipment

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    2018-04-01

    Aiming at the problems of heterogeneous test information modes and the coexistence of quantitative and qualitative information in the state evaluation of electromechanical equipment, this paper proposes the use of data fusion technology to evaluate the state of electromechanical equipment. The paper introduces the state evaluation process for mechanical and electrical equipment in detail, uses D-S evidence theory to fuse the decision-making layers of the state evaluation, and carries out simulation tests. The simulation results show that it is feasible and effective to apply data fusion technology to the state evaluation of electromechanical equipment. After the multiple pieces of decision-making information provided by different evaluation methods are fused and the useful information is extracted, the fuzziness of the judgment can be reduced and the credibility of the state evaluation improved.
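
    As a concrete illustration of the D-S evidence theory fusion step, the following Python sketch implements Dempster's rule of combination for two mass functions over equipment states. The sensor names and mass values are invented for the example, not taken from the paper.

        def dempster_combine(m1, m2):
            """Dempster's rule of combination for two mass functions,
            each a dict mapping frozenset hypotheses to belief mass."""
            combined, conflict = {}, 0.0
            for a, ma in m1.items():
                for b, mb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + ma * mb
                    else:
                        conflict += ma * mb   # mass assigned to disjoint hypotheses
            if conflict >= 1.0:
                raise ValueError("totally conflicting evidence")
            return {h: v / (1.0 - conflict) for h, v in combined.items()}

        # Hypothetical masses from a vibration-based and a temperature-based
        # evaluation over the frame of discernment {normal, fault}.
        m_vib  = {frozenset({"normal"}): 0.6, frozenset({"fault"}): 0.3,
                  frozenset({"normal", "fault"}): 0.1}
        m_temp = {frozenset({"normal"}): 0.7, frozenset({"fault"}): 0.2,
                  frozenset({"normal", "fault"}): 0.1}
        print(dempster_combine(m_vib, m_temp))  # fused masses; fuzziness reduced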

  2. Managing diabetes mellitus using information technology: a systematic review.

    PubMed

    Riazi, H; Larijani, B; Langarizadeh, M; Shahmoradi, L

    2015-01-01

    To review published evidence about the use of information technology interventions in diabetes care and determine their effects on managing diabetes. Systematic review of information technology based interventions. MEDLINE®/PubMed was electronically searched for articles published between 2004/07/01 and 2014/07/01. A comprehensive, electronic search strategy was used to identify eligible articles. Inclusion criteria were defined based on the type of study and the effect of the information technology based intervention on glucose control and other clinical outcomes in diabetic patients. Studies must have used a controlled design to evaluate an information technology based intervention. A total of 3613 articles were identified based on the searches conducted in MEDLINE from PubMed. After excluding duplicates (n = 6), we screened the titles and abstracts of 3607 articles based on the inclusion criteria. The remaining articles matching the inclusion criteria (n = 277) were reviewed in full text, and 210 articles were excluded based on the exclusion criteria. Finally, 67 articles complied with our eligibility criteria and were included in this study. The effects of the various information technology based interventions on clinical outcomes in diabetic patients, as extracted from the selected articles, are described and compared. Information technology based interventions combined with usual care are associated with improved glycemic control, with differing efficacy on various clinical outcomes in diabetic patients.

  3. Is There a European View on Health Economic Evaluations? Results from a Synopsis of Methodological Guidelines Used in the EUnetHTA Partner Countries.

    PubMed

    Heintz, Emelie; Gerber-Grote, Andreas; Ghabri, Salah; Hamers, Francoise F; Rupel, Valentina Prevolnik; Slabe-Erker, Renata; Davidson, Thomas

    2016-01-01

    The objectives of this study were to review current methodological guidelines for economic evaluations of all types of technologies in the 33 countries with organizations involved in the European Network for Health Technology Assessment (EUnetHTA), and to provide a general framework for economic evaluation at a European level. Methodological guidelines for health economic evaluations used by EUnetHTA partners were collected through a survey. Information from each guideline was extracted using a pre-tested extraction template. On the basis of the extracted information, a summary describing the methods used by the EUnetHTA countries was written for each methodological item. General recommendations were formulated for methodological issues where the guidelines of the EUnetHTA partners were in agreement or where the usefulness of economic evaluations may be increased by presenting the results in a specific way. At least one contact person from all 33 EUnetHTA countries (100 %) responded to the survey. In total, the review included 51 guidelines, representing 25 countries (eight countries had no methodological guideline for health economic evaluations). On the basis of the results of the extracted information from all 51 guidelines, EUnetHTA issued ten main recommendations for health economic evaluations. The presented review of methodological guidelines for health economic evaluations and the consequent recommendations will hopefully improve the comparability, transferability and overall usefulness of economic evaluations performed within EUnetHTA. Nevertheless, there are still methodological issues that need to be investigated further.

  4. [Application of ultrasound countercurrent extraction in patents of traditional Chinese medicine].

    PubMed

    Miao, Yan-ni; Wu, Bin; Yue, Xue-lian

    2015-07-01

    The patent information on ultrasound countercurrent extraction used in traditional Chinese medicine is analyzed in this paper, using samples from the Derwent World Patents Index (DWPI) and the Chinese Patent Abstracts Database (CNABS). The application of ultrasound countercurrent extraction is discussed in terms of patent applicants, the annual distribution of filings, pharmaceutical raw materials and other aspects, and the technical parameters published in the patents, such as material crushing, extraction solvent, extraction time and temperature, extraction equipment and ultrasonic frequency, are analyzed in depth. Through the above research, the various technical parameters of ultrasound countercurrent extraction used in traditional Chinese medicine are summarized. The conclusions of this analysis can be used to discover technical advantages, optimize extraction conditions, and provide a reference for innovation in the extraction technology of traditional Chinese medicine.

  5. Patent information retrieval: approaching a method and analysing nanotechnology patent collaborations.

    PubMed

    Ozcan, Sercan; Islam, Nazrul

    2017-01-01

    Many challenges still remain in the processing of explicit technological knowledge documents such as patents. Given the limitations and drawbacks of the existing approaches, this research sets out to develop an improved method for searching patent databases and extracting patent information, to increase the efficiency and reliability of the nanotechnology patent information retrieval process and to empirically analyse patent collaboration. A tech-mining method was applied and the subsequent analysis was performed using Thomson Data Analyser software. The findings show that nations such as Korea and Japan are highly collaborative in sharing technological knowledge across academic and corporate organisations within their national boundaries, and that China presents, in some cases, a great illustration of effective patent collaboration and co-inventorship. This study also analyses key patent strengths by country, organisation and technology.

  6. New approaches to health promotion and informatics education using Internet in the Czech Republic.

    PubMed

    Zvárová, J

    2005-01-01

    The paper describes current information technology skills in the Czech Republic. It focuses on informatics education using the Internet, the ECDL concept, and the links between computer literacy among health care professionals and the quality of health care. Everyone understands that the main source of wealth of any nation is information management and the efficient transformation of information into knowledge. Completely new decisive factors are appearing in the economics of the near future, based on the circulation and exchange of information. It is clear that modern health care cannot be built without information and communication technologies. We discuss several approaches to contributing to topics of the information society in health care, namely the role of the electronic health record, structured information, extraction of information from free medical texts, and sharing the knowledge stored in medical guidelines.

  7. [Identification of green tea brand based on hyperspectra imaging technology].

    PubMed

    Zhang, Hai-Liang; Liu, Xiao-Li; Zhu, Feng-Le; He, Yong

    2014-05-01

    Hyperspectral imaging technology was developed to identify famous green teas of different brands based on the fusion of PCA information and image information. First, 512 spectral images of six brands of famous green tea in the 380-1023 nm wavelength range were collected, and principal component analysis (PCA) was performed with the goal of selecting two characteristic bands (545 and 611 nm) that could potentially be used for the classification system. Then, 12 gray level co-occurrence matrix (GLCM) features (i.e., mean, covariance, homogeneity, energy, contrast, correlation, entropy, inverse gap, contrast, difference from the second-order and autocorrelation) based on statistical moments were extracted from each characteristic band image. Finally, the 12 texture features and three PCA spectral characteristics extracted for each green tea sample were used as the input to an LS-SVM classifier. Experimental results showed a discrimination rate of 100% on the prediction set. Receiver operating characteristic (ROC) curve assessment methods were used to evaluate the LS-SVM classification algorithm. The overall results sufficiently demonstrate that hyperspectral imaging technology can be used to classify green tea.
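
    For readers unfamiliar with GLCM texture features, here is a rough Python sketch of the extraction step using scikit-image's graycomatrix/graycoprops. The stand-in band image, offset choices and feature subset are illustrative, not the paper's exact configuration.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(band, distances=(1,), angles=(0, np.pi / 2)):
            """Compute a few GLCM texture features from one characteristic-band
            image (8-bit grayscale), averaged over the given offsets."""
            glcm = graycomatrix(band, distances=list(distances),
                                angles=list(angles), levels=256,
                                symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return {p: float(graycoprops(glcm, p).mean()) for p in props}

        # Random stand-in for the 545 nm characteristic-band image.
        band_545nm = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        print(glcm_features(band_545nm))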

  8. HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records

    PubMed Central

    Aggarwal, Anshul; Garhwal, Sunita

    2018-01-01

    Objectives: One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of them are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, the lack of structure means that the data are unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. Methods: A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. Results: The HEDEA system is operational, covering a large set of formats, to extract and analyse health information. Conclusions: This tool can be used to generate analysis reports and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes. PMID:29770248
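
    The regular-expression approach can be pictured with a small Python sketch. The field names and report formats below are hypothetical illustrations, not HEDEA's actual patterns.

        import re

        # Illustrative patterns for a semi-structured lab report.
        PATTERNS = {
            "patient_name": re.compile(r"(?:Patient|Name)\s*[:\-]\s*([A-Za-z. ]+)"),
            "age": re.compile(r"\bAge\s*[:\-]\s*(\d{1,3})"),
            "glucose_mg_dl": re.compile(r"Glucose\s*[:\-]\s*(\d+(?:\.\d+)?)\s*mg/dL", re.I),
        }

        def extract_fields(report):
            """Return the first match for each field, or None when absent."""
            out = {}
            for field, pattern in PATTERNS.items():
                m = pattern.search(report)
                out[field] = m.group(1).strip() if m else None
            return out

        report = "Name: A. Kumar\nAge: 52\nFasting Glucose: 104 mg/dL"
        print(extract_fields(report))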

  9. HEDEA: A Python Tool for Extracting and Analysing Semi-structured Information from Medical Records.

    PubMed

    Aggarwal, Anshul; Garhwal, Sunita; Kumar, Ajay

    2018-04-01

    One of the most important functions for a medical practitioner while treating a patient is to study the patient's complete medical history by going through all records, from test results to doctor's notes. With the increasing use of technology in medicine, these records are mostly digital, alleviating the problem of looking through a stack of papers, which are easily misplaced, but some of them are in an unstructured form. Large parts of clinical reports are in written text form and are tedious to use directly without appropriate pre-processing. In medical research, such health records may be a good, convenient source of medical data; however, the lack of structure means that the data are unfit for statistical evaluation. In this paper, we introduce a system to extract, store, retrieve, and analyse information from health records, with a focus on the Indian healthcare scene. A Python-based tool, Healthcare Data Extraction and Analysis (HEDEA), has been designed to extract structured information from various medical records using a regular expression-based approach. The HEDEA system is operational, covering a large set of formats, to extract and analyse health information. This tool can be used to generate analysis reports and charts using the central database. This information is only provided after prior approval has been received from the patient for medical research purposes.

  10. Authoritative knowledge, the technological imperative and women's responses to prenatal diagnostic technologies.

    PubMed

    McCoyd, Judith L M

    2010-12-01

    Theories about authoritative knowledge (AK) and the technological imperative have received varying levels of interest in anthropological, feminist and science and technology studies. Although the anthropological literature abounds with empirical considerations of authoritative knowledge, few have considered both theories through an empirical, inductive lens. Data extracted from an earlier study of 30 women's responses to termination for fetal anomaly are reanalyzed to consider the women's views of, and responses to, prenatal diagnostic technologies (PNDTs). Findings indicate that a small minority embrace the societal portrayal of technology as univalently positive, while the majority have nuanced and ambivalent responses to the use of PNDTs. Further, the interface of authoritative knowledge and the technological imperative suggests that AK derives not only from medical provider status and technology use, but also from the adequacy and trustworthiness of the information. The issue of timing and uncertainty of the information also are interrogated for their impact on women's lives and what that can illuminate about the theories of AK and the technological imperative.

  11. Defense Forensic Enterprise: Assessment and Status Report Personnel Accounting Extract

    DTIC Science & Technology

    2013-12-01


  12. Ethics in biotechnology and biosecurity.

    PubMed

    Jameel, S

    2011-01-01

    Great advances in technology produce unique challenges. Every technology also has a dual use, which needs to be understood and managed to extract maximum benefit for mankind and the development of civilization. The achievements of physicists in the mid-20th century resulted in nuclear technology, which gave us the destructive power of the atomic bomb as well as a source of energy. Towards the later part of the 20th century, information technology empowered us with fast, easy and cheap access to information, but also led to intrusions into our privacy. Today, biotechnology is yielding life-saving and life-enhancing advances at a fast pace. But the same tools can also give rise to fiercely destructive forces. How do we construct a security regime for biology? What have we learnt from the management of earlier technological advances? How much information should be in the public domain? Should biology, or more broadly science, be regulated? Who should regulate it? These and many other ethical questions need to be addressed.

  13. A three-dimensional laser vibration measurement technology realized on five laser beam and its calibration

    NASA Astrophysics Data System (ADS)

    Li, Lu-Ke; Zhang, Shen-Feng

    2018-03-01

    This paper puts forward a technique for obtaining three-dimensional vibration information about a vibrating object by means of five He-Ne laser beams, and, with the help of a three-axis sensor, measures and calibrates the three-dimensional laser vibrometer developed with this technique. The technique is based on the Doppler interference principle and signal demodulation: it obtains the vibration information of the object and, through algorithmic processing, extracts the three-dimensional vibration information of objects in space. It can also perform angular calibration of the five beams in space, which avoids the effects of mechanical installation error and greatly improves measurement accuracy. With the help of a B&K 4527 contact three-axis sensor, the three-dimensional laser vibrometer is measured and calibrated, which ensures the accuracy of the measurement data. The advantages and disadvantages of contact and non-contact sensors are summarized, and future development trends of the sensor industry are analyzed.

  14. Extraction of actionable information from crowdsourced disaster data.

    PubMed

    Kiatpanont, Rungsun; Tanlamai, Uthai; Chongstitvatana, Prabhas

    Natural disasters cause enormous damage to countries all over the world. To deal with these common problems, different activities are required for disaster management at each phase of the crisis. There are three groups of activities as follows: (1) make sense of the situation and determine how best to deal with it, (2) deploy the necessary resources, and (3) harmonize as many parties as possible, using the most effective communication channels. Current technological improvements and developments now enable people to act as real-time information sources. As a result, inundation with crowdsourced data poses a real challenge for a disaster manager. The problem is how to extract the valuable information from a gigantic data pool in the shortest possible time so that the information is still useful and actionable. This research proposed an actionable-data-extraction process to deal with the challenge. Twitter was selected as a test case because messages posted on Twitter are publicly available. Hashtags, an easy and very efficient labeling technique, were also used to differentiate information. A quantitative approach to extracting useful information from the tweets was supported and verified by interviews with disaster managers from many leading organizations in Thailand to understand their missions. Classification of the information extracted from the collected tweets was first performed manually, and the tweets were then used to train a machine learning algorithm to classify future tweets. One particularly useful, significant, and primary category was the request for help. The support vector machine algorithm was used to validate the results of the extraction process on 13,696 sample tweets, with over 74 percent accuracy. The results confirmed that the machine learning technique could significantly and practically assist with disaster management by dealing with crowdsourced data.
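
    A minimal scikit-learn sketch of the classify-then-validate idea described above. The tweets, labels and model settings are invented placeholders rather than the study's Thai-language data or exact SVM configuration.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Hypothetical hand-labeled tweets standing in for the 13,696 samples.
        tweets = ["Need drinking water and food at the riverside shelter",
                  "Main road to the airport is flooded, avoid it",
                  "Thoughts with everyone affected by the flood",
                  "Urgent: family trapped on a roof, please send boats"]
        labels = ["request_help", "situation_report", "other", "request_help"]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        model.fit(tweets, labels)          # train on manually classified tweets
        print(model.predict(["need rescue, water rising fast"]))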

  15. Application of multispectral remote sensing techniques for dismissed mine sites monitoring and rehabilitation

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Serranti, Silvia

    2007-09-01

    Mining activities, especially those operated in the open air (open pit), have a deep impact on their surroundings. Such an impact, and the related problems, are directly related to the correct operation of the activities, and usually interact strongly with the environment. The impact can be mainly related to the following issues: i) high volumes of handled material, ii) generation of dust, noise and vibrations, iii) water pollution, iv) visual impact and, finally, v) recovery of the mining area at the end of exploitation activities. All these aspects are very important and must be properly evaluated and monitored. Environmental impact control is usually carried out during and after the end of the mining activities, adopting methods based on the detection, collection and analysis of specific environmental indicators, and on their comparison with reference threshold values stated by official regulations. The aim of the study was to investigate, and critically evaluate, the problems related to the development of an integrated set of procedures based on the collection and analysis of remotely sensed data in order to evaluate the effect of rehabilitation of land contaminated by extractive industry activities. Starting from the results of these analyses, monitoring and registration of the environmental impact of such operations was performed through the application and integration of modern information technologies, such as Earth Observation (EO), with Geographic Information Systems (GIS). The study was developed with reference to different dismissed mine sites in India, Thailand and China. The results of the study have been utilized as input for the construction of a knowledge-based decision support system intended to help identify appropriate rehabilitation technologies for dismissed areas previously used by extractive industry activities. The work was financially supported within the framework of the Project ASIA IT&C - CN/ASIA IT&C/006 (89870) Extract-It "Application of Information Technologies for the Sustainable Management of Extractive Industry Activities" of the European Union.

  16. Research in satellite-aided crop inventory and monitoring

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Dragg, J. L.; Bizzell, R. M.; Trichel, M. C. (Principal Investigator)

    1982-01-01

    Automated information extraction procedures for analysis of multitemporal LANDSAT data in non-U.S. crop inventory and monitoring are reviewed. Experiments to develop and evaluate crop area estimation technologies for spring small grains, summer crops, corn, and soybeans are discussed.

  17. Genome sequence of Stachybotrys chartarum Strain 51-11

    EPA Science Inventory

    Stachybotrys chartarum strain 51-11 genome was sequenced by shotgun sequencing utilizing Illumina Hiseq 2000 and PacBio long read technology. Since Stachybotrys chartarum has been implicated in health impacts within water-damaged buildings, any information extracted from the geno...

  18. Disruptive technologies for Massachusetts Bay Transportation Authority business strategy exploration.

    DOT National Transportation Integrated Search

    2013-04-01

    There are three tasks for this research: 1. Methodology to extract Road Usage Patterns from Phone Data: We combined the most complete record of daily mobility, based on large-scale mobile phone data, with detailed Geographic Information System (...

  19. Medical knowledge discovery and management.

    PubMed

    Prior, Fred

    2009-05-01

    Although the volume of medical information is growing rapidly, the ability to rapidly convert this data into "actionable insights" and new medical knowledge is lagging far behind. The first step in the knowledge discovery process is data management and integration, which logically can be accomplished through the application of data warehouse technologies. A key insight that arises from efforts in biosurveillance and the global scope of military medicine is that information must be integrated over both time (longitudinal health records) and space (spatial localization of health-related events). Once data are compiled and integrated it is essential to encode the semantics and relationships among data elements through the use of ontologies and semantic web technologies to convert data into knowledge. Medical images form a special class of health-related information. Traditionally knowledge has been extracted from images by human observation and encoded via controlled terminologies. This approach is rapidly being replaced by quantitative analyses that more reliably support knowledge extraction. The goals of knowledge discovery are the improvement of both the timeliness and accuracy of medical decision making and the identification of new procedures and therapies.

  20. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
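
    The median-filter-then-edge-detect idea on a single scan line can be sketched in a few lines of Python (NumPy). The fixed window size, threshold and toy intensity profile below are illustrative simplifications of the paper's dynamic window and EDEC constraints.

        import numpy as np

        def marking_spans(intensity, window=5, edge_thresh=8.0):
            """Find candidate road-marking spans on one scan line from its
            point-intensity profile: smooth with a median filter, then pair
            each rising intensity edge with the next falling edge."""
            pad = window // 2
            padded = np.pad(intensity, pad, mode="edge")
            smoothed = np.array([np.median(padded[i:i + window])
                                 for i in range(len(intensity))])
            grad = np.diff(smoothed)                      # first-difference edges
            rising = list(np.where(grad > edge_thresh)[0])
            falling = list(np.where(grad < -edge_thresh)[0])
            spans, fi = [], 0
            for r in rising:
                while fi < len(falling) and falling[fi] <= r:
                    fi += 1
                if fi < len(falling):
                    spans.append((r + 1, falling[fi]))    # indices of marking points
                    fi += 1
            return spans

        profile = np.array([20, 21, 19, 22, 60, 62, 61, 63, 21, 20, 19], float)
        print(marking_spans(profile))  # [(4, 7)]: the high-intensity paint span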

  1. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†

    PubMed Central

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279

  2. A new patent-based approach for technology mapping in the pharmaceutical domain.

    PubMed

    Russo, Davide; Montecchi, Tiziano; Carrara, Paolo

    2013-09-01

    The key factor in decision-making is the quality of the information collected and processed in the problem analysis. In most cases, patents represent a very important source of information. The main problem is how to extract such information from a huge corpus of documents with high recall and precision, and in a short time. This article demonstrates a patent search and classification method, called the Knowledge Organizing Module, which consists of creating, almost automatically, a pool of patents based on polysemy expansion and homonymy disambiguation. Once the pool is built, automatic patent technology landscaping is provided to establish the state of the art for a product and to explore competing alternative treatments and/or possible technological opportunities. An exemplary case study is provided; it deals with a patent analysis in the field of verruca treatments.

  3. Natural Antioxidants in Foods and Medicinal Plants: Extraction, Assessment and Resources

    PubMed Central

    Xu, Dong-Ping; Li, Ya; Meng, Xiao; Zhou, Tong; Zhou, Yue; Zheng, Jie; Zhang, Jiao-Jiao; Li, Hua-Bin

    2017-01-01

    Natural antioxidants are widely distributed in food and medicinal plants. These natural antioxidants, especially polyphenols and carotenoids, exhibit a wide range of biological effects, including anti-inflammatory, anti-aging, anti-atherosclerosis and anticancer activities. The effective extraction and proper assessment of antioxidants from food and medicinal plants are crucial for exploring potential antioxidant sources and promoting their application in functional foods, pharmaceuticals and food additives. The present paper provides comprehensive information on green extraction technologies for natural antioxidants, the assessment of antioxidant activity at chemical and cellular levels, and their main resources in food and medicinal plants. PMID:28067795

  4. Natural Antioxidants in Foods and Medicinal Plants: Extraction, Assessment and Resources.

    PubMed

    Xu, Dong-Ping; Li, Ya; Meng, Xiao; Zhou, Tong; Zhou, Yue; Zheng, Jie; Zhang, Jiao-Jiao; Li, Hua-Bin

    2017-01-05

    Natural antioxidants are widely distributed in food and medicinal plants. These natural antioxidants, especially polyphenols and carotenoids, exhibit a wide range of biological effects, including anti-inflammatory, anti-aging, anti-atherosclerosis and anticancer activities. The effective extraction and proper assessment of antioxidants from food and medicinal plants are crucial for exploring potential antioxidant sources and promoting their application in functional foods, pharmaceuticals and food additives. The present paper provides comprehensive information on green extraction technologies for natural antioxidants, the assessment of antioxidant activity at chemical and cellular levels, and their main resources in food and medicinal plants.

  5. Reliable Electronic Text: The Elusive Prerequisite for a Host of Human Language Technologies

    DTIC Science & Technology

    2010-09-30

    is not always the case—for example, ligatures in Latin fonts, and glyphs in Arabic fonts (King, 2008; Carrier, 2009). This complexity, and others...such effects can render electronic text useless for natural language processing (NLP). Typically, file converters do not expose the details of the...the many component NLP technologies typically used inside information extraction and text categorization applications, such as tokenization, part-of

  6. Assessment of the use of space technology in the monitoring of oil spills and ocean pollution: Executive summary

    NASA Technical Reports Server (NTRS)

    Alvarado, U. R. (Editor)

    1980-01-01

    The adequacy of current technology, in terms of the stage of maturity of sensing, support systems, and information extraction, was assessed relative to oil spills, waste pollution, and inputs to pollution trajectory models. Needs for advanced techniques are defined, and the characteristics of a future satellite system are determined based on the requirements of U.S. agencies involved in pollution monitoring.

  7. DEXTER: Disease-Expression Relation Extraction from Text.

    PubMed

    Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K

    2018-01-01

    Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, and for the diagnostics and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that not only have been captured from large-scale studies but have also been observed in thousands of small-scale studies. Expression information obtained from the literature through manual curation can extend expression databases. While many of the existing databases include information from the literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from the literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags significantly behind the expression information obtained from large-scale studies and can benefit from our text-mined results. We conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract differential expression information for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated in the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER

  8. Individual Learning Route as a Way of Highly Qualified Specialists Training for Extraction of Solid Commercial Minerals Enterprises

    NASA Astrophysics Data System (ADS)

    Oschepkova, Elena; Vasinskaya, Irina; Sockoluck, Irina

    2017-11-01

    In view of the changing educational paradigm (the adoption of a two-tier system of higher education - undergraduate and graduate programs), a need arises to use modern learning and information and communications technologies, putting into practice learner-centered approaches in training highly qualified specialists for enterprises extracting and processing solid commercial minerals. In an unstable market demand situation and a changeable institutional environment, on the one hand, and with the necessity of balancing work, supply conditions and product quality as mining-and-geological parameters change, on the other, mining enterprises have to introduce and develop an integrated management process for product, information and logistic flows under a unified management system. One of the main limitations holding back this development in Russian mining enterprises is staff incompetence at all levels of logistics management. Under present-day conditions, enterprises extracting and processing solid commercial minerals need highly qualified specialists who can conduct self-directed research and can develop new, and improve existing, technologies for arranging, planning and managing the technical operation and commercial exploitation of transport and transportation and processing facilities based on logistics. The learner-centered approach and the individualization of the learning process necessitate the design of an individual learning route (ILR), which can help students realize their professional potential according to the requirements for specialists at enterprises extracting and processing solid commercial minerals.

  9. Text Information Extraction System (TIES) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    TIES is a service-based software system for acquiring, deidentifying, and processing clinical text reports using natural language processing, and for querying, sharing, and using this data to foster tissue- and image-based research within and between institutions.

  10. THE APPLICATION OF PHOTOGRAPHIC INTERPRETATION AND RELATED TECHNOLOGIES IN MODERN ENVIRONMENTAL PROTECTION

    EPA Science Inventory

    Imagery interpretation is a time-tested technique for extracting landscape-level information from aerial photographs and other types of remotely sensed data. The U.S. Environmental Protection Agency's Environmental Photographic Interpretation Center (EPIC) has a 25+ year history...

  11. Multimedia Information Retrieval Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Bohn, Shawn J.; Payne, Deborah A.

    This survey paper highlights some of the recent, influential work in multimedia information retrieval (MIR). MIR is a branch area of multimedia (MM). The young and fast-growing area has received strong industrial and academic support in the United States and around the world (see Section 7 for a list of major conferences and journals of the community). The term "information retrieval" may be misleading to those with different computer science or information technology backgrounds. As shown in our discussion later, it indeed includes topics from user interaction, data analytics, machine learning, feature extraction, information visualization, and more.

  12. Science information systems: Visualization

    NASA Technical Reports Server (NTRS)

    Wall, Ray J.

    1991-01-01

    Future programs in earth science, planetary science, and astrophysics will involve complex instruments that produce data at unprecedented rates and volumes. Current methods for data display, exploration, and discovery are inadequate. Visualization technology offers a means for the user to comprehend, explore, and examine complex data sets. The goal of this program is to increase the effectiveness and efficiency of scientists in extracting scientific information from large volumes of instrument data.

  13. [An object-based information extraction technology for dominant tree species group types].

    PubMed

    Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang

    2015-06-01

    Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new method for realizing accurate type information extraction. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, the object-oriented method was adopted to identify farmland, shrub-herbaceous plant, young afforested land, Pinus massoniana, Cunninghamia lanceolata and broad-leaved tree types. Three types of classification factors, including spectral features, texture, and different vegetation indices, were used to establish a class hierarchy. At the different levels, membership functions and decision tree classification rules were adopted. The results showed that the object-oriented method using texture, spectrum and the vegetation indices achieved a classification accuracy of 91.3%, an increase of 5.7% compared with using only texture and spectrum.
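
    A toy Python sketch of the membership-function/decision-rule idea applied to per-segment features. The class names follow the abstract, but the features and thresholds are invented for illustration and do not reproduce the paper's class hierarchy.

        def classify_segment(seg):
            """Classify one image segment from its mean features.
            seg: dict with 'ndvi', 'brightness', 'glcm_entropy' (illustrative)."""
            if seg["ndvi"] < 0.2:
                return "farmland" if seg["brightness"] > 0.5 else "young afforested land"
            if seg["ndvi"] < 0.5:
                return "shrub-herbaceous plant"
            # Among closed-canopy forest, texture separates the species groups.
            if seg["glcm_entropy"] > 2.0:
                return "broad-leaved trees"
            return "Pinus massoniana" if seg["brightness"] > 0.3 else "Cunninghamia lanceolata"

        print(classify_segment({"ndvi": 0.62, "brightness": 0.25, "glcm_entropy": 1.4}))
        # -> 'Cunninghamia lanceolata' under these made-up thresholds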

  14. Recognition techniques for extracting information from semistructured documents

    NASA Astrophysics Data System (ADS)

    Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna

    2000-12-01

    Archives of optical documents are more and more massively employed, the demand driven also by the new norms sanctioning the legal value of digital documents, provided they are stored on supports that are physically unalterable. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those for magnetic memories. The remaining bottleneck in these systems is the indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine supported to a large degree with evident advantages both in the organization of the work, and in extracting information, providing data that is much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. This information, in our prototype application, is distributed among the database fields of sender, addressee, subject, date, and body of the document.

  15. Classifying patents based on their semantic content.

    PubMed

    Bergeaud, Antonin; Potiron, Yoann; Raimbault, Juste

    2017-01-01

    In this paper, we extend some usual classification techniques using a large-scale data-mining and network approach. This technology, designed in particular to be suitable for big data, is used to construct an open consolidated database from raw data on 4 million patents taken from the US patent office from 1976 onward. To build the patent network, we look not only at each patent title, but also examine the full abstract and extract the relevant keywords accordingly. We refer to this classification as the semantic approach, in contrast with the more common technological approach, which takes the topology induced by the US Patent Office technological classes. Moreover, we document that the two approaches yield highly different topological measures, with strong statistical evidence that they follow different models. This suggests that our method is a useful tool for extracting endogenous information.

  16. Classifying patents based on their semantic content

    PubMed Central

    2017-01-01

    In this paper, we extend some usual classification techniques using a large-scale data-mining and network approach. This technology, designed in particular to be suitable for big data, is used to construct an open consolidated database from raw data on 4 million patents taken from the US patent office from 1976 onward. To build the patent network, we look not only at each patent title, but also examine the full abstract and extract the relevant keywords accordingly. We refer to this classification as the semantic approach, in contrast with the more common technological approach, which takes the topology induced by the US Patent Office technological classes. Moreover, we document that the two approaches yield highly different topological measures, with strong statistical evidence that they follow different models. This suggests that our method is a useful tool for extracting endogenous information. PMID:28445550

  17. [Study on new extraction technology of astragaloside IV].

    PubMed

    Sun, Haiyan; Guan, Su; Huang, Min

    2005-08-01

    To explore the feasibility and the optimal technology for extracting astragaloside IV by SFE-CO2. Based on the content of astragaloside IV, the optimum extraction parameters, such as extraction temperature, pressure, extraction time, fluid velocity and co-solvent, were investigated, and the result was compared with that of water extraction. The optimum technical parameters were as follows: extraction pressure 40 MPa, temperature 45 degrees C, extraction time 2 h, co-solvent 95% ethanol at a dosage of 4 mL/g, and a CO2 fluid ratio of 10 kg/(kg x h). Extraction of astragaloside IV by SFE-CO2 is reliable and stable.

  18. Design and process aspects of laboratory scale SCF particle formation systems.

    PubMed

    Vemavarapu, Chandra; Mollan, Matthew J; Lodaya, Mayur; Needham, Thomas E

    2005-03-23

    Consistent production of solid drug materials with desired particle and crystallographic morphologies under cGMP conditions is a frequent challenge for pharmaceutical researchers. Supercritical fluid (SCF) technology has gained significant attention in pharmaceutical research by not only showing promise in this regard but also accommodating the principles of green chemistry. Given that this technology attained commercialization in coffee decaffeination and in the extraction of hops and other essential oils, a majority of off-the-shelf SCF instrumentation is designed for extraction purposes. Only a select few vendors appear to be in the early stages of manufacturing equipment designed for particle formation. The scarcity of information on the design and process engineering of laboratory-scale equipment is a significant impediment to technological progress. The purpose of this article is therefore to provide the information and resources necessary for startup research involving particle formation using supercritical fluids. The various stages of particle formation by supercritical fluid processing can be broadly classified into delivery, reaction, pre-expansion, expansion and collection. The importance of each of these processes in tailoring particle morphology is discussed in this article, along with various alternatives for performing these operations.

  19. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding accidents. Fault feature extraction (FFE) from bearing accelerometer sensor signals is therefore essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time-, frequency- and time-frequency-domain features of bearing accelerometer sensor signals. SR is a regression framework for efficient regularized subspace learning and feature extraction; it uses the least squares method to obtain the best projection direction rather than performing an eigen-decomposition of a dense feature matrix, which also gives it an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally on vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and the proposed feature extraction scheme is demonstrated to have an advantage over similar approaches. PMID:23202017
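
    A minimal Python sketch of the two-step SR idea (graph embedding, then regularized least squares) follows, assuming scikit-learn and SciPy are available; the neighborhood size and ridge penalty are illustrative choices, not the authors' settings.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from sklearn.linear_model import Ridge
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import eigsh

    def spectral_regression(X, n_components=3, n_neighbors=5, alpha=0.01):
        """Learn a projection matrix: graph embedding + regularized regression."""
        # Step 1: k-NN affinity graph over samples; smoothest non-trivial
        # eigenvectors of its Laplacian give the target embedding.
        W = kneighbors_graph(X, n_neighbors, mode='connectivity', include_self=False)
        W = 0.5 * (W + W.T)                                  # symmetrize
        L = laplacian(W, normed=True)
        vals, vecs = eigsh(L, k=n_components + 1, which='SM')
        Y = vecs[:, 1:]                                      # drop the trivial eigenvector
        # Step 2: ridge regression X -> Y replaces a dense eigenproblem.
        reg = Ridge(alpha=alpha).fit(X, Y)
        return reg.coef_.T                                   # shape (n_features, n_components)

    # Usage: stack bearing feature vectors into X, then project: Z = X @ P
    X = np.random.default_rng(0).random((100, 20))
    P = spectral_regression(X)
    print((X @ P).shape)    # (100, 3)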

  20. Application research on land use remote sensing dynamic monitoring: A case study of Anning district, Lanzhou

    NASA Astrophysics Data System (ADS)

    Zhu, Yunqiang; Zhu, Huazhong; Lu, Heli; Ni, Jianguang; Zhu, Shaoxia

    2005-10-01

    Remote sensing dynamic monitoring of land use can detect changes in land use and update the current land use map, which is important for the rational utilization and scientific management of land resources. This paper discusses the technological procedure of remote sensing dynamic monitoring of land use, including the processing of remote sensing images, the extraction of annual land use change information, field survey, indoor post-processing and accuracy assessment. In particular, we emphasize comparative research on the choice of remote sensing rectification models, image fusion algorithms and accuracy assessment methods. Taking the Anning district of Lanzhou as an example, we extract the land use change information of the district during 2002-2003, assess the monitoring accuracy and analyze the reasons for land use change.

  1. Detecting the red tide based on remote sensing data in optically complex East China Sea

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohui; Pan, Delu; Mao, Zhihua; Tao, Bangyi; Liu, Qiong

    2012-09-01

    Red tide not only destroys marine fishery production, deteriorates the marine environment and affects the coastal tourist industry, but can also poison or even kill people who eat toxic seafood contaminated by red tide organisms. Remote sensing technology offers large-scale, synchronized and rapid monitoring, so it is one of the most important and effective means of red tide monitoring. This paper selects the high-frequency red tide areas of the East China Sea as the study area and MODIS/Aqua L2 data as the data source, and analyzes and compares the spectral differences between red tide and non-red tide water bodies across many historical events. Based on these spectral differences, the paper develops the algorithm Rrs555/Rrs488 > 1.5 to extract red tide information. Applying the algorithm to the red tide event that occurred in the East China Sea on May 28, 2009, we found that the method can effectively determine the location of the red tide occurrence; there is a good correspondence between the extracted red tide area and the chlorophyll a concentration retrieved by remote sensing, showing that the algorithm can effectively locate and extract red tide information.
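
    A minimal Python sketch of this band-ratio test follows; the array names are illustrative, and the 1.5 threshold is the one given above.

    import numpy as np

    def red_tide_mask(rrs_555, rrs_488, threshold=1.5):
        """Flag pixels whose Rrs(555)/Rrs(488) ratio exceeds the threshold."""
        ratio = np.divide(rrs_555, rrs_488,
                          out=np.full_like(rrs_555, np.nan, dtype=float),
                          where=rrs_488 > 0)       # skip invalid/zero pixels
        return ratio > threshold                    # True = suspected red tide

    # Tiny synthetic scene of remote-sensing reflectances:
    rrs_555 = np.array([[0.004, 0.009], [0.012, 0.003]])
    rrs_488 = np.array([[0.005, 0.005], [0.005, 0.005]])
    print(red_tide_mask(rrs_555, rrs_488))   # [[False True] [True False]]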

  2. Integrated Computational System for Aerodynamic Steering and Visualization

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    In February of 1994, an effort by the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center, with McDonnell Douglas Aerospace Company and Stanford University, was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, intelligent systems, and automated flow feature recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; this is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work within the last six months that directly extends the grant research.

  3. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studying the changes of urban expansion. This paper studies the technology of automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes the automatic extraction of the main built-up area of the city, which greatly saves manpower. First, construction land is extracted with the object-oriented method; the main technical steps include: (1) multi-resolution segmentation; (2) feature construction and selection; and (3) information extraction of construction land based on a rule set. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the ratio of residential index (RRI) and the mean of the blue band (Mean B); through the combination of these characteristic parameters, construction land information can be extracted. Then, based on the adaptability, distance and area of the object domain, the urban built-up area can be quickly and accurately delineated from the construction land information, without depending on other data or expert knowledge, to achieve automatic extraction of the urban built-up area. In this paper, Beijing serves as the experimental area for the method; the results show that the urban built-up area is extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes in the main built-up area of a city.
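
    A minimal Python sketch of a rule-set decision of this kind follows; the thresholds and the RRI proxy used here are illustrative assumptions, not the paper's values.

    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / (nir + red + 1e-9)

    def is_construction_land(seg):
        """Apply combined spectral rules to one segment's mean features."""
        nd = ndvi(seg['mean_nir'], seg['mean_r'])
        rri = seg['mean_b'] / (seg['mean_nir'] + 1e-9)   # hypothetical residential-index proxy
        return (nd < 0.2) and (rri > 0.9) and (seg['mean_r'] > 60)

    segments = [
        {'mean_r': 90, 'mean_b': 85, 'mean_nir': 70},    # bright, low NDVI -> built-up
        {'mean_r': 40, 'mean_b': 30, 'mean_nir': 120},   # vegetated -> rejected
    ]
    print([is_construction_land(s) for s in segments])   # [True, False]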

  4. Using GIS in ecological management: green assessment of the impacts of petroleum activities in the state of Texas.

    PubMed

    Merem, Edmund; Robinson, Bennetta; Wesley, Joan M; Yerramilli, Sudha; Twumasi, Yaw A

    2010-05-01

    Geo-information technologies are valuable tools for ecological assessment in stressed environments. Visualizing natural features prone to disasters from the oil sector spatially not only helps in focusing the scope of environmental management with records of changes in affected areas, but it also furnishes information on the pace at which resource extraction affects nature. Notwithstanding the recourse to ecosystem protection, geo-spatial analysis of the impacts remains sketchy. This paper uses GIS and descriptive statistics to assess the ecological impacts of petroleum extraction activities in Texas. While the focus ranges from issues to mitigation strategies, the results point to growth in indicators of ecosystem decline.

  5. Using GIS in Ecological Management: Green Assessment of the Impacts of Petroleum Activities in the State of Texas

    PubMed Central

    Merem, Edmund; Robinson, Bennetta; Wesley, Joan M.; Yerramilli, Sudha; Twumasi, Yaw A.

    2010-01-01

    Geo-information technologies are valuable tools for ecological assessment in stressed environments. Visualizing natural features prone to disasters from the oil sector spatially not only helps in focusing the scope of environmental management with records of changes in affected areas, but it also furnishes information on the pace at which resource extraction affects nature. Notwithstanding the recourse to ecosystem protection, geo-spatial analysis of the impacts remains sketchy. This paper uses GIS and descriptive statistics to assess the ecological impacts of petroleum extraction activities in Texas. While the focus ranges from issues to mitigation strategies, the results point to growth in indicators of ecosystem decline. PMID:20623014

  6. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.
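
    As a rough illustration of the pipeline described above (spectral features from acoustic records feeding a feed-forward network), a minimal Python sketch follows; scikit-learn's MLPClassifier stands in for the original classifier, and the band-energy features and synthetic data are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def spectral_features(signal, n_bands=16):
        """Log band energies from the magnitude spectrum of one record."""
        mag = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        bands = np.array_split(mag, n_bands)              # coarse frequency bands
        return np.log1p(np.array([b.mean() for b in bands]))

    # Placeholder training set: in practice X comes from labeled takeoff/landing
    # recordings and y encodes the aircraft type (jet, helicopter, ...).
    rng = np.random.default_rng(0)
    X = np.vstack([spectral_features(rng.normal(size=4096)) for _ in range(60)])
    y = rng.integers(0, 4, size=60)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
    print('training accuracy:', clf.score(X, y))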

  7. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products, such as 3D televisions and 3D glasses for gaming systems. This technology builds on the idea that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  8. Informing child welfare policy and practice: using knowledge discovery and data mining technology via a dynamic Web site.

    PubMed

    Duncan, Dean F; Kum, Hye-Chung; Weigensberg, Elizabeth Caplick; Flair, Kimberly A; Stewart, C Joy

    2008-11-01

    Proper management and implementation of an effective child welfare agency requires the constant use of information about the experiences and outcomes of children involved in the system, emphasizing the need for comprehensive, timely, and accurate data. In the past 20 years, there have been many advances in technology that can maximize the potential of administrative data to promote better evaluation and management in the field of child welfare. Specifically, this article discusses the use of knowledge discovery and data mining (KDD), which makes it possible to create longitudinal data files from administrative data sources, extract valuable knowledge, and make the information available via a user-friendly public Web site. This article demonstrates a successful project in North Carolina where knowledge discovery and data mining technology was used to develop a comprehensive set of child welfare outcomes available through a public Web site to facilitate information sharing of child welfare data to improve policy and practice.

  9. Investigation related to multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Erickson, J. D.

    1974-01-01

    A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.

  10. [HPLC-ESI-MS(n) analysis of the water soluble extracts of Fructus Choerospondiatis].

    PubMed

    Shi, Run-ju; Dai, Yun; Fang, Min-feng; Zhao, Xin; Zheng, Jian-bin; Zheng, Xiao-hui

    2007-03-01

    To establish an HPLC-ESI-MS(n) method for analyzing the chemical ingredients in the water-soluble extracts of Fructus Choerospondiatis. Water-soluble extracts of Fructus Choerospondiatis were obtained by heating reflux. The multiple reaction monitoring (MRM) mode of HPLC-ESI-MS(n) was used to determine the content of gallic acid, and MS(n) was used to obtain information on characteristic multistage fragment ions so as to identify the chemical structures of the peaks in the total ion current spectrum. Eleven compounds were identified, one of them a previously unknown ingredient. The method, which has high recovery and specificity, can offer experimental evidence for further research on the chemical ingredients extracted from Fructus Choerospondiatis.

  11. [Optimization of Extraction Technology for Sericin from Silkworm Cocoon with Orthogonal Design].

    PubMed

    Zhao, Chun-ying; Wang, Yan; Li, Yun-feng; Chen, Zhi-hong

    2015-05-01

    To optimize the extraction technology for sericin from silkworm cocoon. Using the sericin extraction rate and sericin content as indices, single-factor and orthogonal experiments were used to determine the best conditions. The optimal extraction technology for sericin from silkworm cocoon was as follows: a solid-to-liquid ratio of 1:30, and two reflux extractions of 3 h each at a water temperature of 100 degrees C. The extraction rate of sericin from silkworm cocoon was 27.1%. The optimal extraction technology is stable and feasible, and can provide a reference for further pharmacological study of cocoon sericin.
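
    For readers unfamiliar with range analysis of such orthogonal designs, a minimal Python sketch follows; the L9 design matrix is standard, but the factor levels and yields here are invented for illustration and are not the paper's data.

    import numpy as np

    # L9(3^3) design: columns are levels of solid:liquid ratio, time, temperature.
    design = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                       [1, 0, 1], [1, 1, 2], [1, 2, 0],
                       [2, 0, 2], [2, 1, 0], [2, 2, 1]])
    yield_pct = np.array([21.3, 24.8, 23.1, 25.6, 27.1, 22.9, 24.2, 26.0, 23.7])

    for j, factor in enumerate(['solid:liquid', 'time', 'temperature']):
        means = [yield_pct[design[:, j] == lv].mean() for lv in range(3)]
        print(factor, [round(m, 2) for m in means],
              'range =', round(max(means) - min(means), 2))
    # The factor with the largest range dominates; its best level is the one
    # with the highest mean yield.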

  12. [Advances in studies on multi-stage countercurrent extraction technology in traditional Chinese medicine].

    PubMed

    Xie, Zhi-Peng; Liu, Xue-Song; Chen, Yong; Cai, Ming; Qu, Hai-Bin; Cheng, Yi-Yu

    2007-05-01

    Multi-stage countercurrent extraction technology, which integrates solvent extraction and repercolation with dynamic, countercurrent extraction, is a novel extraction technology for traditional Chinese medicine. This solvent-saving, energy-saving and highly extraction-efficient technology drives active compounds to diffuse, to the greatest extent possible, from the herbal material into the solvent stage by stage, by creating concentration differences between the herbal material and the solvent. This paper reviews the basic principle, the influencing factors, and the research progress and trends in the equipment and applications of multi-stage countercurrent extraction.

  13. Optimization of an Innovative Biofiltration System as a VOC Control Technology for Aircraft Painting Facilities

    DTIC Science & Technology

    2004-04-20

    Biofilter applications in Europe (Leson, 1991) include chemical operations, coffee roasting, composting facilities, chemical storage, cocoa roasting, landfill gas extraction, and film coating.

  14. TERSSE: Definition of the Total Earth Resources System for the Shuttle Era. Volume 7: User Models: A System Assessment

    NASA Technical Reports Server (NTRS)

    1974-01-01

    User models, defined as any explicit process or procedure used to transform information extracted from remotely sensed data into a form useful as a resource management input, are discussed. The role of user models as information, technological, and operations interfaces between the TERSSE and the resource managers is emphasized. It is recommended that guidelines and management strategies be developed for a systems approach to user model development.

  15. New solid-state chemistry technologies to bring better drugs to market: knowledge-based decision making.

    PubMed

    Park, Aeri; Chyall, Leonard J; Dunlap, Jeanette; Schertz, Christine; Jonaitis, David; Stahly, Barbara C; Bates, Simon; Shipplett, Rex; Childs, Scott

    2007-01-01

    Modern drug development demands the constant deployment of more effective technologies to mitigate the high cost of bringing new drugs to market. In addition to cost savings, new technologies can improve all aspects of pharmaceutical development. New technologies developed at SSCI, Inc. for solid form development of active pharmaceutical ingredients (APIs) include PatternMatch software and capillary-based crystallisation techniques that not only allow fast and effective solid form screening, but also extract maximum property information from the routine screening data that is generally available. These new technologies support knowledge-based decision making during solid form development of APIs and result in more developable API solid forms.

  16. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It has also recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research on audio feature extraction analysis. The probability distribution function (PDF) is a statistical method usually used as one of the steps within complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
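
    A minimal Python sketch of the core idea, using the amplitude PDF of each frame as the feature itself, follows; the frame length, bin edges and normalized-amplitude assumption are illustrative choices, not the authors' settings.

    import numpy as np

    def pdf_features(signal, frame_len=1024, n_bins=32):
        """Histogram-estimated amplitude PDF for each frame of a voice signal."""
        n_frames = len(signal) // frame_len
        frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
        edges = np.linspace(-1.0, 1.0, n_bins + 1)   # assumes normalized amplitude
        feats = []
        for fr in frames:
            hist, _ = np.histogram(fr, bins=edges, density=True)
            feats.append(hist)
        return np.array(feats)   # (n_frames, n_bins): one PDF estimate per frame

    # Frames from the same speaker should show comparable PDF shapes, which is
    # what the paper's plotted comparisons rely on.
    sig = np.clip(np.random.default_rng(1).normal(0, 0.3, 16000), -1, 1)
    print(pdf_features(sig).shape)   # (15, 32)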

  17. Protocol of a controlled before-after evaluation of a national health information technology-based program to improve healthcare coordination and access to information.

    PubMed

    Saillour-Glénisson, Florence; Duhamel, Sylvie; Fourneyron, Emmanuelle; Huiart, Laetitia; Joseph, Jean Philippe; Langlois, Emmanuel; Pincemail, Stephane; Ramel, Viviane; Renaud, Thomas; Roberts, Tamara; Sibé, Matthieu; Thiessard, Frantz; Wittwer, Jerome; Salmi, Louis Rachid

    2017-04-21

    Improvement of the coordination of all health and social care actors in patient pathways is an important issue in many countries. Health information (HI) technology has been considered a potentially effective answer to this issue. The French Health Ministry first funded the development of five TSN ("Territoire de Soins Numérique"/Digital health territory) projects, aiming at improving healthcare coordination and access to information for healthcare providers, patients and the population, and at improving healthcare professionals' work organization. The French Health Ministry then launched a call for grants to fund one research project evaluating the TSN projects' implementation and impact and developing a model for HI technology evaluation. EvaTSN is mainly based on a controlled before-after study design. Data collection covers three periods: before TSN program implementation, during early TSN program implementation and at late TSN program implementation, in the five TSN projects' territories and in five comparison territories. Three populations will be considered: "TSN-targeted people" (healthcare system users and people having characteristics targeted by the TSN projects), "TSN patient users" (people included in TSN experimentations or using particular services) and "TSN professional users" (healthcare professionals involved in TSN projects). Several samples will be drawn from each population depending on the objective, axis and stage of the study. Four types of data sources are considered: 1) extractions from the French National Health Insurance Database (SNIIRAM) and the French Autonomy Personalized Allowance database; 2) ad hoc surveys collecting information on knowledge of TSN projects, TSN program use, ease of use, satisfaction and understanding, TSN pathway experience and appropriateness of hospital admissions; 3) qualitative analyses using semi-directive interviews, focus groups and document analyses; and 4) extractions of TSN implementation indicators from the TSN program database. EvaTSN is a challenging French national project for the production of evidence-based information on HI technologies' impact and on the context and conditions of their effectiveness and efficiency. We will be able to support healthcare management in implementing HI technologies, and to produce an evaluation toolkit for HI technology evaluation. ClinicalTrials.gov ID: NCT02837406, registered 08/18/2016.

  18. Review of the harvesting and extraction program within the National Alliance for Advanced Biofuels and Bioproducts

    DOE PAGES

    Marrone, Babetta L.; Lacey, Ronald E.; Anderson, Daniel B.; ...

    2017-08-07

    Energy-efficient and scalable harvesting and lipid extraction processes must be developed in order for the algal biofuels and bioproducts industry to thrive. The major challenge for harvesting is the handling of large volumes of cultivation water to concentrate low amounts of biomass. For lipid extraction, the major energy and cost drivers are associated with disrupting the algae cell wall and drying the biomass before solvent extraction of the lipids. Here we review the research and development conducted by the Harvesting and Extraction Team during the 3-year National Alliance for Advanced Biofuels and Bioproducts (NAABB) algal consortium project. The harvesting and extraction team investigated five harvesting and three wet extraction technologies at lab bench scale for effectiveness, and conducted a technoeconomic study to evaluate their costs and energy efficiency compared to available baseline technologies. Based on this study, three harvesting technologies were selected for further study at larger scale. We evaluated the selected harvesting technologies, electrocoagulation, membrane filtration, and ultrasonic harvesting, in a field study at a minimum scale of 100 L/h. None of the extraction technologies were determined to be ready for scale-up; therefore, an emerging extraction technology (wet solvent extraction) was selected from industry to provide scale-up data and capabilities to produce lipid and lipid-extracted materials for the NAABB program. One specialized extraction/adsorption technology was developed that showed promise for recovering high-value co-products from lipid extracts. Overall, the NAABB Harvesting and Extraction Team improved the readiness level of several innovative, energy-efficient technologies to integrate with algae production processes and captured valuable lessons learned about scale-up challenges.

  19. Review of the harvesting and extraction program within the National Alliance for Advanced Biofuels and Bioproducts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrone, Babetta L.; Lacey, Ronald E.; Anderson, Daniel B.

    Energy-efficient and scalable harvesting and lipid extraction processes must be developed in order for the algal biofuels and bioproducts industry to thrive. The major challenge for harvesting is the handling of large volumes of cultivation water to concentrate low amounts of biomass. For lipid extraction, the major energy and cost drivers are associated with disrupting the algae cell wall and drying the biomass before solvent extraction of the lipids. Here we review the research and development conducted by the Harvesting and Extraction Team during the 3-year National Alliance for Advanced Biofuels and Bioproducts (NAABB) algal consortium project. The harvesting and extraction team investigated five harvesting and three wet extraction technologies at lab bench scale for effectiveness, and conducted a technoeconomic study to evaluate their costs and energy efficiency compared to available baseline technologies. Based on this study, three harvesting technologies were selected for further study at larger scale. We evaluated the selected harvesting technologies, electrocoagulation, membrane filtration, and ultrasonic harvesting, in a field study at a minimum scale of 100 L/h. None of the extraction technologies were determined to be ready for scale-up; therefore, an emerging extraction technology (wet solvent extraction) was selected from industry to provide scale-up data and capabilities to produce lipid and lipid-extracted materials for the NAABB program. One specialized extraction/adsorption technology was developed that showed promise for recovering high-value co-products from lipid extracts. Overall, the NAABB Harvesting and Extraction Team improved the readiness level of several innovative, energy-efficient technologies to integrate with algae production processes and captured valuable lessons learned about scale-up challenges.

  20. Infrared Spectroscopic Imaging: The Next Generation

    PubMed Central

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas— data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  1. State of the Art, Trends and Future of Bluetooth Low Energy, Near Field Communication and Visible Light Communication in the Development of Smart Cities.

    PubMed

    Cerruela García, Gonzalo; Luque Ruiz, Irene; Gómez-Nieto, Miguel Ángel

    2016-11-23

    The current social impact of new technologies has produced major changes in all areas of society, creating the concept of a smart city supported by an electronic infrastructure, telecommunications and information technology. This paper presents a review of Bluetooth Low Energy (BLE), Near Field Communication (NFC) and Visible Light Communication (VLC) and their use and influence within different areas of the development of the smart city. The document also presents a review of Big Data Solutions for the management of information and the extraction of knowledge in an environment where things are connected by an "Internet of Things" (IoT) network. Lastly, we present how these technologies can be combined together to benefit the development of the smart city.

  2. [Study on ultrafine vibration extraction technology of Rhizoma Chuanxiong].

    PubMed

    Dai, Long

    2009-04-01

    To explore the best ultrafine vibration extraction technology (UVET) for Rhizoma Chuanxiong. Using the contents of ligustrazine hydrochloride and ferulic acid as determination indexes, an orthogonal test was used to choose the number of extractions, the extraction time and the solvent amount, and the result was compared with that of conventional extraction technology. The best condition for Rhizoma Chuanxiong was extraction with 4 volumes of 90% ethanol at 25 degrees C, twice, for 15 minutes each time. Compared with conventional extraction technology, the extraction time of UVET was 1/6 and the solvent amount 4/7, while the extraction rates of the marker components were 1.19 and 1.09 times those of the conventional method, respectively. UVET can improve the extraction rate of effective constituents, reduce the time and solvent amount, and be used in industrialization.

  3. PCA Tomography: how to extract information from data cubes

    NASA Astrophysics Data System (ADS)

    Steiner, J. E.; Menezes, R. B.; Ricci, T. V.; Oliveira, A. S.

    2009-05-01

    Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which both techniques are combined simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for analyzing data cubes (data from single-field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data onto these coordinates produce images we call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this information is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). E-mail: steiner@astro.iag.usp.br
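
    A minimal Python sketch of the PCA tomography decomposition follows, assuming a cube ordered as (ny, nx, nl); eigenvectors of the wavelength covariance are the eigenspectra, and their projections reshaped to the spatial grid are the tomograms.

    import numpy as np

    def pca_tomography(cube, n_components=3):
        ny, nx, nl = cube.shape
        X = cube.reshape(ny * nx, nl)             # one spectrum per spatial pixel
        X = X - X.mean(axis=0)                    # remove the mean spectrum
        cov = X.T @ X / (X.shape[0] - 1)          # (nl, nl) wavelength covariance
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1][:n_components]     # decreasing variance
        eigenspectra = vecs[:, order]                     # (nl, k) eigenvectors
        tomograms = (X @ eigenspectra).reshape(ny, nx, n_components)
        return eigenspectra, tomograms

    cube = np.random.default_rng(2).random((8, 8, 50))    # toy data cube
    es, toms = pca_tomography(cube)
    print(es.shape, toms.shape)   # (50, 3) (8, 8, 3)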

  4. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work, high financial requirements, and influences of weather conditions and topographic cover, which can be overcome by means of integrated airborne LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud datasets. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.

  5. [Radiological dose and metadata management].

    PubMed

    Walz, M; Kolodziej, M; Madsack, B

    2016-12-01

    This article describes the features of management systems currently available in Germany for extraction, registration and evaluation of metadata from radiological examinations, particularly in the digital imaging and communications in medicine (DICOM) environment. In addition, the probable relevant developments in this area concerning radiation protection legislation, terminology, standardization and information technology are presented.

  6. Ghana Open Data Initiative | Ghana Open Data Initiative

    Science.gov Websites

    The Ghana Open Data Initiative, operated by the National Information Technology Agency (NITA) for the government of Ghana (2012-2016), publishes open government datasets across sectors including finance, health, agriculture, energy, education, environment, local government, city data, extractives, statistics, business and elections.

  7. Development and application of traffic flow information collecting and analysis system based on multi-type video

    NASA Astrophysics Data System (ADS)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, intelligent transportation systems (ITS) have become the new direction of transportation development. Traffic data, as a fundamental part of intelligent transportation systems, has an increasingly crucial status. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages, being comprehensive and storable for a long time, but there are still problems, such as low precision and high cost, in the process of collecting information. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of acquiring video data, aerial photography, fixed cameras and handheld cameras, we develop intelligent analysis software that extracts macroscopic and microscopic traffic flow information from video, which can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, the system uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu Province, and the application shows that it extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
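
    A minimal OpenCV sketch of the frame-difference step described for intersections follows; the threshold and minimum blob area are illustrative assumptions, not the system's parameters.

    import cv2

    def detect_moving_vehicles(prev_gray, cur_gray, thresh=25, min_area=150):
        """Difference two consecutive grayscale frames; return bounding boxes."""
        diff = cv2.absdiff(prev_gray, cur_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)      # close small gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]

    # Usage: read consecutive frames from cv2.VideoCapture, convert each with
    # cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), and count boxes per time window
    # to estimate flow. On freeway sections, optical-flow tracking (e.g.,
    # cv2.calcOpticalFlowPyrLK) would replace this step.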

  8. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    NASA Astrophysics Data System (ADS)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Text mining can help us to mine information and extract relevant knowledge from this plethora of biomedical text, and the ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, called Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to handle a wide variety of document formats (e.g., PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records; our investigation led us to extend its automatic knowledge extraction process to the biomedical research domain by improving the ontology-guided information extraction process. We describe our experience and the implementation of our system, share lessons learned from our development, and discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org

  9. Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature.

    PubMed

    Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang

    2015-06-06

    Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated efforts to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications, and there is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction, with performance evaluation based on gold-standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation-disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators, and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation-disease associations when benchmarked against curated database records. The analysis also demonstrates that incorporating discourse-level analysis significantly improved the performance of extracting protein-mutation-disease associations. Future work includes the extension of MutD to full-text articles.

  10. Fabrication and vibration characterization of curcumin extracted from turmeric (Curcuma longa) rhizomes of the northern Vietnam.

    PubMed

    Van Nong, Hoang; Hung, Le Xuan; Thang, Pham Nam; Chinh, Vu Duc; Vu, Le Van; Dung, Phan Tien; Van Trung, Tran; Nga, Pham Thu

    2016-01-01

    In this report, we present research results on using a conventional method and microwave technology to extract curcuminoids from turmeric roots originating in different regions of northern Vietnam. The method is simple yet economical and non-toxic, and still achieves high extraction performance for curcuminoids from turmeric roots. Detailed results from Raman vibration spectra, combined with X-ray powder diffraction and high-performance liquid chromatography/mass spectrometry, allowed the evaluation of each batch of crystalline curcumin powder obtained under the applied fabrication conditions. The absorption and fluorescence spectroscopies of the samples are also presented, together with new experimental results on technology applied to mass-produce curcumin from turmeric rhizomes, and comparative results between fabricated samples and commercial curcumin products that show the complexity of co-existing crystalline phases in curcumin powder samples. We noticed that it is possible to use the vibration line at ~959 cm(-1), characteristic of the ν C=O vibration, and the ~1625 cm(-1) line, characteristic of the ν C=O and ν C=C vibrations in curcumin molecules, for preliminary quality assessment of naturally derived crystalline curcumin powder samples. These new optical spectra data will contribute detailed information on natural curcumin in Vietnam, serving research purposes and applications of natural curcumin powder and nanocurcumin in Vietnam, as well as providing initial materials for the pharmaceutical, cosmetics or functional food industries.

  11. Variability extraction and modeling for product variants.

    PubMed

    Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander

    2017-01-01

    Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.

  12. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
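
    A minimal sketch of one way to implement the registration step follows, assuming OpenCV; ORB features with a RANSAC homography stand in for the texture-based registration the paper describes, and both inputs are assumed to be 8-bit grayscale images.

    import cv2
    import numpy as np

    def register_thermal_to_visual(thermal, visual):
        """Warp the thermal image into the visual image's pixel frame."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(thermal, None)
        k2, d2 = orb.detectAndCompute(visual, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to outliers
        h, w = visual.shape[:2]
        return cv2.warpPerspective(thermal, H, (w, h))

    # With the images aligned, each server's standardized thermal distribution
    # can be cropped using the same pixel coordinates in both modalities.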

  13. Multiplexed Sequence Encoding: A Framework for DNA Communication.

    PubMed

    Zakeri, Bijan; Carr, Peter A; Lu, Timothy K

    2016-01-01

    Synthetic DNA has great propensity for efficiently and stably storing non-biological information. With DNA writing and reading technologies rapidly advancing, new applications for synthetic DNA are emerging in data storage and communication. Traditionally, DNA communication has focused on the encoding and transfer of complete sets of information. Here, we explore the use of DNA for the communication of short messages that are fragmented across multiple distinct DNA molecules. We identified three pivotal points in a communication-data encoding, data transfer & data extraction-and developed novel tools to enable communication via molecules of DNA. To address data encoding, we designed DNA-based individualized keyboards (iKeys) to convert plaintext into DNA, while reducing the occurrence of DNA homopolymers to improve synthesis and sequencing processes. To address data transfer, we implemented a secret-sharing system-Multiplexed Sequence Encoding (MuSE)-that conceals messages between multiple distinct DNA molecules, requiring a combination key to reveal messages. To address data extraction, we achieved the first instance of chromatogram patterning through multiplexed sequencing, thereby enabling a new method for data extraction. We envision these approaches will enable more widespread communication of information via DNA.
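
    A minimal Python sketch in the spirit of the iKeys idea, mapping characters to codons that avoid homopolymers, follows; the specific character-to-codon table is a hypothetical stand-in, not the authors' keyboard.

    import itertools

    BASES = 'ACGT'
    # Codons with no repeated adjacent base: within-codon runs have length 1,
    # and runs across codon boundaries never exceed two bases.
    CODONS = [c for c in (''.join(p) for p in itertools.product(BASES, repeat=3))
              if c[0] != c[1] and c[1] != c[2]]          # 36 codons available

    ALPHABET = 'abcdefghijklmnopqrstuvwxyz .,!?'          # 31 symbols to encode
    ENCODE = dict(zip(ALPHABET, CODONS))
    DECODE = {v: k for k, v in ENCODE.items()}

    def to_dna(text):
        return ''.join(ENCODE[ch] for ch in text.lower())

    def from_dna(seq):
        return ''.join(DECODE[seq[i:i + 3]] for i in range(0, len(seq), 3))

    # Fragmenting the strand across several molecules, with all fragments needed
    # to reassemble the message, corresponds to the secret-sharing (MuSE) step.
    msg = to_dna('send keys')
    print(msg, '->', from_dna(msg))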

  14. Representation control increases task efficiency in complex graphical representations.

    PubMed

    Moritz, Julia; Meyerhoff, Hauke S; Meyer-Dernbecher, Claudia; Schwan, Stephan

    2018-01-01

    In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such representation control (i.e., the user's option to decide how information should be displayed) for accomplishing an information extraction task, in terms of solution time and accuracy. In the representation control condition, participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients.

  15. Representation control increases task efficiency in complex graphical representations

    PubMed Central

    Meyerhoff, Hauke S.; Meyer-Dernbecher, Claudia; Schwan, Stephan

    2018-01-01

    In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such a representation control (i.e. the users' option to decide how information should be displayed) in order to accomplish an information extraction task in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients. PMID:29698443

  16. Modelling Single Tree Structure with Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Yurtseven, H.; Akgül, M.; Gülci, S.

    2017-11-01

    Recent technological developments in remote sensing tools, which now offer reliable accuracy and quality for all engineering work, have found a wide range of uses in forestry applications. Over the last decade, sustainable use and management of forest resources have become favoured topics. Thus, the precision of the obtained data plays an important role in evaluating the current status of forests' value. The use of aerial and terrestrial laser technology provides more reliable and effective models to advance appropriate natural resource management. This study investigates the use of terrestrial laser scanner (TLS) technology in forestry and explains the methodological data processing stages for tree volume extraction. A Z+F Imager 5010C TLS system was used to measure single-tree information such as tree height, diameter at breast height, branch volume and canopy closure. In this context, more detailed and accurate data can be obtained with TLS systems than with conventional inventory sampling in forestry. However, the accuracy of the obtained data depends on the experience of the TLS operator in the field. The number of scan stations and their positions are other important factors for reducing noise and achieving accurate 3D modelling. The results indicated that the use of point cloud data to extract tree information for forestry applications is a promising methodology for precision forestry.
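    One of the single-tree measurements named above, diameter at breast height, reduces to a small geometric computation on the point cloud. The following is a sketch under stated assumptions (stem points only, ground at z = 0), using an algebraic Kasa circle fit on a thin slab at 1.3 m; it is not the authors' actual processing chain.

    ```python
    import numpy as np

    def estimate_dbh(stem_points, breast_height=1.3, half_slab=0.05):
        """Estimate diameter at breast height (DBH) from a TLS point cloud
        of a single stem. Assumes `stem_points` is an (n, 3) array with the
        ground at z = 0; takes a 10 cm slab around 1.3 m and fits a circle
        to its horizontal projection by linear least squares (Kasa fit)."""
        z = stem_points[:, 2]
        ring = stem_points[np.abs(z - breast_height) < half_slab, :2]
        x, y = ring[:, 0], ring[:, 1]
        # circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + d, linear in (cx, cy, d)
        A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
        (cx, cy, d), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
        radius = np.sqrt(d + cx**2 + cy**2)
        return 2 * radius
    ```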

  17. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus capturing the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel processing method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings, as sketched below. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In the application, we tested the method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
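    A minimal sketch of the threshold-and-cleanup stages follows; the attribute names, cell size, and cut-off values are illustrative assumptions, not the authors' calibrated parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    def extract_markings(xyz, rgb, intensity, cell=0.1,
                         int_thresh=0.7, gray_thresh=0.6):
        """Flag bright, high-intensity points as marking candidates,
        rasterize them onto a horizontal grid, then clean the result with
        a density filter and a morphological opening. Assumes `xyz` (n, 3),
        `rgb` (n, 3) and normalized `intensity` (n,) from a mobile scan."""
        gray = rgb.mean(axis=1) / 255.0                 # grayscale from RGB
        cand = (intensity > int_thresh) & (gray > gray_thresh)
        pts = xyz[cand, :2]
        ij = ((pts - pts.min(axis=0)) / cell).astype(int)
        grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
        np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)        # point count per cell
        mask = grid >= 3                                # density filter vs. noise
        mask = ndimage.binary_opening(mask)             # morphology vs. speckle
        return mask
    ```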

  18. GDRMS: a system for automatic extraction of the disease-centre relation

    NASA Astrophysics Data System (ADS)

    Yang, Ronggen; Zhang, Yue; Gong, Lejun

    2012-01-01

    With the rapid growth of the biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from the huge amount of biomedical literature has become a major challenge. GDRMS is developed as a tool that extracts relationships between diseases and genes, and between genes, from the biomedical literature using text mining technology. It is a rule-based system which also provides disease-centred network visualization, constructs a disease-gene database, and offers a gene engine for understanding the function of a gene. The main focus of GDRMS is to provide the research community with a valuable opportunity to explore the relationship between disease and gene when studying the etiology of disease.
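    A toy rule in the spirit of such a rule-based extractor is shown below; the gene-symbol pattern and trigger phrases are illustrative assumptions, far simpler than GDRMS's actual rule set.

    ```python
    import re

    GENE = r"[A-Z][A-Z0-9]{1,6}"   # crude gene-symbol pattern (assumption)
    RELATION = re.compile(
        rf"\b({GENE})\b\s+(?:is associated with|causes|is linked to)\s+"
        r"([A-Za-z'\- ]+?(?:disease|syndrome|cancer))")

    def extract_pairs(sentence):
        """Match '<GENE> <trigger phrase> <disease name>' and return the
        (gene, disease) pairs found in one sentence."""
        return [(g, d.strip()) for g, d in RELATION.findall(sentence)]

    print(extract_pairs("BRCA1 is associated with hereditary breast cancer"))
    # -> [('BRCA1', 'hereditary breast cancer')]
    ```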

  19. Supporting the Growing Needs of the GIS Industry

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.

  20. Extraction and fusion of spectral parameters for face recognition

    NASA Astrophysics Data System (ADS)

    Boisier, B.; Billiot, B.; Abdessalem, Z.; Gouton, P.; Hardeberg, J. Y.

    2011-03-01

    Many methods have been developed in image processing for face recognition, especially in recent years with the increase of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range of the electromagnetic spectrum. The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either combining them or analyzing them separately in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible Spectrum.

  1. Imaging and Analytics: The changing face of Medical Imaging

    NASA Astrophysics Data System (ADS)

    Foo, Thomas

    There have been significant technological advances in imaging capability over the past 40 years. Medical imaging capabilities have developed rapidly, along with technology development in computational processing speed and miniaturization. With the move to all-digital systems, the number of images acquired in a routine clinical examination has increased dramatically, from under 50 images in the early days of CT and MRI to more than 500-1000 images today. The staggering number of images that are routinely acquired poses significant challenges for clinicians to interpret the data and to correctly identify the clinical problem. Although the time provided to render a clinical finding has not substantially changed, the amount of data available for interpretation has grown exponentially. In addition, the image quality (spatial resolution) and information content (physiologically-dependent image contrast) have also increased significantly with advances in medical imaging technology. On its current trajectory, medical imaging in the traditional sense is unsustainable. To assist in filtering and extracting the most relevant data elements from medical imaging, image analytics will have a much larger role. Automated image segmentation, generation of parametric image maps, and clinical decision support tools will be needed and developed apace to allow the clinician to manage, extract and utilize only the information that will help improve diagnostic accuracy and sensitivity. As medical imaging devices continue to improve in spatial resolution and in functional and anatomical information content, image/data analytics will be more ubiquitous and integral to medical imaging capability.

  2. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  3. [Medical imaging in tumor precision medicine: opportunities and challenges].

    PubMed

    Xu, Jingjing; Tan, Yanbin; Zhang, Minming

    2017-05-25

    Tumor precision medicine is an emerging approach to tumor diagnosis, treatment and prevention that takes into account individual variability in environment, lifestyle and genetic information. Tumor precision medicine is built on the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis and multimodal imaging fusion technology. The development of automated and reproducible analysis algorithms has also allowed large amounts of information to be extracted from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics and artificial intelligence have been flourishing. These new technological advances therefore bring new opportunities and challenges to the application of imaging in tumor precision medicine.

  4. State of the Art, Trends and Future of Bluetooth Low Energy, Near Field Communication and Visible Light Communication in the Development of Smart Cities

    PubMed Central

    Cerruela García, Gonzalo; Luque Ruiz, Irene; Gómez-Nieto, Miguel Ángel

    2016-01-01

    The current social impact of new technologies has produced major changes in all areas of society, creating the concept of a smart city supported by an electronic infrastructure, telecommunications and information technology. This paper presents a review of Bluetooth Low Energy (BLE), Near Field Communication (NFC) and Visible Light Communication (VLC) and their use and influence within different areas of the development of the smart city. The document also presents a review of Big Data Solutions for the management of information and the extraction of knowledge in an environment where things are connected by an “Internet of Things” (IoT) network. Lastly, we present how these technologies can be combined together to benefit the development of the smart city. PMID:27886087

  5. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during a practical class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality, small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  6. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases.

  7. We've Got Plenty of Data, Now How Can We Use It?

    ERIC Educational Resources Information Center

    Weiler, Jeffrey K.; Mears, Robert L.

    1999-01-01

    To mine a large store of school data, a new technology (variously termed data warehousing, data marts, online analytical processing, and executive information systems) is emerging. Data warehousing helps school districts extract and restructure desired data from automated systems and create new databases designed to enhance analytical and…

  8. Extraction of Rotation Information from a Simulated Fiber Optic Gyro Using Amplitude Modulation.

    DTIC Science & Technology

    1983-12-01

    Force Institute of Technology, in June 1982. Permanent address: 590 County Road 207, Durango, Colorado 81301. Daniel John Brett was born on 22 May 1955 in Los Angeles, California. He graduated from high school in Durango, Colorado in 1973 and enlisted in the

  9. An Experimental Investigation of Complexity in Database Query Formulation Tasks

    ERIC Educational Resources Information Center

    Casterella, Gretchen Irwin; Vijayasarathy, Leo

    2013-01-01

    Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…

  10. Web-Based Knowledge Exchange through Social Links in the Workplace

    ERIC Educational Resources Information Center

    Filipowski, Tomasz; Kazienko, Przemyslaw; Brodka, Piotr; Kajdanowicz, Tomasz

    2012-01-01

    Knowledge exchange between employees is an essential feature of recent commercial organisations on the competitive market. Based on the data gathered by various information technology (IT) systems, social links can be extracted and exploited in knowledge exchange systems of a new kind. Users of such a system ask their queries and the system…

  11. Information science team

    NASA Technical Reports Server (NTRS)

    Billingsley, F.

    1982-01-01

    Concerns are expressed about the data handling aspects of system design and about enabling technology for data handling and data analysis. The status, contributing factors, critical issues, and recommendations for investigations are listed for data handling, rectification and registration, and information extraction. Potential support for individual P.I. research tasks, systematic data system design, and system operation is identified. The need for an airborne spectrometer class instrument for fundamental research in high spectral and spatial resolution is indicated. Geographic information system formatting and labelling techniques, very large scale integration, and methods for providing multitype data sets must also be developed.

  12. Big Data Technologies: New Opportunities for Diabetes Management.

    PubMed

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-04-24

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patients' care processes and of single patients' behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission.

  13. Full-field optical coherence tomography used for security and document identity

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Mao, Youxin; Sherif, Sherif; Flueraru, Costel

    2006-09-01

    Optical coherence tomography (OCT) is an emerging technology for high-resolution cross-sectional imaging of 3D structures. In past years, OCT systems have been used mainly for medical, especially ophthalmological, diagnostics. Because an OCT system is capable of exploring the internal features of an object, we apply OCT technology to directly retrieve 2D information pre-stored in a multiple-layer information carrier. The standard depth resolution of an OCT system is at the micrometer level. If a 20 mm by 20 mm sampling area with a 1024 x 1024 CCD array is used in an OCT system with 10 μm depth resolution, an information carrier with a volume of 20 mm x 20 mm x 2 mm could contain about 200 megapixels of images. Because of its tiny size and large information volume, the information carrier, with its OCT retrieval system, will have potential applications in document security and object identification. In addition, as the information carrier can be made of low-scattering transparent material, the signal/noise ratio will be improved dramatically. As a consequence, the specific hardware and complicated software can also be greatly simplified. Owing to the absence of scanning along the X-Y axes, full-field OCT could be the simplest and most economical imaging system for extracting information from such a multilayer information carrier. In this paper, the design and implementation of a full-field OCT system are described and the related algorithms are introduced. In our experiments, a four-layer information carrier is used, which contains four layers of image patterns: two text images and two fingerprint images. The extracted tomography images of each layer are also provided.
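    The quoted capacity follows directly from the stated geometry, as a quick arithmetic check shows: a 2 mm thick carrier resolved at 10 μm yields 200 depth layers, each sampled by the 1024 x 1024 CCD.

    ```latex
    N_{\text{layers}} = \frac{2\,\mathrm{mm}}{10\,\mu\mathrm{m}} = 200,
    \qquad
    N_{\text{voxels}} = 1024 \times 1024 \times 200 \approx 2.1 \times 10^{8}
    ```

    That is roughly the 200 megapixels quoted in the abstract.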

  14. Supporting health professionals through information and communication technologies: a systematic review of the effects of information and communication technologies on recruitment and retention.

    PubMed

    Gagnon, Marie-Pierre; Pollender, Hugo; Trépanier, Amélie; Duplàa, Emmanuel; Ly, Birama Apho

    2011-05-01

    Healthcare personnel shortage is a growing concern in many countries, especially in remote areas, where it has major consequences for the accessibility of health services. Information and communication technologies (ICTs) have often been proposed as having positive effects on certain dimensions of the recruitment and retention of professionals working in the healthcare sector. This study aims to explore the impact of interventions using ICTs on the recruitment and retention of healthcare professionals. A systematic review of the literature was conducted, including the following steps: exploration of the scientific and gray literature through established criteria, and extraction of relevant information by two independent reviewers. Of the 2,225 screened studies, 13 were included. Nine studies showed a positive, often indirect, influence of ICTs on recruitment and retention. Although 9 of 13 studies reported a possible positive influence of ICTs on the recruitment and retention of healthcare professionals, these results highlight the need for deeper reflection on the topic. Therefore, more research is needed.

  15. The Importance of Data Quality in Using Health Information Exchange (HIE) Networks to Improve Health Outcomes: Case Study of a HIE Extracted Dataset of Patients with Congestive Heart Failure Participating in a Regional HIE

    ERIC Educational Resources Information Center

    Cartron-Mizeracki, Marie-Astrid

    2016-01-01

    Expenditures on health information technology (HIT) by healthcare organizations are growing exponentially, and its value is the subject of criticism and skepticism. Because HIT is viewed as capable of improving major health care indicators, the government offers incentives to health care providers and organizations to implement solutions.…

  16. Extraction fatty acid as a source to produce biofuel in microalgae Chlorella sp. and Spirulina sp. using supercritical carbon dioxide

    NASA Astrophysics Data System (ADS)

    Tai, Do Chiem; Hai, Dam Thi Thanh; Vinh, Nguyen Hanh; Phung, Le Thi Kim

    2016-06-01

    In this research, the fatty acids of isolated microalgae were extracted by several technologies, including maceration, Soxhlet extraction, ultrasonic-assisted extraction and supercritical fluid extraction, and analyzed for biodiesel production using GC-MS. This work deals with the extraction of microalgae oil from dry biomass using the supercritical fluid extraction method. A complete laboratory study of the influence of several parameters on the extraction kinetics and yields, and on the composition of the oil in terms of lipid classes and profiles, is proposed. Two types of microalgae were studied: Chlorella sp. and Spirulina sp. For the extraction of oil from microalgae, supercritical CO2 (SC-CO2) is regarded with interest, being safer than n-hexane and offering negligible environmental impact, a short extraction time and a high-quality final product. While some experimental papers are available on the supercritical fluid extraction (SFE) of oil from microalgae, only limited information exists on the kinetics of the process. These results demonstrate that supercritical CO2 extraction is an efficient method for the complete recovery of the neutral lipid phase.

  17. Informative frame detection from wireless capsule video endoscopic images

    NASA Astrophysics Data System (ADS)

    Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito

    2008-03-01

    Wireless capsule endoscopy (WCE) is a new clinical technology permitting the visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the large amount of time required for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or contaminated with other materials, e.g., faecal matter and semi-processed or unabsorbed food. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme, therefore, consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by a Support Vector Machine (SVM) classifier in Stage 1. The remaining frames go to Stage 2, where Laguerre-Gauss Circular Harmonic Functions (LG-CHFs) extract the characteristics of the bubble structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by applying a threshold operation to the extracted regions. An experiment with 20,558 frames from three videos shows excellent average detection accuracy (96.75%) for the proposed method, compared with Gabor-based (74.29%) and discrete-wavelet-based (62.21%) features.
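    The Stage 1 features admit a compact sketch. The code below computes block-wise color moments in the Ohta space (I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4, the standard definitions) and hands them to an SVM; the block count and kernel are assumptions, and annotated training frames are assumed available.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def ohta(rgb):
        """RGB -> Ohta color space channels I1, I2, I3."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.stack([(r + g + b) / 3, (r - b) / 2, (2 * g - r - b) / 4],
                        axis=-1)

    def local_color_moments(rgb, blocks=4):
        """Mean and standard deviation of each Ohta channel over a
        blocks x blocks grid of patches -> one feature vector per frame."""
        img = ohta(rgb.astype(float))
        h, w = img.shape[:2]
        feats = []
        for i in range(blocks):
            for j in range(blocks):
                patch = img[i * h // blocks:(i + 1) * h // blocks,
                            j * w // blocks:(j + 1) * w // blocks]
                feats += [patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))]
        return np.concatenate(feats)

    # Stage 1, given annotated frames (assumed available):
    # X = np.array([local_color_moments(f) for f in frames])
    # clf = SVC(kernel="rbf").fit(X, labels)   # HCN vs. other frames
    ```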

  18. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase in computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions such as: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  19. Volume and Value of Big Healthcare Data

    PubMed Central

    Dinov, Ivo D.

    2016-01-01

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase in computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions such as: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309

  20. [Study on extraction technology of soyasaponins from residual of bean ware].

    PubMed

    Lu, Rumei; Zhang, Yizhen; Bi, Yi

    2003-04-01

    To find the optimum extraction technology for soyasaponins from the residue of bean ware, the optimum extraction conditions were investigated by orthogonal design, and the content of soyasaponins was determined by UV spectrophotometry. The optimum extraction technology was A3B1C1, that is, extracting twice with 70% alcohol, adding a 7-fold amount of solvent for the first extraction and a 6-fold amount for the second, and refluxing for 1.0 h each time. The selected technology showed a higher yield of soyasaponins, good stability and high efficiency.

  1. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still has some unavoidable problems. Pixel-based tracking cannot reflect the real position of targets, and it makes tracking objects across scenes difficult. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, obtained by converting the 2-D image coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can accurately obtain the trajectory of a target in space.
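    The 2-D-to-3-D conversion can be illustrated with the standard pinhole model. Below is a minimal sketch, assuming intrinsics K and extrinsics (R, t) from a Zhang-style calibration and assuming the tracked target lies on the Z = 0 ground plane; this is the classical plane back-projection, not necessarily the paper's exact formulation.

    ```python
    import numpy as np

    def pixel_to_ground(u, v, K, R, t):
        """Back-project pixel (u, v) to world coordinates on the Z = 0
        ground plane. For plane points, s*[u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T,
        so inverting the homography H = K [r1 r2 t] recovers (X, Y).
        A real tracker would also undistort the pixel first."""
        H = K @ np.column_stack([R[:, 0], R[:, 1], t])
        xw = np.linalg.solve(H, np.array([u, v, 1.0]))
        return xw[:2] / xw[2]   # (X, Y) on the ground plane
    ```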

  2. Technical design and system implementation of region-line primitive association framework

    NASA Astrophysics Data System (ADS)

    Wang, Min; Xing, Jinjin; Wang, Jie; Lv, Guonian

    2017-08-01

    Apart from regions, image edge lines are an important information source, and they deserve more attention in object-based image analysis (OBIA) than they currently receive. In the region-line primitive association framework (RLPAF), we promote straight-edge lines to line primitives to achieve more powerful OBIA. Along with regions, straight lines become basic units for subsequent extraction and analysis of OBIA features. This study develops a new software system called remote-sensing knowledge finder (RSFinder) to implement RLPAF for engineering application purposes. This paper introduces the extended technical framework, a comprehensively designed feature set, the key technology, and the software implementation. To our knowledge, RSFinder is the world's first OBIA system based on two types of primitives, namely regions and lines. It is fundamentally different from other well-known region-only-based OBIA systems, such as eCognition and the ENVI feature extraction module. This paper provides an important reference for the development of similarly structured OBIA systems and line-involved remote sensing information extraction algorithms.

  3. Information Extraction in Tomb Pit Using Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Yang, X.; Hou, M.; Lyu, S.; Ma, S.; Gao, Z.; Bai, S.; Gu, M.; Liu, Y.

    2018-04-01

    Hyperspectral data is characterized by many continuous bands, large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the bottom images of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be meaningful texts or paintings. First, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest. For convenience of computation and storage, the original reflectance values are multiplied by 10,000. Second, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, yielding fifteen reference images. The results show that information extraction based on hyperspectral data can provide a better visual experience, which is beneficial to the study of ancient tombs by researchers and provides references for archaeological research findings.

  4. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  5. Genome Sequence of Stachybotrys chartarum Strain 51-11

    PubMed Central

    Kim, Jean; Levy, Josh

    2015-01-01

    The Stachybotrys chartarum strain 51-11 genome was sequenced by shotgun sequencing utilizing Illumina HiSeq 2000 and PacBio technologies. Since S. chartarum has been implicated as having health impacts within water-damaged buildings, any information extracted from the genomic sequence data relating to toxins or the metabolism of the fungus might be useful. PMID:26430036

  6. [Optimization of extraction technology from Paeoniae Radix Alba using response surface methodology].

    PubMed

    Jin, Lin; Zhao, Wan-shun; Guo, Qiao-sheng; Zhang, Wen-sheng; Ye, Zheng-liang

    2015-08-01

    To ensure the stability of the chemical components and convenience of operation, the ultrasound method was chosen for this investigation. With the total common-peak area in chromatograms as the evaluation index, the influence of extraction time, ethanol concentration and liquid-to-solid ratio on the technology was studied using single-factor methodology, and the extraction technology of Paeoniae Radix Alba was optimized using response surface methodology. The results showed that extraction was most affected by ethanol concentration, followed by liquid-to-solid ratio and then extraction time. The optimum ultrasonic-assisted extraction conditions were as follows: ultrasonic extraction time 20.06 min, ethanol concentration 72.04%, and liquid-to-solid ratio 53.38 mL·g(-1); the predicted total common-peak area was 2.1608 x 10(8). Under the optimized extraction conditions, the measured total common-peak area was 2.1422 x 10(8), a relative deviation of 0.86% from the predicted value, so the optimized extraction technology for Paeoniae Radix Alba is suitable and feasible. Moreover, for more sufficient and complete extraction, the optimized technology has advantages over the extraction method recorded in the Paeoniae Radix Alba monograph of the Chinese Pharmacopoeia, enabling comprehensive assessment and utilization.
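    The response-surface step can be sketched generically: fit a second-order polynomial to the design points and solve for its stationary point. The code below is a minimal sketch of that idea, not the authors' software; the design matrix X (time, ethanol %, liquid-to-solid ratio) and responses y are assumed to come from the experiment, and the stationary point must still be checked to be a maximum inside the experimental region.

    ```python
    import numpy as np

    def fit_quadratic_surface(X, y):
        """Fit y = b0 + sum(bi*xi) + sum(bij*xi*xj) by least squares and
        return the stationary point of the fitted surface."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
        beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        b = beta[1:1 + k]
        # assemble the symmetric quadratic-form matrix Q from coefficients
        Q = np.zeros((k, k)); idx = 1 + k
        for i in range(k):
            for j in range(i, k):
                Q[i, j] = Q[j, i] = beta[idx] / (1 if i == j else 2)
                idx += 1
        # gradient b + 2*Q*x = 0  ->  stationary point
        return np.linalg.solve(-2 * Q, b)
    ```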

  7. [Investigation on Spray Drying Technology of Auricularia auricular Extract].

    PubMed

    Zhou, Rong; Chen, Hui; Xie, Yuan; Chen, Peng; Wang, Luo-lin

    2015-07-01

    To investigate the feasibility of spray drying technology for Auricularia auricular extract and its optimum process, an orthogonal test method was used, on the basis of single-factor tests, to optimize the spray drying technology with respect to inlet air temperature, injection speed and crude drug content, taking the yield of dry extract and the content of polysaccharide as indexes. Using ultraviolet spectrophotometry, thin layer chromatography (TLC) and pharmacodynamics as indicators, extracts prepared by the traditional alcohol-precipitation drying process and by the spray drying process were compared. Compared with the traditional preparation method, the extract prepared by spray drying differed little in polysaccharide content, TLC profile and TG- and TC-reducing activity, and its optimum technology conditions were as follows: inlet air temperature 180 °C, injection speed 10 ml/min and crude drug content 0.4 g/mL. Spray drying of Auricularia auricular extract is stable and feasible, with high economic benefit.

  8. Optical Security System Based on the Biometrics Using Holographic Storage Technique with a Simple Data Format

    NASA Astrophysics Data System (ADS)

    Jun, An Won

    2006-01-01

    We implement a first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. The optical information used for identification includes a picture of a face, a name, and a fingerprint, which have been spatially multiplexed by a random phase mask used as a decryption key. For decryption in our biometric security system, a bit-error-detection method is used that compares the digital bits of the live fingerprint with those of the fingerprint information extracted from the hologram.

  9. Distributed telemedicine for the National Information Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools, along with innovative concept extraction of image information, for storing and accessing medical records, developed in a separate project in 1994-95. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  10. Modern Hardware Technologies and Software Techniques for On-Line Database Storage and Access.

    DTIC Science & Technology

    1985-12-01

    of the information in a message narrative. This method employs artificial intelligence techniques to extract information. In simplest terms, an ... distribution (tape replacement) systems, database distribution, on-line mass storage, videogame ROM (juke-box) ... media cost ... training of great intelligence for the analyst would be required. If, on the other hand, a sentence analysis scheme simple enough for the low-level

  11. Semantic Technologies for Re-Use of Clinical Routine Data.

    PubMed

    Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan

    2017-01-01

    Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.

  12. Identification Method of Mud Shale Fractures Base on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Xia, Weixu; Lai, Fuqiang; Luo, Han

    2018-01-01

    In recent years, inspired by seismic analysis technology, a new method has emerged for analysing fractures in mud shale oil and gas reservoirs from logging properties. By extracting the high-frequency component of the wavelet transform of the logging attributes, the formation information hidden in the logging signal is recovered, identifying fractures that are not recognized by conventional logging; in the identified fracture segments, the "cycle jump", "high value", "spike" and other responses are more obvious. Finally, a complete wavelet denoising method and a wavelet high-frequency fracture identification method were formed.
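    The high-frequency extraction step can be sketched with a standard wavelet decomposition. The code below flags depths where the finest-scale detail of a logging curve spikes; the wavelet family, decomposition level, and robust threshold are assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def fracture_indicator(log_curve, wavelet="db4", level=3, k=3.0):
        """Flag samples where the finest-scale wavelet detail of a logging
        curve exceeds a robust threshold, mimicking the 'cycle jump' /
        'spike' responses described for fracture segments."""
        coeffs = pywt.wavedec(log_curve, wavelet, level=level)
        detail = pywt.upcoef("d", coeffs[-1], wavelet, level=1,
                             take=len(log_curve))     # finest detail band
        sigma = np.median(np.abs(detail)) / 0.6745    # robust noise estimate
        return np.abs(detail) > k * sigma
    ```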

  13. Biological network extraction from scientific literature: state of the art and challenges.

    PubMed

    Li, Chen; Liakata, Maria; Rebholz-Schuhmann, Dietrich

    2014-09-01

    Networks of molecular interactions explain complex biological processes, and all known information on molecular events is contained in a number of public repositories including the scientific literature. Metabolic and signalling pathways are often viewed separately, even though both types are composed of interactions involving proteins and other chemical entities. It is necessary to be able to combine data from all available resources to judge the functionality, complexity and completeness of any given network overall, but especially the full integration of relevant information from the scientific literature is still an ongoing and complex task. Currently, the text-mining research community is steadily moving towards processing the full body of the scientific literature by making use of rich linguistic features such as full text parsing, to extract biological interactions. The next step will be to combine these with information from scientific databases to support hypothesis generation for the discovery of new knowledge and the extension of biological networks. The generation of comprehensive networks requires technologies such as entity grounding, coordination resolution and co-reference resolution, which are not fully solved and are required to further improve the quality of results. Here, we analyse the state of the art for the extraction of network information from the scientific literature and the evaluation of extraction methods against reference corpora, discuss challenges involved and identify directions for future research.

  14. The ISES: A non-intrusive medium for in-space experiments in on-board information extraction

    NASA Technical Reports Server (NTRS)

    Murray, Nicholas D.; Katzberg, Stephen J.; Nealy, Mike

    1990-01-01

    The Information Science Experiment System (ISES) represents a new approach in applying advanced systems technology and techniques to on-board information extraction in the space environment. Basically, what is proposed is a 'black box' attached to the spacecraft data bus or local area network. To the spacecraft, the 'black box' appears to be just another payload requiring power, heat rejection, and interfaces, adding weight, and requiring time on the data management and communication system. In reality, the 'black box' is a programmable computational resource which eavesdrops on the data network, taking and producing selectable, real-time science data back on the network. This paper will present a brief overview of the ISES concept and will discuss issues related to applying the ISES to the polar platform and Space Station Freedom. Critical to the operation of ISES is the viability of a payload-like interface to the spacecraft data bus or local area network. Study results that address this question will be reviewed vis-a-vis the polar platform and the core space station. Also, initial results of processing science and other requirements for onboard, real-time information extraction will be presented with particular emphasis on the polar platform. Opportunities for a broader range of applications on the core space station will also be discussed.

  15. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army ... topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  16. Security inspection in ports by anomaly detection using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Rivera, Javier; Valverde, Fernando; Saldaña, Manuel; Manian, Vidya

    2013-05-01

    Applying hyperspectral imaging technology to port security is crucial for the detection of possible threats or illegal activities. One of the most common problems that cargo suffers is tampering. This represents a danger to society because it creates a channel for smuggling illegal and hazardous products. If a cargo is altered, security inspections of that cargo should reveal anomalies that indicate the nature of the tampering. Hyperspectral images can detect anomalies by gathering information across multiple electromagnetic bands. The spectra extracted from these bands can be used to detect surface anomalies from different materials. Based on this technology, a scenario was built in which a hyperspectral camera was used to inspect cargo for surface anomalies, with a user interface showing the results. The spectra of items altered by different materials that can be used to conceal illegal products are analyzed and classified in order to provide information about the tampered cargo. The image is analyzed with a variety of techniques such as multiple feature extraction algorithms, autonomous anomaly detection, and target spectrum detection. The results are exported to a workstation or mobile device and shown in an easy-to-use interface. This process could enhance the capabilities of security systems that are already implemented, providing a more complete approach to detecting threats and illegal cargo.
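    A common baseline for autonomous anomaly detection on hyperspectral cubes is the global RX detector, sketched below; it is a standard stand-in, not necessarily the detector used in the system described above.

    ```python
    import numpy as np

    def rx_anomaly_scores(cube):
        """Global RX detector: score each pixel spectrum by its Mahalanobis
        distance from the scene-wide background statistics. `cube` is an
        (h, w, bands) hyperspectral image; high scores mark anomalies."""
        h, w, b = cube.shape
        X = cube.reshape(-1, b).astype(float)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        d = X - mu
        scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
        return scores.reshape(h, w)
    ```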

  17. Evaluation of nuclear-facility decommissioning projects. Summary report: Ames Laboratory Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, B.W.; Miller, R.L.

    1983-07-01

    This document summarizes the available information concerning the decommissioning of the Ames Laboratory Research Reactor (ALRR), a five-megawatt heavy water moderated and cooled research reactor. The data were placed in a computerized information retrieval/manipulation system which permits its future utilization for purposes of comparative analysis. This information is presented both in detail in its computer output form and also as a manually assembled summarization which highlights the more important aspects of the decommissioning program. Some comparative information with reference to generic decommissioning data extracted from NUREG/CR 1756, Technology, Safety and Costs of Decommissioning Nuclear Research and Test Reactors, is included.

  18. [Optimization study on extraction technology of the seed of Ziziphus jujuba var. spinosa by orthogonal design with multi-targets].

    PubMed

    Wang, Xiao-liang; Zhang, Yu-jie; Chen, Ming-xia; Wang, Ze-feng

    2005-05-01

    To optimize the extraction technology for the seed of Ziziphus jujuba var. spinosa, with total saponins, total jujubosides A and B, and total flavonoids as the targets, one-way and orthogonal tests were used; ethanol concentration, amount of ethanol, extraction time and number of extractions were the factors in the orthogonal test, each at three levels. Ethanol concentration and the number of extractions had significant effects on all targets; the other factors should be selected in accordance with production practice. The best extraction technology is to extract three times with an 8-fold amount of 60% ethanol, for 1.5 h each time.

  19. TERRA-KLEEN RESPONSE GROUP, INC. SOLVENT EXTRACTION TECHNOLOGY: INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This report summarizes the results of a field demonstration conducted under the SITE program. The technology which was demonstrated was a solvent extraction technology developed by Terra-Kleen Response Group. Inc. to remove organic contaminants from soil. The technology employs...

  20. Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.

    PubMed

    Dimin Wang; Zhang, David; Guangming Lu

    2017-07-01

    Traditional Chinese pulse diagnosis, known as an empirical science, depends on subjective experience, so inconsistent diagnostic results may be obtained by different practitioners. A scientific way of studying the pulse is to analyze objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with advances in sensor and computer technology, and pulse diagnosis using pattern recognition theories is attracting increasing attention. Though much literature on pulse feature extraction has been published, it handles the pulse signals as simple 1-D time series and ignores the information within the class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from a 1-D time series to a 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both periodic and nonperiodic information, and it is practical for wrist pulse analysis.
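    The 1-D-to-2-D idea can be illustrated by stacking beats into rows. The sketch below slices a pulse record at detected beat onsets and resamples each beat to a fixed length, so rows index periods and columns index within-beat phase; the peak-detection parameters are assumptions, and the paper's generalized models are richer than this construction.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def pulse_matrix(signal, fs, n_samples=128):
        """Build a 2-D matrix from a 1-D wrist pulse record: one row per
        beat, each beat linearly resampled to n_samples points. `fs` is the
        sampling rate in Hz; the 0.4 s minimum beat spacing is an assumption."""
        peaks, _ = find_peaks(signal, distance=int(0.4 * fs))
        rows = []
        for a, b in zip(peaks[:-1], peaks[1:]):
            beat = signal[a:b]
            xi = np.linspace(0, len(beat) - 1, n_samples)
            rows.append(np.interp(xi, np.arange(len(beat)), beat))
        return np.array(rows)   # shape: (n_beats - 1, n_samples)
    ```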

  1. Ancient DNA in historical parchments - identifying a procedure for extraction and amplification of genetic material.

    PubMed

    Lech, T

    2016-05-06

    Historical parchments in the form of documents, manuscripts, books, or letters, make up a large portion of cultural heritage collections. Their priceless historical value is associated with not only their content, but also the information hidden in the DNA deposited on them. Analyses of ancient DNA (aDNA) retrieved from parchments can be used in various investigations, including, but not limited to, studying their authentication, tracing the development of the culture, diplomacy, and technology, as well as obtaining information on the usage and domestication of animals. This article proposes and verifies a procedure for aDNA recovery from historical parchments and its appropriate preparation for further analyses. This study involved experimental selection of an aDNA extraction method with the highest efficiency and quality of extracted genetic material, from among the multi-stage phenol-chloroform extraction methods, and the modern, column-based techniques that use selective DNA-binding membranes. Moreover, current techniques to amplify entire genetic material were questioned, and the possibility of using mitochondrial DNA for species identification was analyzed. The usefulness of the proposed procedure was successfully confirmed in identification tests of historical parchments dating back to the 13-16th century AD.

  2. An integrated system for land resources supervision based on the IoT and cloud computing

    NASA Astrophysics Data System (ADS)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  3. Changes during storage of quality parameters and in vitro antioxidant activity of extra virgin monovarietal oils obtained with two extraction technologies.

    PubMed

    Fadda, C; Del Caro, A; Sanguinetti, A M; Urgeghe, P P; Vacca, V; Arca, P P; Piga, A

    2012-10-01

    Extraction technology has a great effect on the quality of olive oils. This paper studied 18 months of storage of two Sardinian extra virgin monovarietal oils obtained with a traditional technology and with a low-oxidative-stress technology. Oil samples were subjected to the following chemical analyses: acidity, peroxide value, ultraviolet light absorption K₂₃₂ and K₂₇₀, carotenoids, chlorophylls, tocopherols and total polyphenols. The antioxidant capacity of the oils, the polyphenol extract and the oil extract (remaining after polyphenol extraction) was also determined as radical scavenging activity. The results show that both extraction technologies resulted in minor changes in legal and quality indices during storage, surely due to the high quality of the oils as well as to the very good storage conditions used. Oils obtained with the low-oxidative-stress technology showed lower peroxide value and acidity and up to 103% higher total polyphenol content, as well as increased radical-scavenging activity, with respect to oils obtained with the traditional technology.

  4. Medical equipment classification: method and decision-making support based on paraconsistent annotated logic.

    PubMed

    Oshiyama, Natália F; Bassani, Rosana A; D'Ottaviano, Itala M L; Bassani, José W M

    2012-04-01

    As technology evolves, the role of medical equipment in the healthcare system, as well as technology management, becomes more important. Although large databases containing management information are now common, extracting useful information from them is still difficult. A useful tool for identifying frequently failing equipment, which increases maintenance cost and downtime, would be classification according to corrective maintenance data. Nevertheless, the establishment of classes may create inconsistencies, since an item may be equally close to two classes. Paraconsistent logic can help solve this problem, as it allows the existence of inconsistent (contradictory) information without trivialization. In this paper, a methodology for medical equipment classification based on ABC analysis of corrective maintenance data is presented, complemented with a paraconsistent annotated logic analysis, which may enable the decision maker to take into consideration alerts created by the identification of inconsistencies and indeterminacies in the classification.
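    The ABC step is easy to make concrete. The sketch below uses the conventional 80%/95% cumulative-cost cut-offs, which are assumptions; the paraconsistent handling of items near a class boundary is not sketched here.

    ```python
    import numpy as np

    def abc_classify(costs, a_cut=0.8, b_cut=0.95):
        """ABC analysis of corrective-maintenance cost per equipment item:
        items covering the first a_cut of cumulative cost are class A,
        up to b_cut class B, the remainder class C."""
        costs = np.asarray(costs, dtype=float)
        order = np.argsort(costs)[::-1]             # most expensive first
        cum = np.cumsum(costs[order]) / costs.sum()
        labels = np.empty(len(costs), dtype="<U1")
        labels[order] = np.where(cum <= a_cut, "A",
                                 np.where(cum <= b_cut, "B", "C"))
        return labels

    print(abc_classify([500, 300, 90, 60, 30, 20]))  # e.g. A A B C C C
    ```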

  5. Information Technology in Critical Care: Review of Monitoring and Data Acquisition Systems for Patient Care and Research

    PubMed Central

    De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185

  6. Information technology in critical care: review of monitoring and data acquisition systems for patient care and research.

    PubMed

    De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.

  7. Genome Sequence of Stachybotrys chartarum Strain 51-11.

    PubMed

    Betancourt, Doris A; Dean, Timothy R; Kim, Jean; Levy, Josh

    2015-10-01

    The Stachybotrys chartarum strain 51-11 genome was sequenced by shotgun sequencing utilizing Illumina HiSeq 2000 and PacBio technologies. Since S. chartarum has been implicated as having health impacts within water-damaged buildings, any information extracted from the genomic sequence data relating to toxins or the metabolism of the fungus might be useful. Copyright © 2015 Betancourt et al.

  8. 78 FR 60208 - Oil and Gas and Sulphur Operations in the Outer Continental Shelf-Adjustment of Service Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ... gas resources regulations to update some fees that cover BSEE's cost of processing and filing certain... natural gas on the OCS and to reflect advancements in technology and new information. The BSEE also..., Crude Petroleum and Natural Gas Extraction, and 213111, Drilling Oil and Gas Wells. For these NAICS code...

  9. Research and technology, fiscal year 1985

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Capabilities in spacecraft subsystems, sensors, space communications and navigation, the acquisition of data from space missions and the extraction of information from that data were reviewed. The use of satellite data in the study of the Earth's atmosphere and climate, the dynamics of its crust and the monitoring of land and water resources were examined. A review of NASA flight projects for 1985 was presented.

  10. Optimum Extraction, Characterization, and Antioxidant Activities of Polysaccharides from Flowers of Dendrobium devonianum

    PubMed Central

    Wang, Donghui; Fan, Bei; Wang, Yan; Zhang, Lijing

    2018-01-01

    Response surface methodology (RSM) was employed to optimize the conditions for the ultrasonic-assisted extraction (UAE) of polysaccharides from the flowers of Dendrobium devonianum (DDFPs). The optimal conditions for the maximum yield of DDFPs are as follows: an extraction temperature of 63.13°C, an extraction time of 53.10 min, and a water-to-raw-material ratio of 22.11 mL/g. Furthermore, three fractions (DDFPs30, DDFPs50, and DDFPs70) were prepared from the DDFPs by the stepwise ethanol precipitation method. DDFPs50 exhibited the highest antioxidant activity of the three fractions. The molecular weight, polydispersity, and conformation of these fractions were also characterized. In particular, monosaccharide composition analysis of the DDFPs indicates that mannose and glucose are the primary components, similar to those of the D. officinale plant. This study provides a rapid extraction technology and essential information for the production of DDFPs, which could potentially be used as a healthcare food. PMID:29581723
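
    As a rough illustration of the response-surface step, the sketch below fits a full second-order model of yield against two of the three factors (temperature and time) and reads the optimum off the fitted surface. All design points and yields are invented placeholders, not the paper's data; a real RSM study would use a proper experimental design over all three factors.

        # Sketch: fit a quadratic response surface to (temperature, time)
        # design points and locate the grid maximum of the fitted yield.
        import numpy as np

        temps = np.array([55, 55, 55, 63, 63, 63, 70, 70, 70], dtype=float)
        times = np.array([40, 53, 65, 40, 53, 65, 40, 53, 65], dtype=float)
        yields = np.array([3.0, 3.4, 3.2, 3.6, 4.2, 3.9, 3.3, 3.7, 3.4])

        def features(t, m):  # full second-order model in two factors
            return np.column_stack([np.ones_like(t), t, m, t * m, t**2, m**2])

        beta, *_ = np.linalg.lstsq(features(temps, times), yields, rcond=None)

        # Evaluate the fitted surface on a fine grid and report its maximum.
        tg, mg = np.meshgrid(np.linspace(55, 70, 151), np.linspace(40, 65, 251))
        surface = features(tg.ravel(), mg.ravel()) @ beta
        best = np.argmax(surface)
        print(f"predicted optimum: {tg.ravel()[best]:.1f} C, {mg.ravel()[best]:.1f} min")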

  11. A novel non-contact radar sensor for affective and interactive analysis.

    PubMed

    Lin, Hong-Dun; Lee, Yen-Shien; Shih, Hsiang-Lan; Chuang, Bor-Nian

    2013-01-01

    Currently, many physiological signal sensing techniques are applied to affective analysis in human-computer interaction applications. Most mature sensing methods (EEG, ECG, EMG, temperature, blood pressure, etc.) rely on contact with the body to obtain the physiological information needed for analysis. However, contact-based methods can be inconvenient and uncomfortable, and they are not easy to use for affective analysis during interactive performance. To address this issue, a novel technology based on low-power radar (Nanosecond Pulse Near-field Sensing, NPNS) operating at a 300 MHz radio frequency was proposed to detect a person's pulse signal in a non-contact way for heartbeat signal extraction. In this paper, a modified nonlinear HRV calculation algorithm was also developed and applied to analysing affective status using peak-to-peak interval (PPI) information extracted from the detected pulse signal. The proposed affective analysis method is designed to collect physiological signals continuously and was validated in a preliminary experiment with sound, light and motion interactive performance. As a result, the mean bias between PPI (from NPNS) and RRI (from ECG) is less than 1 ms, and the correlation is over 0.88.
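
    A minimal sketch of the PPI step under stated assumptions: peaks are detected in a synthetic pulse-like waveform (standing in for the radar output), peak-to-peak intervals are formed, and they are compared against reference R-R intervals, mirroring the bias/correlation validation reported above. The signal, sampling rate and detection thresholds are invented.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(0)
        fs = 250.0                                    # sampling rate, Hz
        t = np.arange(0, 30, 1 / fs)
        # Pulse-like waveform at ~72 bpm with mild rate modulation; a real
        # system would read this from the NPNS sensor instead.
        phase = 2 * np.pi * (1.2 * t + 0.05 * np.sin(2 * np.pi * 0.25 * t))
        pulse = np.clip(np.sin(phase), 0, None) ** 8
        pulse += 0.02 * rng.standard_normal(t.size)   # measurement noise

        peaks, _ = find_peaks(pulse, height=0.5, distance=int(0.4 * fs))
        ppi = np.diff(peaks) / fs * 1000.0            # peak-to-peak intervals, ms

        # Stand-in for simultaneously recorded ECG R-R intervals.
        rri = ppi + rng.standard_normal(ppi.size) * 2.0
        print(f"mean bias: {np.mean(ppi - rri):+.2f} ms")
        print(f"correlation: {np.corrcoef(ppi, rri)[0, 1]:.3f}")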

  12. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of video collections growing to thousands of hours, technology is needed to browse segments effectively in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, with compaction as high as 20:1, and yet retains the essential content of the original segment.

  13. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which creates a need for bioimaging platforms that perform feature extraction and analysis of the text and content in biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, the methodologies used, the results produced, the accuracies achieved and their limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  14. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels' worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full; subsequent information is then lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  15. Smart Health - Potential and Pathways: A Survey

    NASA Astrophysics Data System (ADS)

    Arulananthan, C.; Hanifa, Sabibullah Mohamed

    2017-08-01

    Healthcare is an imperative field of research in which individuals or groups can engage in the self-tracking of any kind of biological, physical, behavioral, or environmental information. Valuable information lies hidden within massive health care data, and the quantity of available unstructured data has been expanding on an exponential scale. Newly developing disruptive technologies can handle many of the challenges facing data analysis and can extract valuable information via data analytics. Connected wellness in healthcare retrieves a patient's physiological, pathological and behavioral parameters through sensors in order to analyse the inner workings of the human body. Disruptive technologies can take health care from a reactive, illness-driven system to a proactive, wellness-driven one; striving to create such a smart health system addresses today's biggest problem in health care. Wellness-driven analytics applications help promote a healthy living environment, called "Smart Health", and deliver an empowered quality of living. The contributions of this survey reveal and open up (touching as-yet-uncovered areas) possible directions in the line of research on smart health and its computing technologies.

  16. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas

    The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.

  17. High resolution remote sensing information identification for characterizing uranium mineralization setting in Namibia

    NASA Astrophysics Data System (ADS)

    Zhang, Jie-Lin; Wang, Jun-hu; Zhou, Mi; Huang, Yan-ju; Xuan, Yan-xiu; Wu, Ding

    2011-11-01

    Modern Earth Observation System (EOS) technology plays an important role in uranium geological exploration, and high resolution remote sensing, as a key part of EOS, is vital for characterizing the spectral and spatial information of uranium mineralization factors. Utilizing satellite high-spatial-resolution and hyperspectral remote sensing data (QuickBird, Radarsat2, ASTER), field spectral measurements (ASD data) and geological survey, this paper established the spectral identification characteristics of uranium mineralization factors, including six different types of alaskite, the lower and upper marble of the Rössing formation, dolerite, alkali metasomatism, hematization and chloritization in the central zone of the Damara Orogen, Namibia. Moreover, by adopting texture information identification technology, the geographical distribution zones of ore-controlling faults and the boundaries between different strata were delineated. Based on the above approaches, remote sensing geological anomaly information and image interpretation signs of uranium mineralization factors were extracted, the metallogenic conditions were evaluated, and prospective areas were predicted.

  18. Extraction and purification of high added value compounds from by-products of the winemaking chain using alternative/nonconventional processes/technologies.

    PubMed

    Yammine, Sami; Brianceau, Sylène; Manteau, Sébastien; Turk, Mohammad; Ghidossi, Rémy; Vorobiev, Eugène; Mietton-Peuchot, Martine

    2018-05-24

    Grape byproducts are today considered a cheap source of valuable compounds, since existing technologies allow the recovery of target compounds and their recycling. The goal of the current article is to explore the different recovery stages used by both conventional and alternative techniques and processes. The alternative pre-treatment techniques reviewed are ultrasounds, pulsed electric fields and high voltage discharges. In addition, nonconventional solvent extraction under high pressure, specifically supercritical fluid extraction and subcritical water extraction, is discussed. Finally, alternative purification technologies, for example membrane processing, are also examined. The intent is to describe the mechanisms involved in these alternative technologies and to summarize the work done on improving the extraction of phenolic compounds from winery by-products, with a focus on the developmental stage of each technology, highlighting the research needs and the challenges to be overcome for industrial implementation of these unit operations in the overall extraction process. A critical comparison of conventional and alternative techniques is given for the pre-treatment of the raw material, the diffusion of polyphenols and the purification of these high added value compounds. This review intends to give the reader some key answers (costs, advantages, drawbacks) to help in the choice of alternative technologies for extraction purposes.

  19. The extraction characteristic of Au-Ag from Au concentrate by thiourea solution

    NASA Astrophysics Data System (ADS)

    Kim, Bongju; Cho, Kanghee; On, Hyunsung; Choi, Nagchoul; Park, Cheonyoung

    2013-04-01

    Although the cyanidation process has been used commercially for the past 100 years, there are ores that are not amenable to treatment by cyanide. Interest in alternative lixiviants, such as thiourea, halogens, thiosulfate and malononitrile, has been revived as a result of a major increase in the gold price, which, combined with environmental concern, has stimulated new developments in extraction technology. The Au extraction process using a thiourea solvent has many advantages over cyanidation, including higher leaching rates, faster extraction times and lower toxicity. The purpose of this study was to investigate the extraction characteristics of Au-Ag from two differently prepared Au concentrates (sulfuric acid washing and roasting) under various experimental conditions (thiourea concentration, solvent pH, temperature) using a thiourea solvent. The extraction experiments showed that Au-Ag extraction is a fast process, reaching equilibrium (maximum extraction rate) within 30 min. The Au-Ag extraction rate was higher for the roasted concentrate than for the sulfuric-acid-washed one. The highest Au-Ag extraction rates from the roasted concentrate (Au: 70.87%, Ag: 98.12%) were found as the thiourea concentration increased, the pH decreased and the extraction temperature increased. This study provides basic knowledge on extraction methods in which thiourea is a possible ecological and economic alternative for Au-Ag utilization, including hydrometallurgy.

  20. Integrated use of spatial and semantic relationships for extracting road networks from floating car data

    NASA Astrophysics Data System (ADS)

    Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue

    2012-10-01

    The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle, and their cost remains high. With GPS and wireless communication technologies maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both the spatial and the semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results reveal that it can accurately derive the geometric information of roads under various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
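
    The sketch below illustrates the point-classification idea under stated assumptions: each GPS fix is tested against a candidate road segment spatially (perpendicular offset) and semantically (heading agreement), and the outcome decides whether the fix confirms the segment, suggests a modification, or seeds a new road. The thresholds and coordinates are hypothetical, not the platform's actual parameters.

        import math

        def point_to_segment(p, a, b):
            """Distance from point p to segment a-b (planar coordinates)."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            u = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
            u = max(0.0, min(1.0, u))
            return math.hypot(px - (ax + u * dx), py - (ay + u * dy))

        def classify_fix(p, heading, segment, seg_heading,
                         max_offset=25.0, max_angle=30.0):
            spatial_ok = point_to_segment(p, *segment) <= max_offset      # metres
            semantic_ok = abs((heading - seg_heading + 180) % 360 - 180) <= max_angle
            if spatial_ok and semantic_ok:
                return "confirms existing segment"      # refine geometry
            if spatial_ok:
                return "possible opposite/new lane"     # modify attributes
            return "candidate new road"                 # add to network

        seg = ((0.0, 0.0), (100.0, 0.0))                # west-east segment
        print(classify_fix((40.0, 8.0), 92.0, seg, 90.0))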

  1. Multivariate EMD and full spectrum based condition monitoring for rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaomin; Patel, Tejas H.; Zuo, Ming J.

    2012-02-01

    Early assessment of machinery health condition is of paramount importance today. A sensor network with sensors in multiple directions and locations is usually employed for monitoring the condition of rotating machinery. Extracting health condition information from these sensors for effective fault detection and fault tracking is always challenging. Empirical mode decomposition (EMD) is an advanced signal processing technology that has been widely used for this purpose. Standard EMD has the limitation that it works only on a single real-valued signal. When dealing with data from multiple sensors and multiple health conditions, standard EMD faces two problems. First, because of the local and self-adaptive nature of standard EMD, the decompositions of signals from different sources may not match in either number or frequency content. Second, it may not be possible to express the joint information between different sensors. The present study proposes a method of extracting fault information by employing multivariate EMD and the full spectrum. Multivariate EMD can overcome the limitations of standard EMD when dealing with data from multiple sources. It is used to extract the intrinsic mode functions (IMFs) embedded in raw multivariate signals. A criterion based on mutual information is proposed for selecting a sensitive IMF. A full spectral feature is then extracted from the selected fault-sensitive IMF to capture the joint information between signals measured from two orthogonal directions. The proposed method is first explained using simple simulated data, and then is tested for condition monitoring in rotating machinery applications. The effectiveness of the proposed method is demonstrated by monitoring damage on the vane trailing edge of an impeller and rotor-stator rub in an experimental rotor rig.
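
    Two ingredients of the method lend themselves to a compact sketch: a histogram estimate of mutual information for ranking IMFs against the raw signal, and the full spectrum obtained as the FFT of the complex signal x + jy formed from two orthogonal probes (positive and negative frequencies then separate forward and backward components). The "IMFs" below are sinusoidal stand-ins, not a real multivariate EMD output.

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Histogram estimate of the mutual information between two series."""
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            mask = pxy > 0
            return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        raw = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
        imfs = [np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 7 * t)]  # stand-ins

        scores = [mutual_information(raw, imf) for imf in imfs]
        sensitive = imfs[int(np.argmax(scores))]        # "fault-sensitive" IMF

        # Full spectrum: fuse two orthogonal-direction signals into x + jy.
        y_probe = np.cos(2 * np.pi * 50 * t)            # 90-degree-shifted probe
        spectrum = np.fft.fftshift(np.fft.fft(sensitive + 1j * y_probe))
        freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
        print("dominant full-spectrum line:", freqs[np.argmax(np.abs(spectrum))], "Hz")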

  2. Phytosterols and their extraction from various plant matrices using supercritical carbon dioxide: a review.

    PubMed

    Uddin, Md Salim; Sarker, Md Zaidul Islam; Ferdosh, Sahena; Akanda, Md Jahurul Haque; Easmin, Mst Sabina; Bt Shamsudin, Siti Hadijah; Bin Yunus, Kamaruzzaman

    2015-05-01

    Phytosterols provide important health benefits: in particular, the lowering of cholesterol. From environmental and commercial points of view, the most appropriate technique has been searched for extracting phytosterols from plant matrices. As a green technology, supercritical fluid extraction (SFE) using carbon dioxide (CO2) is widely used to extract bioactive compounds from different plant matrices. Several studies have been performed to extract phytosterols using supercritical CO2 (SC-CO2) and this technology has clearly offered potential advantages over conventional extraction methods. However, the efficiency of SFE technology fully relies on the processing parameters, chemistry of interest compounds, nature of the plant matrices and expertise of handling. This review covers SFE technology with particular reference to phytosterol extraction using SC-CO2. Moreover, the chemistry of phytosterols, properties of supercritical fluids (SFs) and the applied experimental designs have been discussed for better understanding of phytosterol solubility in SC-CO2. © 2014 Society of Chemical Industry.

  3. [The automatic iris map overlap technology in computer-aided iridiagnosis].

    PubMed

    He, Jia-feng; Ye, Hu-nian; Ye, Miao-yuan

    2002-11-01

    In the paper, iridology and computer-aided iridiagnosis technologies are briefly introduced and the extraction method of the collarette contour is then investigated. The iris map can be overlapped on the original iris image based on collarette contour extraction. The research on collarette contour extraction and iris map overlap is of great importance to computer-aided iridiagnosis technologies.

  4. Toward Routine Automatic Pathway Discovery from On-line Scientific Text Abstracts.

    PubMed

    Ng; Wong

    1999-01-01

    We are entering a new era of research where the latest scientific discoveries are often first reported online and are readily accessible by scientists worldwide. This rapid electronic dissemination of research breakthroughs has greatly accelerated the current pace of genomics and proteomics research. The race to the discovery of a gene or a drug has now become increasingly dependent on how quickly a scientist can scan through voluminous amounts of information available online to construct the relevant picture (such as protein-protein interaction pathways) as it takes shape amongst the rapidly expanding pool of globally accessible biological data (e.g. GENBANK) and scientific literature (e.g. MEDLINE). We describe a prototype system for automatic pathway discovery from online text abstracts, combining technologies that (1) retrieve research abstracts from online sources, (2) extract relevant information from the free texts, and (3) present the extracted information graphically and intuitively. Our work demonstrates that this framework allows us to routinely scan online scientific literature for automatic discovery of knowledge, giving modern scientists the necessary competitive edge in managing the information explosion in this electronic age.
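
    A toy sketch of the extraction step (2) in such a pipeline: sentences are scanned for a crude "A interacts with / binds / activates B" pattern and the pairs are accumulated as pathway edges. Real systems use far richer natural language processing; the abstracts and the pattern here are illustrative only.

        import re
        from collections import defaultdict

        abstracts = [
            "We show that BRCA1 interacts with RAD51 in repair foci.",
            "p53 activates p21 transcription after DNA damage.",
            "Our data suggest that MDM2 binds p53 and promotes its degradation.",
        ]

        # Crude pattern: an entity, a relation verb, another entity.
        pattern = re.compile(
            r"\b([A-Za-z][A-Za-z0-9]+)\s+(interacts with|binds|activates)\s+([A-Za-z0-9]+)")

        pathway = defaultdict(list)
        for text in abstracts:
            for src, relation, dst in pattern.findall(text):
                pathway[src].append((relation, dst))

        for protein, edges in pathway.items():
            print(protein, "->", edges)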

  5. Aviation obstacle auto-extraction using remote sensing information

    NASA Astrophysics Data System (ADS)

    Zimmer, N.; Lugsch, W.; Ravenscroft, D.; Schiefele, J.

    2008-10-01

    An obstacle, in the aviation context, may be any natural or man-made, fixed or movable object, permanent or temporary. Currently, the most common way to detect relevant aviation obstacles from an aircraft or helicopter for navigation and collision avoidance is the use of merged infrared and synthetic obstacle information. Several algorithms have been established that utilize synthetic and infrared images to generate obstacle information. There might be situations, however, where the system is error-prone and may not be able to consistently determine the current environment. This situation can be avoided when the system knows the true position of the obstacle. The quality characteristics of the obstacle data strongly depend on the quality of the source data, such as maps and official publications. In some countries, such as newly industrializing and developing countries, obstacle information of adequate quality and quantity is not available. The aviation world has two specifications - RTCA DO-276A and ICAO ANNEX 15 Ch. 10 - which describe the requirements for aviation obstacles. It is essential to meet these requirements to be compliant with the specifications and to support systems based on them, e.g. 3D obstacle warning systems, where accurate coordinates based on WGS-84 are a necessity. Existing and forthcoming high-quality aerial and satellite remote sensing data make it feasible to consider automated aviation obstacle data origination. This paper describes the feasibility of auto-extracting aviation obstacles from remote sensing data, considering the limitations of image and extraction technologies. Quality parameters and the possible resolution of auto-extracted obstacle data are discussed and presented.

  6. GREEN AND SUSTAINABLE REMEDIATION BEST MANAGEMENT PRACTICES

    DTIC Science & Technology

    2016-09-07

    The technologies covered include air sparging, biosparging, soil vapor extraction (SVE), enhanced reductive dechlorination (ERD) and other in situ treatments, with best management practices documented to further promote their adoption. [Snippet truncated; abbreviations appearing in the report include RPM (Remedial Project Manager), SCR (selective catalytic reduction), SEE (steam enhanced extraction) and TCE (trichloroethene).]

  7. [Application of genetic algorithm in blending technology for extractions of Cortex Fraxini].

    PubMed

    Yang, Ming; Zhou, Yinmin; Chen, Jialei; Yu, Minying; Shi, Xiufeng; Gu, Xijun

    2009-10-01

    To explore the feasibility of a genetic algorithm (GA) for multiple-objective blending of extractions of Cortex Fraxini, a new multiple-objective optimization model of 10 batches of Cortex Fraxini extractions was built, in which the optimization objective combined fingerprint similarity with the root-mean-square error of multiple key constituents. The blending coefficients were obtained by the genetic algorithm, and the quality of the 10 batches of Cortex Fraxini extractions after blending was evaluated with fingerprint similarity and root-mean-square error as indexes. The quality of the 10 batches after blending was well improved: compared with the fingerprint of the control sample, the similarity increased while the degree of variation decreased, and the relative deviation of the key constituents was less than 10%. The results prove that the genetic algorithm works well for multiple-objective blending of Cortex Fraxini extractions. This method can serve as a reference for controlling the quality of Cortex Fraxini extractions, and genetic algorithms are advisable for blending technology for extractions of Chinese medicines in general.
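
    A hedged sketch of the optimisation idea: a small genetic algorithm searches blending proportions for ten batches so that the blended fingerprint stays close to a control (cosine similarity) while key constituents match their targets (root-mean-square error), the two being combined into one fitness value as described above. The fingerprints, GA settings and objective weighting are invented placeholders, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(0)
        N_BATCH, N_PEAK = 10, 20
        batches = rng.uniform(0.5, 1.5, (N_BATCH, N_PEAK))   # batch fingerprints
        control = batches.mean(axis=0)                       # control fingerprint
        key_idx = [0, 3, 7]                                  # key constituents
        key_target = control[key_idx]

        def fitness(w):
            blend = w @ batches
            sim = blend @ control / (np.linalg.norm(blend) * np.linalg.norm(control))
            rmse = np.sqrt(np.mean((blend[key_idx] - key_target) ** 2))
            return sim - rmse                                # combined objective

        def normalise(pop):                                  # proportions sum to 1
            return pop / pop.sum(axis=1, keepdims=True)

        pop = normalise(rng.uniform(0.01, 1.0, (40, N_BATCH)))
        for _ in range(200):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-20:]]          # truncation selection
            mates = parents[rng.integers(0, 20, 20)]
            children = 0.5 * (parents + mates)               # blend crossover
            children += rng.normal(0.0, 0.02, children.shape)  # mutation
            pop = normalise(np.clip(np.vstack([parents, children]), 1e-6, None))

        best = pop[np.argmax([fitness(w) for w in pop])]
        print("best blending proportions:", np.round(best, 3))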

  8. Cancer Imaging Phenomics Software Suite: Application to Brain and Breast Cancer | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The transition of oncologic imaging from its "industrial era" to its "information era" demands analytical methods that 1) extract information from this data that is clinically and biologically relevant; 2) integrate imaging, clinical, and genomic data via rigorous statistical and computational methodologies in order to derive models valuable for understanding cancer mechanisms, diagnosis, prognostic assessment, response evaluation, and personalized treatment management; and 3) are available to the biomedical community for easy use and application, with the aim of understanding, diagnosing, and treating cancer.

  9. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because such media show no obvious characteristic patterns and multiple cameras cannot be used in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained by a plenoptic sensor with a single lens. This paper discusses the representation of depth information in phase space data and the corresponding calculation algorithms for different transparencies. A 3D imaging example of a waterfall is given at the end.

  10. Photosynthetic Performance of the Imidazolinone Resistant Sunflower Exposed to Single and Combined Treatment by the Herbicide Imazamox and an Amino Acid Extract

    PubMed Central

    Balabanova, Dobrinka A.; Paunov, Momchil; Goltsev, Vasillij; Cuypers, Ann; Vangronsveld, Jaco; Vassilev, Andon

    2016-01-01

    The herbicide imazamox may provoke temporary yellowing and growth retardation in IMI-R sunflower hybrids, more often under stressful environmental conditions. Although photosynthetic processes are not the primary sites of imazamox action, they might be influenced; therefore, more information about the photosynthetic performance of herbicide-treated plants could be valuable for a further improvement of the Clearfield technology. Plant biostimulants have been shown to ameliorate damage caused by different stress factors in plants, but very limited information exists about their effects on herbicide-stressed plants. In order to characterize the photosynthetic performance of imazamox-treated IMI-R sunflower plants, we carried out experiments including both single and combined treatments with imazamox and a plant biostimulant containing an amino acid extract. We found that imazamox applied at a rate of 132 μg per plant (equivalent to 40 g active ingredient ha−1) induced negative effects on both light-dependent photosynthetic redox reactions and leaf gas exchange processes, which were much less pronounced after the combined application of imazamox and the amino acid extract. PMID:27826304

  11. New Trends of Emerging Technologies in Digital Pathology.

    PubMed

    Bueno, Gloria; Fernández-Carrobles, M Milagro; Deniz, Oscar; García-Rojo, Marcial

    2016-01-01

    The future paradigm of pathology will be digital. Instead of conventional microscopy, a pathologist will perform a diagnosis through interacting with images on computer screens and performing quantitative analysis. The fourth generation of virtual slide telepathology systems, so-called virtual microscopy and whole-slide imaging (WSI), has allowed for the storage and fast dissemination of image data in pathology and other biomedical areas. These novel digital imaging modalities encompass high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides. Moreover, automated image analysis with WSI can extract specific diagnostic features of diseases and quantify individual components of these features to support diagnoses and provide informative clinical measures of disease. Therefore, the challenge is to apply information technology and image analysis methods to exploit the new and emerging digital pathology technologies effectively in order to process and model all the data and information contained in WSI. The final objective is to support the complex workflow from specimen receipt to anatomic pathology report transmission, that is, to improve diagnosis both in terms of pathologists' efficiency and with new information. This article reviews the main concerns about and novel methods of digital pathology discussed at the latest workshop in the field carried out within the European project AIDPATH (Academia and Industry Collaboration for Digital Pathology). © 2016 S. Karger AG, Basel.

  12. FIELD EVALUATION OF DNAPL EXTRACTION TECHNOLOGIES: PROJECT OVERVIEW

    EPA Science Inventory

    Five DNAPL remediation technologies were evaluated at the Dover National Test Site, Dover AFB, Delaware. The technologies were cosolvent solubilization, cosolvent mobilization, surfactant solubilization, complex sugar flushing and air sparging/soil vapor extraction. The effectiv...

  13. Population Estimation in Singapore Based on Remote Sensing and Open Data

    NASA Astrophysics Data System (ADS)

    Guo, H.; Cao, K.; Wang, P.

    2017-09-01

    Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With a growing emphasis on real-time and detailed population information, data users have switched from traditional census data to more technology-based sources such as LiDAR point clouds and high-resolution satellite imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite images from Google Earth for the extraction of building footprints and population estimation. At the same time, volunteered geographic information (VGI) is utilized as ancillary data for building footprint extraction; open data such as OpenStreetMap (OSM) can be employed to enhance the extraction process. In view of the challenges in building shadow extraction, this paper discusses several methods, including buffer, mask and shape index, to improve accuracy. It also illustrates population estimation methods based on building height and number-of-floors estimates. The results show that the accuracy of the housing unit method for population estimation can reach 92.5%, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in terms of policymaking and the urban planning of Singapore.
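
    The housing unit method reduces to simple arithmetic once building footprints and floor counts have been extracted: population is roughly the sum over buildings of units per building times the occupancy rate times the average household size. The sketch below uses invented footprints and rates purely to show the computation.

        # Sketch: housing-unit population estimate from extracted buildings.
        buildings = [
            # (footprint_area_m2, estimated_floors)
            (620.0, 12), (480.0, 16), (300.0, 4),
        ]
        UNIT_AREA = 90.0        # assumed floor area per housing unit, m2
        OCCUPANCY = 0.95        # assumed share of occupied units
        HOUSEHOLD_SIZE = 3.3    # assumed persons per household

        population = 0.0
        for area, floors in buildings:
            units = (area / UNIT_AREA) * floors      # units in this building
            population += units * OCCUPANCY * HOUSEHOLD_SIZE

        print(f"estimated population: {population:.0f}")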

  14. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.

  15. Client-side Skype forensics: an overview

    NASA Astrophysics Data System (ADS)

    Meißner, Tina; Kröger, Knut; Creutzburg, Reiner

    2013-03-01

    IT security and computer forensics are important components of information technology. In the present study, a client-side Skype forensic analysis is performed. It is designed to explain which kinds of user data are stored on a computer and which tools allow the extraction of those data for a forensic investigation. Both methods are described: a manual analysis and an analysis with (mainly) open source tools.
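
    For the manual-analysis route, legacy Skype clients kept their client-side artefacts in an SQLite database (main.db). The sketch below issues a read-only query for chat records; the table and column names follow commonly documented legacy schemas and should be verified against the specific client version under investigation.

        import sqlite3
        from datetime import datetime, timezone

        # Open the evidence copy read-only; never write to original evidence.
        db = sqlite3.connect("file:main.db?mode=ro", uri=True)
        try:
            rows = db.execute(
                "SELECT author, timestamp, body_xml FROM Messages ORDER BY timestamp"
            )
            for author, ts, body in rows:
                when = datetime.fromtimestamp(ts, tz=timezone.utc)  # Unix epoch, UTC
                print(f"{when:%Y-%m-%d %H:%M:%S} {author}: {body}")
        finally:
            db.close()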

  16. Evaluation of human dynamic balance in Grassmann manifold

    NASA Astrophysics Data System (ADS)

    Michalczuk, Agnieszka; Wereszczyński, Kamil; Mucha, Romualda; Świtoński, Adam; Josiński, Henryk; Wojciechowski, Konrad

    2017-07-01

    The authors present an application of Grassmann manifold to the evaluation of human dynamic balance based on the time series representing movements of hip, knee and ankle joints in the sagittal, frontal and transverse planes. Time series were extracted from gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland using the Vicon system.

  17. Japan Report, Science and Technology.

    DTIC Science & Technology

    1987-04-03

    ...are supplied by JPRS. Processing indicators such as [Text] or [Excerpt] in the first line of each item, or following the last line of a brief, indicate how the original information was processed. Where no processing indicator is given, the information was summarized or extracted. ...000th of a second radiated by permanent stars during their evolutionary process, the institute said. The satellite's astronomical survey will...

  18. Outcome-Focused Market Intelligence: Extracting Better Value and Effectiveness from Strategic Sourcing

    DTIC Science & Technology

    2013-04-01

    disseminating information are not systematically taught or developed in the government’s acquisition workforce. However, a study of 30 large firms ...to keep themselves abreast of changes in the marketplace, such as technological advances, process improvements, and available sources of supply. The...and performance measurement (Monczka & Petersen, 2008). Firms that develop supply management strategic plans typically set three-to-five year

  19. Tracking transcriptional activities with high-content epifluorescent imaging

    NASA Astrophysics Data System (ADS)

    Hua, Jianping; Sima, Chao; Cypert, Milana; Gooden, Gerald C.; Shack, Sonsoles; Alla, Lalitamba; Smith, Edward A.; Trent, Jeffrey M.; Dougherty, Edward R.; Bittner, Michael L.

    2012-04-01

    High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.
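
    A minimal sketch of the two-part procedure on a synthetic frame: threshold-based segmentation identifies "cells", and the per-cell mean reporter intensity serves as the quantified transcriptional activity readout. The real pipeline is considerably more elaborate; this only shows the shape of the computation, using scikit-image.

        import numpy as np
        from skimage.filters import threshold_otsu, gaussian
        from skimage.measure import label, regionprops

        rng = np.random.default_rng(1)
        frame = rng.normal(10, 2, (128, 128))            # background
        frame[30:50, 40:60] += 40                        # a bright "cell"
        frame[80:100, 90:105] += 25                      # a dimmer "cell"
        frame = gaussian(frame, sigma=1)                 # smooth sensor noise

        mask = frame > threshold_otsu(frame)             # segment cells
        labels = label(mask)
        for cell in regionprops(labels, intensity_image=frame):
            print(f"cell {cell.label}: area={cell.area}, "
                  f"mean intensity={cell.mean_intensity:.1f}")

    Repeating this per frame over a time-lapse stack yields the per-cell transcriptional activity time courses described above.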

  20. Text Content Pushing Technology Research Based on Location and Topic

    NASA Astrophysics Data System (ADS)

    Wei, Dongqi; Wei, Jianxin; Wumuti, Naheman; Jiang, Baode

    2016-11-01

    In the field, geological workers usually want to obtain the geological background information of their working area quickly and accurately. This information exists within massive geological data, in which text data described in natural language account for a large proportion. This paper studied a method for extracting location information from mass text data, proposed a geographic location and geological content correlation algorithm based on Spark and MapReduce2, classified the content using KNN, and built a content pushing system based on location and topic. The system runs in the geological survey cloud, and good results were obtained in tests using real geological data.
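
    A minimal sketch of the final classification step, with invented snippets and topic labels: TF-IDF vectors plus a k-nearest-neighbour classifier route geological text to topics, which is the KNN role described above (the Spark/MapReduce2 correlation stage is omitted).

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        texts = [
            "granite intrusion with pegmatite veins observed",
            "quaternary alluvial deposits along the river valley",
            "fault zone with brecciated quartz and alteration",
            "sandstone and mudstone interbeds of lacustrine origin",
        ]
        topics = ["magmatic rock", "sediment", "structure", "sediment"]

        # Vectorize the snippets and classify new text by its nearest neighbours.
        model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
        model.fit(texts, topics)
        print(model.predict(["normal fault cutting granite near the survey point"]))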

  1. High-Efficiency Nitride-Base Photonic Crystal Light Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Speck; Evelyn Hu; Claude Weisbuch

    2010-01-31

    The research activities performed in the framework of this project represent a major breakthrough in the demonstration of photonic crystals (PhC) as a competitive technology for LEDs with high light extraction efficiency. The goals of the project were to explore viable approaches to the manufacturability of PhC LEDs through proven standard industrial processes, establish the limits of light extraction by various concepts of PhC LEDs, and determine the possible advantages of PhC LEDs over current and forthcoming LED extraction concepts. We have developed three very different geometries for PhC light extraction in LEDs. In addition, we have demonstrated reliable methods for their in-depth analysis, allowing the extraction of important parameters such as light extraction efficiency, modal extraction length, directionality, and internal and external quantum efficiency. The information gained allows a better understanding of the physical processes and the effect of the design parameters on light directionality and extraction efficiency. As a result, we produced LEDs with controllable emission directionality and a state-of-the-art extraction efficiency of up to 94%. Those devices are based on embedded air-gap PhC, a novel technology concept developed in the framework of this project. They rely on a simple and planar fabrication process that is very interesting for industrial implementation due to its robustness and scalability. In fact, besides the additional patterning and regrowth steps, the process is identical to that for standard industrially used p-side-up LEDs. The final devices exhibit the same good electrical characteristics and high process yield as a series of test standard LEDs obtained in comparable conditions. Finally, the technology of embedded air-gap patterns (PhC) has significant potential in other related fields, such as increasing the optical mode interaction with the active region in semiconductor lasers, increasing the coupling of incident light into the active region of solar cells, and increasing the efficiency of phosphor light conversion in white-light LEDs. In addition to the technology of embedded PhC LEDs, we demonstrate a technique for improving light extraction and emission directionality for existing flip-chip microcavity (thin) LEDs by introducing a PhC grating into the top n-contact. Although the performance of these devices in terms of increased extraction efficiency is not significantly superior to that obtained by other techniques such as surface roughening, the use of PhC offers some significant advantages, such as improved and controllable emission directionality and a process that is directly applicable to any material system. The PhC microcavity LEDs also have potential for industrial implementation, as the fabrication process has only minor differences from that already used for flip-chip thin LEDs. Finally, we have demonstrated that achieving good electrical properties and high fabrication yield for these devices is straightforward.

  2. FIELD IMPLEMENTATION PLAN FOR A WILLISTON BASIN BRINE EXTRACTION AND STORAGE TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamling, John; Klapperich, Ryan; Stepan, Daniel

    2016-03-31

    The Energy & Environmental Research Center (EERC) successfully completed all technical work of Phase I, including development of a field implementation plan (FIP) for a brine extraction and storage test (BEST) in the North Dakota portion of the Williston Basin. This implementation plan was commissioned by the U.S. Department of Energy (DOE) National Energy Technology Laboratory (NETL) as a proxy for managing formation pressure plumes and measuring/monitoring the movement of differential pressure and CO2 plumes in the subsurface for future saline CO2 storage projects. BEST comprises the demonstration and validation of active reservoir management (ARM) strategies and extracted brine treatment technologies. Two prospective commercial brine injection sites were evaluated for BEST to satisfy DOE's goals. Ultimately, an active saltwater disposal (SWD) site, Johnsons Corner, was selected because it possesses an ideal combination of key factors making it uniquely suited to host BEST. This site is located in western North Dakota and operated by Nuverra Environmental Solutions (Nuverra), a national leader in brine handling, treatment, and injection. An integrated management approach was used to incorporate local and regional geologic characterization activities with geologic and simulation models, inform a monitoring, verification, and accounting (MVA) plan, and conduct a risk assessment. This approach was used to design a FIP for an ARM schema and an extracted brine treatment technology test bed facility. The FIP leverages an existing pressure plume generated by two commercial SWD wells. These wells, in conjunction with a new brine extraction well, will be used to conduct the ARM schema. Results of these tests will be quantified based on their impact on the performance of the existing SWD wells and the surrounding reservoir system. Extracted brine will be injected into an underlying deep saline formation through a new injection well. The locations of the proposed extraction and injection wells were selected during the Phase I efforts. These wells will be permitted as North Dakota Administrative Code Underground Injection Control Class II wells and will yield additional characterization data which will further refine the FIP in Phase II. An array of surface and downhole monitoring techniques will validate ARM performance against predictive simulation results. Infrastructure will be constructed to manage extracted fluids at the surface and provide brine to a treatment test bed facility. Treatment of extracted brine can provide a means of reducing extracted brine disposal volumes, an alternate source of water, and/or salable products for beneficial use. A test bed facility will be constructed to provide a means of demonstrating these technologies on a wide range of brine concentrations. Screening criteria based on a techno-economic and life cycle assessment were developed to select high-salinity brine treatment technologies for extended-duration treatment (30–60 days) in Phase II. A detailed cost assessment determined total implementation costs for BEST of $19,901,065 (DOE share $15,680,505). These costs are inclusive of all necessary equipment, infrastructure construction, operations and project closeout costs required to implement BEST. An ideal combination of key factors makes the Johnsons Corner site uniquely suited to be the BEST demonstration.

  3. Photonic quantum technologies (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    O'Brien, Jeremy L.

    2015-09-01

    The impact of quantum technology will be profound and far-reaching: secure communication networks for consumers, corporations and government; precision sensors for biomedical technology and environmental monitoring; quantum simulators for the design of new materials, pharmaceuticals and clean energy devices; and ultra-powerful quantum computers for addressing otherwise impossibly large datasets for machine learning and artificial intelligence applications. However, engineering quantum systems and controlling them is an immense technological challenge: they are inherently fragile; and information extracted from a quantum system necessarily disturbs the system itself. Of the various approaches to quantum technologies, photons are particularly appealing for their low-noise properties and ease of manipulation at the single qubit level. We have developed an integrated waveguide approach to photonic quantum circuits for high performance, miniaturization and scalability. We will described our latest progress in generating, manipulating and interacting single photons in waveguide circuits on silicon chips.

  4. Information Science Panel joint meeting with Imaging Science Panel

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Specific activity in information extraction science (taken to include data handling) is needed to: help identify the bounds of practical missions; identify potential data handling and analysis scenarios; identify the required enabling technology; and identify the requirements for a design data base to be used by the disciplines in determining potential parameters for future missions. It was defined that specific analysis topics were a function of the discipline involved, and therefore no attempt was made to define any specific analysis developments required. Rather, it was recognized that a number of generic data handling requirements exist whose solutions cannot be typically supported by the disciplines. The areas of concern were therefore defined as: data handling aspects of system design considerations; enabling technology for data handling, with specific attention to rectification and registration; and enabling technology for analysis. Within each of these areas, the following topics were addressed: state of the art (current status and contributing factors); critical issues; and recommendations for research and/or development.

  5. A case study of data integration for aquatic resources using semantic web technologies

    USGS Publications Warehouse

    Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea C.; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan

    2015-01-01

    Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.
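
    A hedged sketch of the linked-data step: observations from two (here hard-coded) sources are expressed as RDF triples in one graph with explicitly defined predicates and then queried together with SPARQL, using the rdflib library. The namespace and predicate names are hypothetical, not the prototype's actual vocabulary.

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/aquatic#")
        g = Graph()

        site = URIRef("http://example.org/aquatic#site-big-creek-01")
        g.add((site, EX.fishCountBrookTrout, Literal(42)))    # biological survey
        g.add((site, EX.streamTemperatureC, Literal(14.2)))   # water-quality system

        # One query now spans observations that originated in separate systems.
        query = """
            PREFIX ex: <http://example.org/aquatic#>
            SELECT ?n ?temp WHERE {
                ?site ex:fishCountBrookTrout ?n ;
                      ex:streamTemperatureC ?temp .
            }
        """
        for row in g.query(query):
            print(f"brook trout: {row.n}, temperature: {row.temp} C")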

  6. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems.

    PubMed

    DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Patel, Neal R; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.

  7. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  8. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which creates a need for bioimaging platforms that perform feature extraction and analysis of the text and content in biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, the methodologies used, the results produced, the accuracies achieved and their limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  9. Structuring and extracting knowledge for the support of hypothesis generation in molecular biology

    PubMed Central

    Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W

    2009-01-01

    Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement for automated support is exemplified by the difficulty of considering all relevant facts contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
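
    A minimal sketch of the paper's output form, a putative relation linked to its evidence, using rdflib; the namespace and property names are assumptions, not the paper's actual OWL proto-ontologies:

        from rdflib import Graph, Namespace, Literal, URIRef

        # Hypothetical namespace; the paper's real models are OWL proto-ontologies.
        EX = Namespace("http://example.org/biomodel#")
        g = Graph()
        g.bind("ex", EX)

        # A putative relation ("NF-Kappa B regulates p21") linked to its evidence.
        relation = EX.relation1
        g.add((relation, EX.subject, EX.NFKappaB))
        g.add((relation, EX.predicate, EX.regulates))
        g.add((relation, EX.object, EX.p21))
        g.add((relation, EX.evidence, URIRef("http://example.org/doc/123")))
        g.add((relation, EX.confidence, Literal(0.72)))

        print(g.serialize(format="turtle"))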

  10. Supercritical fluid extraction. Principles and practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHugh, M.A.; Krukonis, V.J.

    This book is a presentation of the fundamentals and application of super-critical fluid solvents (SCF). The authors cover virtually every facet of SCF technology: the history of SCF extraction, its underlying thermodynamic principles, process principles, industrial applications, and analysis of SCF research and development efforts. The thermodynamic principles governing SCF extraction are covered in depth. The often complex three-dimensional pressure-temperature composition (PTx) phase diagrams for SCF-solute mixtures are constructed in a coherent step-by-step manner using the more familiar two-dimensional Px diagrams. The experimental techniques used to obtain high pressure phase behavior information are described in detail and the advantages and disadvantages of each technique are explained. Finally, the equations used to model SCF-solute mixtures are developed, and modeling results are presented to highlight the correlational strengths of a cubic equation of state.
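
    As a worked illustration of the kind of cubic equation of state the book uses for correlation (here the widely used Peng-Robinson form for pure CO2, with approximate critical constants; the book itself surveys several such equations):

        import numpy as np

        # Peng-Robinson EOS for pure CO2 (approximate critical constants).
        R, Tc, Pc, omega = 8.314, 304.13, 7.377e6, 0.224

        def pr_pressure(T, v):
            """Pressure (Pa) from molar volume v (m^3/mol) at temperature T (K)."""
            a = 0.45724 * R**2 * Tc**2 / Pc
            b = 0.07780 * R * Tc / Pc
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
            return R * T / (v - b) - a * alpha / (v**2 + 2 * b * v - b**2)

        # A supercritical state point: 320 K, molar volume 2e-4 m^3/mol.
        print(f"P = {pr_pressure(320.0, 2e-4) / 1e6:.2f} MPa")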

  11. Publicly Open Virtualized Gaming Environment For Simulation of All Aspects Related to '100 Year Starship Study'

    NASA Astrophysics Data System (ADS)

    Obousy, R. K.

    2012-09-01

    Sending a mission to distant stars will require our civilization to develop new technologies and change the way we live. Because the complexity of the task is enormous [1], the idea is to involve people from around the globe through the "citizen scientist" paradigm. The suggestion is a "Gaming Virtual Reality Network" (GVRN) to simulate the sociological and technological aspects of the project. Work is currently being done [2] to develop a technology for constructing computer games within GVRN. This technology will provide a quick and easy way for individuals to develop game scenarios related to various aspects of the "100YSS" project, so that people become involved in solving certain tasks simply by playing games. Players will be able to modify conditions, add new technologies, geological conditions, and social movements, and assemble new strategies just by writing scenarios. The system will interface with textual and video information, extract scenarios written across millions of texts, and use them to assemble new games, allowing players to simulate an enormous range of possibilities. Because many information technologies will be involved, the system must be built so that any module can easily be replaced; GVRN should therefore be modular and open to the community.

  12. [Optimization of dissolution process for superfine grinding technology on total saponins of Panax ginseng fibrous root by response surface methodology].

    PubMed

    Zhao, Ya; Lai, Xiao-Pin; Yao, Hai-Yan; Zhao, Ran; Wu, Yi-Na; Li, Geng

    2014-03-01

    To investigate the superfine comminution extraction technology for total saponins from Panax ginseng fibrous root and to determine the optimal extraction conditions. Optimization was based on single-factor experiments studying the effects of crushing degree, extraction time, alcohol concentration, and extraction temperature on the extraction rate. Response surface methodology was then used to investigate the three main factors: superfine comminution time, extraction time, and alcohol concentration. The relationship between the content of total saponins in Panax ginseng fibrous root and the three factors fitted a second-degree polynomial model. The optimal extraction conditions were a superfine comminution time of 9 min, 70% alcohol, an extraction temperature of 50 °C, and an extraction time of 70 min. Under these conditions, the average yield of total saponins from Panax ginseng fibrous root was 94.81%, consistent with the predicted value. The optimized process is rapid, efficient, simple, and stable.
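
    The second-degree polynomial fit at the heart of the response surface step can be sketched as follows; the design points and yields are synthetic, generated around the paper's reported optimum merely to show the mechanics:

        import numpy as np

        # Synthetic full-factorial design over the three factors the paper varies:
        # comminution time (min), extraction time (min), alcohol concentration (%).
        X = np.array([[t, e, a] for t in (6, 9, 12)
                                for e in (60, 70, 80)
                                for a in (60, 70, 80)], dtype=float)

        # Synthetic yields from a quadratic with its optimum at the reported
        # conditions (9 min, 70 min, 70%), plus noise -- illustration only.
        rng = np.random.default_rng(0)
        y = (94.81 - 0.4 * (X[:, 0] - 9)**2 - 0.01 * (X[:, 1] - 70)**2
             - 0.02 * (X[:, 2] - 70)**2 + rng.normal(0, 0.2, len(X)))

        # Second-degree polynomial model: intercept, linear, square, interactions.
        def quad_features(X):
            cols = [np.ones(len(X))]
            cols += [X[:, i] for i in range(3)]
            cols += [X[:, i] ** 2 for i in range(3)]
            cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
        print("fitted coefficients:", np.round(coef, 3))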

  13. A novel technology coupling extraction and foam fractionation for separating the total saponins from Achyranthes bidentata.

    PubMed

    Ding, Linlin; Wang, Yanji; Wu, Zhaoliang; Liu, Wei; Li, Rui; Wang, Yanyan

    2016-10-02

    A novel technology coupling extraction and foam fractionation was developed for separating the total saponins from Achyranthes bidentata. In the developed technology, powdered A. bidentata was loaded into a nylon filter-cloth pocket with a pore diameter of 180 µm, and the pocket was fixed in the bulk liquid phase to continuously release saponins. Under the optimal conditions, the concentration and the extraction rate of the total saponins in the foamate obtained with the developed technology were 73.5% and 416.2% higher, respectively, than those obtained with the traditional technology. The foamates obtained by the two technologies were analyzed by ultraperformance liquid chromatography-mass spectrometry to determine their ingredients, and the results showed that the developed technology separated saponins more effectively than the traditional one. This study is expected to provide a novel technology for cost-effectively separating plant-derived materials with surface activity.

  14. FIELD EVALUATION OF THE SOLVENT EXTRACTION RESIDUAL BIOTREATMENT (SERB) TECHNOLOGY

    EPA Science Inventory

    The Solvent Extraction Residual Biotreatment (SERB) technology was demonstrated at the former Sage's Dry Cleaner site in Jacksonville, FL where an area of PCE (tetrachloroethylene) contamination was identified. The SERB technology is a treatment train approach to complete site...

  15. Character Recognition Method by Time-Frequency Analyses Using Writing Pressure

    NASA Astrophysics Data System (ADS)

    Watanabe, Tatsuhito; Katsura, Seiichiro

    With the development of information and communication technology, personal verification is becoming more and more important. In the future ubiquitous society, terminals handling personal information will require personal verification technology. The signature is one personal verification method; however, because a signature contains only a limited number of characters, it is easily forged, and personal identification from handwriting shape alone is difficult. This paper proposes a “haptic pen” that extracts writing pressure and presents a character recognition method based on time-frequency analysis. Although the shapes of characters written by different writers may be similar, differences appear in the time-frequency domain. As a result, the proposed character recognition can be used for more exact personal identification. The experimental results showed the viability of the proposed method.
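
    A minimal sketch of the time-frequency analysis described, assuming a sampled pressure signal and SciPy's spectrogram; the signal here is synthetic, not haptic pen data:

        import numpy as np
        from scipy import signal

        # Synthetic writing-pressure trace: two pen strokes with different
        # tremor frequencies (purely illustrative).
        fs = 1000  # Hz
        t = np.arange(0, 2.0, 1 / fs)
        pressure = np.where(t < 1.0,
                            0.5 + 0.1 * np.sin(2 * np.pi * 8 * t),
                            0.7 + 0.05 * np.sin(2 * np.pi * 15 * t))

        # Short-time Fourier analysis: differences between writers that are
        # invisible in the character shape can appear in this domain.
        # (spectrogram detrends each segment by default, removing the offset.)
        f, tt, Sxx = signal.spectrogram(pressure, fs=fs, nperseg=256)
        dominant = f[Sxx.argmax(axis=0)]
        print("dominant frequency per window (Hz):", np.round(dominant, 1))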

  16. A fast and efficient method for device level layout analysis

    NASA Astrophysics Data System (ADS)

    Dong, YaoQi; Zou, Elaine; Pang, Jenny; Huang, Lucas; Yang, Legender; Zhang, Chunlei; Du, Chunshan; Hu, Xinyi; Wan, Qijian

    2017-03-01

    There is an increasing demand for device-level layout analysis, especially as technology advances. The analysis studies standard cells by extracting and classifying critical dimension parameters. Several parameters are extracted, such as channel width, channel length, gate-to-active distance, and active-to-adjacent-active distance; for the 14nm technology node, additional parameters are also of interest. On the one hand, these parameters are very important for studying standard cell structures and for SPICE model development, with the goal of improving standard cell manufacturing yield and optimizing circuit performance; on the other hand, full-chip device statistics can provide useful information for diagnosing yield issues. Device analysis is therefore essential for standard cell customization and enhancement and for manufacturability failure diagnosis. A traditional parasitic parameter extraction tool like Calibre xRC is powerful, but it is not sufficient for this device-level layout analysis application, because engineers want to review, classify, and filter the data more easily. This paper presents a fast and efficient method based on Calibre equation-based DRC (eqDRC). Equation-based DRC extends traditional DRC technology with a flexible, programmable modeling engine that allows the end user to define grouped multi-dimensional feature measurements using mathematical expressions. This paper demonstrates how such an engine and its programming language can be used to implement critical device parameter extraction. The device parameters are extracted and stored in a DFM database that can be processed by Calibre YieldServer, data processing software that lets engineers query, manipulate, modify, and create data in a DFM database. These parameters, known as properties in the eqDRC language, can be annotated back onto the layout for easy review. Calibre DesignRev can create an HTML-formatted report of the results displayed in Calibre RVE, which makes it easy to share results among groups. This method has been proven in use by the SMIC PDE and SPICE teams.
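
    Calibre's eqDRC language is proprietary, so as a tool-neutral illustration of the same idea (device parameters derived from layer geometry by simple equations), here is a sketch in Python with shapely on invented layout rectangles:

        from shapely.geometry import box

        # Invented layout rectangles in microns: a gate (poly over active)
        # and an active region. Not real foundry data.
        gate = box(10.0, 0.0, 10.014, 1.0)   # 14 nm drawn channel length
        active = box(9.0, 0.2, 11.0, 0.8)

        # The channel is the gate/active overlap; its bounds give L and W.
        channel = gate.intersection(active)
        minx, miny, maxx, maxy = channel.bounds
        length = maxx - minx                  # channel length (gate dimension)
        width = maxy - miny                   # channel width
        gate_to_active_edge = active.bounds[2] - maxx

        # Equation-style derived properties, ready to classify or annotate back.
        print(f"L={length*1000:.1f} nm  W={width*1000:.1f} nm  "
              f"gate-to-active={gate_to_active_edge*1000:.1f} nm")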

  17. Digital Imaging

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Digital Imaging is the computer-processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments locates objects within an image and measures them to extract quantitative information. Applications include CAT scanners, radiography, and microscopy in medicine, as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions; based on Jet Propulsion Laboratory technology, it is accurate and cost efficient.

  18. Terra-Kleen Response Group, Inc. Solvent Extraction Technology Rapid Commercialization Initiative Report

    EPA Science Inventory

    Terra-Kleen Response Group Inc. (Terra-Kleen), has commercialized a solvent extraction technology that uses a proprietary extraction solvent to transfer organic constituents from soil to a liquid phase in a batch process at ambient temperatures. The proprietary solvent has a rel...

  19. RESOURCES CONSERVATIONS COMPANY - B.E.S.T. SOLVENT EXTRACTION TECHNOLOGY - APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    This document is an evaluation of the performance of the Resources Conservation Company (RCC) Basic Extractive Sludge Treatment (B.E.S.T.®) solvent extraction technology and its applicability as a treatment technique for soils, sediments, and sludges contaminated with organics. B...

  20. [Application of micro-power system in the surgery of tooth extraction].

    PubMed

    Kaijin, Hu; Yongfeng, Li

    2015-02-01

    Tooth extraction is a common operation in oral surgery. Traditional-extraction instruments, such as bone chisel, elevator, and bone hammer, lead to not only severe trauma but also unnecessary complications, and patients easily become nervous and apprehensive if tooth extraction is performed using these violent instruments. In recent years, with the develop- ment of minimally invasive concept and technology, various micro-power instruments have been used for tooth extraction. This innovative technology can reduce the iatrogenic trauma and complications of tooth extraction. Additionally, this technology can greatly decrease the patient's physical and mental pressure. The new equipment compensates for the deficiency of traditional tooth extraction equipment and facilitates the gradual replacement of the latter. Diverse micro-power systems have distinct strengths and weaknesses, so some auxiliary instruments are still needed during tooth extraction. This paper focuses on the various micro-power systems for tooth extraction and tries to compare the advantages and disadvantages of these systems. Selection and usage of auxiliary equipment are also introduced. Thus, this paper provides reference for the proper application of the micro-power systems in tooth extraction.

  1. Methodology challenges in studying human gut microbiota - effects of collection, storage, DNA extraction and next generation sequencing technologies.

    PubMed

    Panek, Marina; Čipčić Paljetak, Hana; Barešić, Anja; Perić, Mihaela; Matijašić, Mario; Lojkić, Ivana; Vranešić Bender, Darija; Krznarić, Željko; Verbanac, Donatella

    2018-03-23

    The information on microbiota composition in the human gastrointestinal tract predominantly originates from the analysis of human faeces by application of next generation sequencing (NGS). However, the detected composition of the faecal bacterial community can be affected by various factors, including experimental design and procedures. This study evaluated the performance of different protocols for collection and storage of faecal samples (native and the OMNIgene.GUT system) and bacterial DNA extraction (MP Biomedicals, QIAGEN and MO BIO kits), using two NGS platforms for 16S rRNA gene sequencing (Illumina MiSeq and Ion Torrent PGM). OMNIgene.GUT proved to be a reliable and convenient system for collection and storage of faecal samples, although it favoured the Sutterella genus. MP provided superior DNA yield and quality, MO BIO depleted Gram-positive organisms, and using QIAGEN with OMNIgene.GUT resulted in the greatest variability compared with the other two kits. The MiSeq and IT platforms, in their supplier-recommended setups, provided comparable reproducibility of donor faecal microbiota; the differences included higher diversity observed with MiSeq and an increased capacity of MiSeq to detect Akkermansia muciniphila, [Odoribacteraceae], Erysipelotrichaceae and Ruminococcaceae (primarily Faecalibacterium prausnitzii). The results of our study could assist investigators using NGS technologies in making informed decisions on appropriate tools for their experimental pipelines.
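
    The platform diversity differences reported here are typically quantified with an index such as Shannon's over OTU counts; a minimal sketch with invented count vectors (not the study's data):

        import numpy as np

        def shannon(counts):
            """Shannon diversity index H' from raw OTU/taxon counts."""
            p = counts / counts.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        # Invented OTU count vectors for the same sample on two platforms.
        miseq = np.array([500, 300, 120, 60, 15, 5])
        iontorrent = np.array([700, 250, 40, 10])
        print(f"MiSeq H'={shannon(miseq):.2f}  IonTorrent H'={shannon(iontorrent):.2f}")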

  2. Fractal-like Distributions over the Rational Numbers in High-throughput Biological and Clinical Data

    NASA Astrophysics Data System (ADS)

    Trifonov, Vladimir; Pasqualucci, Laura; Dalla-Favera, Riccardo; Rabadan, Raul

    2011-12-01

    Recent developments in extracting and processing biological and clinical data are allowing quantitative approaches to studying living systems. High-throughput sequencing (HTS), expression profiles, proteomics, and electronic health records (EHR) are some examples of such technologies. Extracting meaningful information from these technologies requires careful analysis of the large volumes of data they produce. In this note, we present a set of fractal-like distributions that commonly appear in the analysis of such data. The first set of examples is drawn from an HTS experiment, where the distributions appear in the evaluation of the sequencing error rate and the identification of tumorigenic genomic alterations. The other examples are obtained from risk factor evaluation and from analysis of relative disease prevalence and co-morbidity as these appear in EHR. The distributions are also relevant to the identification of subclonal populations in tumors and to the study of quasi-species and intrahost diversity of viral populations.

  3. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    PubMed Central

    Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications; efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel way to efficiently find opinion words or phrases for each feature in customer reviews. Our focus is on obtaining the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that provides a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430

  4. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications; efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel way to efficiently find opinion words or phrases for each feature in customer reviews. Our focus is on obtaining the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that provides a significant informative resource to help users as well as merchants track the most suitable choice of product.
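
    A minimal sketch of the POS-pattern idea common to both records above, assuming NLTK's tokenizer and tagger; the window size and pattern are simplifications of the paper's method:

        import nltk

        # One-time downloads: tokenizer and POS tagger models.
        nltk.download("punkt", quiet=True)
        nltk.download("averaged_perceptron_tagger", quiet=True)

        review = "The battery life is amazing but the screen scratches easily."
        tagged = nltk.pos_tag(nltk.word_tokenize(review))

        # Toy pattern: a noun (candidate product feature) followed within a
        # short window by an adjective or adverb (candidate opinion word).
        pairs = []
        for i, (word, tag) in enumerate(tagged):
            if tag.startswith("NN"):
                for w2, t2 in tagged[i + 1:i + 4]:
                    if t2.startswith("JJ") or t2.startswith("RB"):
                        pairs.append((word, w2))
        print(pairs)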

  5. SANDIA NATIONAL LABORATORIES IN SITU ELECTROKINETIC EXTRACTION TECHNOLOGY; INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    As a part of the Superfund Innovative Technology Evaluation (SITE) Program, the U.S. Environmental Protection Agency evaluated the In-Situ Electrokinetic Extraction (ISEE) system at Sandia National Laboratories, Albuquerque, New Mexico.

    The SITE demonstration results show ...

  6. SURFACTANT-ENHANCED EXTRACTION TECHNOLOGY EVALUATION VERSUCHSEININCHTUNG ZUR GRUNDWASSER-UND ALTLASTENSANIERUNG (VEGAS) FACILITY, STUTTGART, GERMANY

    EPA Science Inventory

    This innovative technology evaluation report (ITER) summarized the results of an evaluation of a surfactant-enhanced extraction technology. This evaluation was conducted under a bilateral agreement between the United States (U.S.) Environmental Protection Agency (EPA) Superfund ...

  7. Geospatial Information from Satellite Imagery for Geovisualisation of Smart Cities in India

    NASA Astrophysics Data System (ADS)

    Mohan, M.

    2016-06-01

    In the recent past, there has been a strong emphasis on extracting geospatial information from satellite imagery. This information is processed with geospatial technologies that play an important role in developing smart cities, particularly in developing countries such as India. The study is based on the latest multi-date, multi-stage, multi-sensor, and multi-resolution geospatial satellite imagery. In addition, the latest geospatial technologies have been used for digital processing of remote sensing satellite imagery, together with current geographic information systems for 3-D geovisualisation, geospatial digital mapping, and geospatial analysis in the development of smart cities in India. Geospatial information obtained from RS and GPS systems has a complex structure involving space, time, and presentation. Such information supports 3-dimensional digital modelling of smart cities, which involves integrating spatial and non-spatial information for geographic visualisation of smart cities in the context of the real world. In other words, the geospatial database provides the platform for information visualisation, also known as geovisualisation. As a result, increasing research interest is being directed toward geospatial analysis, digital mapping, geovisualisation, and the monitoring and development of smart cities using geospatial technologies. The present research attempts to support the development of cities in real-world scenarios, particularly to help local, regional, and state-level planners and policy makers better understand and address urban issues using geospatial information from satellite imagery for the geovisualisation of smart cities in an emerging and developing country, India.

  8. Green extraction of natural products: concept and principles.

    PubMed

    Chemat, Farid; Vian, Maryline Abert; Cravotto, Giancarlo

    2012-01-01

    The design of green and sustainable extraction methods for natural products is currently a hot research topic in the multidisciplinary area of applied chemistry, biology, and technology. Here we introduce the six principles of green extraction, describing a multifaceted strategy for applying this concept at the research and industrial levels. The mainstays of this working protocol are new and innovative technologies, process intensification, agro-solvents, and energy saving. The concept, principles, and examples of green extraction discussed here offer an updated glimpse of the huge technological effort being made and the diverse applications being developed.

  9. Developing a disease outbreak event corpus.

    PubMed

    Conway, Mike; Kawazoe, Ai; Chanlekha, Hutchatai; Collier, Nigel

    2010-09-28

    In recent years, there has been a growth in work on the use of information extraction technologies for tracking disease outbreaks from online news texts, yet publicly available evaluation standards (and associated resources) for this new area of research have been noticeably lacking. This study seeks to create a "gold standard" data set against which to test how accurately disease outbreak information extraction systems can identify the semantics of disease outbreak events. Additionally, we hope that providing an annotation scheme (and associated corpus) to the community will encourage open evaluation in this new and growing application area. We developed an annotation scheme for identifying infectious disease outbreak events in news texts. An event, in the context of our annotation scheme, consists minimally of geographical (eg, country and province) and disease name information. However, the scheme also allows for the rich encoding of other domain-salient concepts (eg, international travel, species, and food contamination). The work resulted in a 200-document corpus of event-annotated disease outbreak reports that can be used to evaluate the accuracy of event detection algorithms (in this case, for the BioCaster biosurveillance online news information extraction system). In the 200 documents, 394 distinct events were identified (mean 1.97 events per document, range 0-25 events per document). We also provide a download script and graphical user interface (GUI)-based event browsing software to facilitate corpus exploration. In summary, we present an annotation scheme and corpus that can be used in the evaluation of disease outbreak event extraction algorithms. The annotation scheme and corpus were designed both with the particular evaluation requirements of the BioCaster system in mind and with regard to the wider need for further evaluation resources in this growing research area.
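
    A sketch of what one annotated event record could look like if rendered as JSON; the field names are illustrative and are not the scheme's actual tag set:

        import json

        event = {
            "doc_id": "outbreak-0042",
            "event": {
                "disease": "avian influenza",    # minimally required
                "country": "Viet Nam",           # minimally required geography
                "province": "Ha Tay",
                "species": "poultry",            # optional domain-salient concept
                "international_travel": False,
                "food_contamination": False,
            },
            "evidence_span": [112, 195],         # character offsets in source text
        }
        print(json.dumps(event, indent=2))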

  10. Semi-Automated Approach for Mapping Urban Trees from Integrated Aerial LiDAR Point Cloud and Digital Imagery Datasets

    NASA Astrophysics Data System (ADS)

    Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-09-01

    Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from this detailed, up-to-date data source. Timely and accurate information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques for extracting trees, ground surveying and interpretation of aerial photography, are associated with constraints such as labour-intensive field work and substantial financial requirements, which can be overcome by means of integrated LiDAR and digital image datasets. Compared with the predominant studies on tree extraction in purely forested areas, this study concentrates on urban areas, which have high structural complexity and a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over the city of Istanbul, Turkey. The paper shows that the integrated datasets are a suitable technology and a viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition, and extent of trees in the study area, useful to city planners and other decision makers for understanding how much canopy cover exists, identifying new planting, removal, or reforestation opportunities, and determining which locations have the greatest need or potential to maximize the return on investment. It can also help track trends or changes in urban trees over time and inform future management decisions.
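
    One plausible rule-based step in such a workflow, masking tree candidates by combining LiDAR canopy height with a vegetation-index threshold, can be sketched as follows (the rasters and thresholds are invented, not the paper's parameters):

        import numpy as np

        # Invented co-registered rasters: LiDAR canopy height (m) and NDVI
        # from the multispectral image, both on the same grid.
        rng = np.random.default_rng(1)
        height = rng.uniform(0, 25, size=(100, 100))
        ndvi = rng.uniform(-0.2, 0.9, size=(100, 100))

        # Candidate tree pixels: tall enough to exclude grass and shrubs, and
        # vegetated enough (high NDVI) to exclude buildings of similar height.
        tree_mask = (height > 3.0) & (ndvi > 0.4)
        canopy_cover = tree_mask.mean() * 100
        print(f"estimated canopy cover: {canopy_cover:.1f}% of study area")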

  11. Extraction and purification methods in downstream processing of plant-based recombinant proteins.

    PubMed

    Łojewska, Ewelina; Kowalczyk, Tomasz; Olejniczak, Szymon; Sakowicz, Tomasz

    2016-04-01

    During the last two decades, the production of recombinant proteins in plant systems has received increasing attention. Proteins are currently considered the most important biopharmaceuticals, but high costs and problems with scaling up the purification and isolation processes make the production of plant-based recombinant proteins a challenging task. This paper summarizes downstream processing in plant systems and provides a comprehensible overview of its key steps, such as extraction and purification. To highlight recent progress, mainly new developments in downstream technology have been chosen; furthermore, besides the most popular techniques, alternative methods are described.

  12. Information processing for aerospace structural health monitoring

    NASA Astrophysics Data System (ADS)

    Lichtenwalner, Peter F.; White, Edward V.; Baumann, Erwin W.

    1998-06-01

    Structural health monitoring (SHM) technology provides a means to significantly reduce the life-cycle cost of aerospace vehicles by eliminating unnecessary inspections, minimizing inspection complexity, and providing accurate diagnostics and prognostics to support vehicle life extension. In order to accomplish this, a comprehensive SHM system will need to acquire data from a wide variety of diverse sensors, including strain gages, accelerometers, acoustic emission sensors, crack growth gages, corrosion sensors, and piezoelectric transducers. Significant amounts of computer processing will then be required to convert this raw sensor data into meaningful information that indicates both the diagnostics of current structural integrity and the prognostics necessary for planning and managing the future health of the structure in a cost-effective manner. This paper provides a description of the key types of information processing technologies required in an effective SHM system. These include artificial intelligence techniques such as neural networks, expert systems, and fuzzy logic for nonlinear modeling, pattern recognition, and complex decision making; signal processing techniques such as Fourier and wavelet transforms for spectral analysis and feature extraction; statistical algorithms for optimal detection, estimation, prediction, and fusion; and a wide variety of other algorithms for data analysis and visualization. The intent of this paper is to provide an overview of the role of information processing for SHM, discuss various technologies that can contribute to accomplishing this role, and present some example applications of information processing for SHM implemented at the Boeing Company.
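
    A minimal sketch of the Fourier-based feature extraction role described, reducing a raw sensor record to a few spectral features; the accelerometer signal is synthetic:

        import numpy as np

        # Synthetic accelerometer record: a 40 Hz structural mode plus noise.
        fs = 1000
        t = np.arange(0, 4.0, 1 / fs)
        rng = np.random.default_rng(0)
        accel = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

        # Fourier-based feature extraction: condense the record into spectral
        # features a downstream SHM classifier could consume.
        spectrum = np.abs(np.fft.rfft(accel))**2
        freqs = np.fft.rfftfreq(accel.size, 1 / fs)
        peak_freq = freqs[spectrum.argmax()]
        band_energy = spectrum[(freqs > 30) & (freqs < 50)].sum() / spectrum.sum()
        print(f"dominant mode: {peak_freq:.1f} Hz, "
              f"30-50 Hz energy fraction: {band_energy:.2f}")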

  13. Initiatives promoting seamless care in medication management: an international review of the grey literature.

    PubMed

    Claeys, Coraline; Foulon, Veerle; de Winter, Sabrina; Spinewine, Anne

    2013-12-01

    Patients' transition between hospital and community is a high-risk period for the occurrence of medication-related problems. The objective was to review initiatives, implemented at national and regional levels in seven selected countries, aiming at improving continuity in medication management upon admission and hospital discharge. We performed a structured search of grey literature, mainly through relevant websites (scientific, professional and governmental organizations). Regional or national initiatives were selected. For each initiative data on the characteristics, impact, success factors and barriers were extracted. National experts were asked to validate the initiatives identified and the data extracted. Most initiatives have been implemented since the early 2000 and are still ongoing. The principal actions include: development and implementation of guidelines for healthcare professionals, national information campaigns, education of healthcare professionals and development of information technologies to share data across settings of care. Positive results have been partially reported in terms of intake into practice or process measures. Critical success factors identified included: leadership and commitment to convey national and local forces, tailoring to local settings, development of a regulatory framework and information technology support. Barriers identified included: lack of human and financial resources, questions relative to responsibility and accountability, lack of training and lack of agreement on privacy issues. Although not all initiatives are applicable as such to a particular healthcare setting, most of them convey very interesting data that should be used when drawing recommendations and implementing approaches to optimize continuity of care.

  14. Diethylstilbestrol in fish tissue determined through subcritical fluid extraction and with GC-MS

    NASA Astrophysics Data System (ADS)

    Qiao, Qinghui; Shi, Nianrong; Feng, Xiaomei; Lu, Jie; Han, Yuqian; Xue, Changhu

    2016-06-01

    As a key point in sex hormone analysis, sample pre-treatment technology has attracted scientists' attention all over the world, and sample preparation has trended toward faster and more efficient technologies. Taking economic and environmental concerns into account, subcritical fluid extraction stands out as a faster and more efficient sample pre-treatment technology. This extraction technology can overcome the shortcomings of supercritical fluids and achieve higher extraction efficiency at relatively low pressures and temperatures. In this work, a simple, sensitive, and efficient method was developed for the determination of diethylstilbestrol (DES) in fish tissue using subcritical 1,1,1,2-tetrafluoroethane (R134a) extraction in combination with gas chromatography-mass spectrometry (GC-MS). After extraction, freezing-lipid filtration was used to remove fatty co-extracts, and further purification was performed with C18 and NH2 solid phase extraction (SPE). Finally, the analyte was derivatized with heptafluorobutyric anhydride (HFBA) and analyzed by GC-MS. Response surface methodology (RSM) was employed to optimize the extraction conditions, with the optimum as follows: extraction pressure, 4.3 MPa; extraction temperature, 26°C; co-solvent volume, 4.7 mL. Under these conditions, at spiked levels of 1, 5, and 10 μg kg-1, the mean recovery of DES was more than 90% with relative standard deviations (RSDs) below 10%. The developed method was successfully used to analyze real samples.

  15. Testing the Technology Acceptance Model: HIV case managers' intention to use a continuity of care record with context-specific links.

    PubMed

    Schnall, Rebecca; Bakken, Suzanne

    2011-09-01

    To assess the applicability of the Technology Acceptance Model (TAM) constructs in explaining HIV case managers' behavioural intention to use a continuity of care record (CCR) with context-specific links designed to meet their information needs. Data were collected from 94 case managers who provide care to persons living with HIV (PLWH) using an online survey comprising three components: (1) demographic information: age, gender, ethnicity, race, Internet usage and computer experience; (2) a mock-up of the CCR with context-specific links; and (3) items related to TAM constructs. Data analysis included principal components factor analysis (PCA), assessment of internal consistency reliability, and univariate and multivariate analysis. PCA extracted three factors (Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use), explained variance = 84.9%, Cronbach's α = 0.69-0.91. In a linear regression model, Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use explained 43.6% (p < 0.001) of the variance in Behavioural Intention to use a CCR with context-specific links. Our study contributes to the evidence base regarding TAM in health care by expanding the type of professional surveyed, the study setting, and the Health Information Technology assessed.
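
    The two analysis steps reported, factor extraction by PCA followed by linear regression of behavioural intention on the factors, can be sketched with scikit-learn; the survey data below are synthetic, not the study's responses:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        # Synthetic survey: 94 respondents x 9 Likert items (illustrative only;
        # the real items and loadings are in the paper).
        rng = np.random.default_rng(42)
        items = rng.integers(1, 8, size=(94, 9)).astype(float)
        intention = items[:, :3].mean(axis=1) + rng.normal(0, 0.5, 94)

        # Step 1: extract factors (the paper found three: ease of use,
        # usefulness, barriers to use).
        factors = PCA(n_components=3).fit_transform(items)

        # Step 2: regress behavioural intention on the factor scores.
        model = LinearRegression().fit(factors, intention)
        print(f"R^2 = {model.score(factors, intention):.3f}")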

  16. Big Data Technologies

    PubMed Central

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may make it possible to depict a novel view of patients' care processes and of single patients' behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  17. Interactive Visualization of Large-Scale Hydrological Data using Emerging Technologies in Web Systems and Parallel Programming

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2013-12-01

    As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component in building comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed in light of these challenges.

  18. Ultrasound‐assisted emerging technologies for chemical processes

    PubMed Central

    Geertman, Rob; Wierschem, Matthias; Skiborowski, Mirko; Gielen, Bjorn; Jordens, Jeroen; John, Jinu J; Van Gerven, Tom

    2018-01-01

    The chemical industry has witnessed many important developments during past decades largely enabled by process intensification techniques. Some of them are already proven at commercial scale (e.g. reactive distillation) while others (e.g. ultrasound‐assisted extraction/crystallization/reaction) are on their way to becoming the next‐generation technologies. This article focuses on the advances of ultrasound (US)‐assisted technologies that could lead in the near future to significant improvements in commercial activities. The aim is to provide an authoritative discussion on US‐assisted technologies that are currently emerging from the research environment into the chemical industry, as well as give an overview of the current state‐of‐the‐art applications of US in chemical processing (e.g. enzymatic reactive distillation, crystallization of API). Sufficient information is included to allow the assessment of US‐assisted technologies and the challenges for implementation, as well as their potential for commercial applications. PMID:29780194

  19. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its day-and-night, all-weather operation, and feature extraction from high-resolution SAR images has become a research topic of broad concern. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural texture features is proposed. First, statistical texture features and structural features are extracted with the classical gray-level co-occurrence matrix method and the variogram function method, respectively, taking direction information into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused using these weights. Finally, the fused image is classified with the K-means method and the built-up areas are extracted after post-classification processing. The proposed method has been tested on domestic airborne P-band polarimetric SAR images, alongside two comparison experiments based on statistical texture alone and structural texture alone. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple experimental area the detection rate exceeds 90%, and in the relatively complex experimental area the detection rate is also higher than that of the other two methods. The results show that this method can effectively and accurately extract built-up areas in high-resolution airborne SAR imagery.
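
    A minimal sketch of the statistical-texture half of the pipeline, directional GLCM features per patch followed by weighted fusion and K-means, using scikit-image and scikit-learn; the patches are random stand-ins, and the uniform weights merely mark where the paper's Bhattacharyya-distance weights would go:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.cluster import KMeans

        # Invented 8-bit SAR-like patches; real input would be airborne P-band data.
        rng = np.random.default_rng(0)
        patches = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
                   for _ in range(64)]

        # Statistical texture per patch from the gray-level co-occurrence matrix,
        # keeping direction information (4 angular offsets), as in the paper.
        feats = []
        for p in patches:
            glcm = graycomatrix(p, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=256, symmetric=True, normed=True)
            feats.append(np.hstack([graycoprops(glcm, prop).ravel()
                                    for prop in ("contrast", "homogeneity", "energy")]))
        X = np.array(feats)

        # Placeholder weights; the paper derives these from the Bhattacharyya
        # distance between built-up and non-built-up training samples.
        weights = np.ones(X.shape[1])
        Xw = ((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)) * weights

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)
        print("patches per cluster:", np.bincount(labels))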

  20. The guided-mode resonance biosensor: principles, technology, and implementation

    NASA Astrophysics Data System (ADS)

    Magnusson, Robert; Lee, Kyu J.; Hemmati, Hafez; Ko, Yeong Hwan; Wenner, Brett R.; Allen, Jeffery W.; Allen, Monica S.; Gimlin, Susanne; Weidanz, Debra Wawro

    2018-02-01

    The guided-mode resonance (GMR) sensor operates with quasi-guided modes induced in periodic films. The resonance is enabled by 1D or 2D nanopatterns that are expeditiously fabricated. Optical sensors are needed in many fields including medical diagnostics, chemical analyses, and environmental monitoring. Inducing resonance in multiple modes enables extraction of complete bioreaction information including the biolayer thickness, biolayer refractive index, and any change in the refractive index in the background buffer solution. Thus, we refer to this version of the GMR sensor as the complete biosensor. We address the fundamentals, state of technological development, and implementation of this basic sensor modality.

  1. Single molecule optical measurements of orientation and rotations of biological macromolecules.

    PubMed

    Shroder, Deborah Y; Lippert, Lisa G; Goldman, Yale E

    2016-11-22

    Subdomains of macromolecules often undergo large orientation changes during their catalytic cycles that are essential for their activity. Tracking these rearrangements in real time opens a powerful window into the link between protein structure and functional output. Site-specific labeling of individual molecules with polarized optical probes and measurement of their spatial orientation can give insight into the crucial conformational changes, dynamics, and fluctuations of macromolecules. Here we describe the range of single molecule optical technologies that can extract orientation information from these probes, review the relevant types of probes and labeling techniques, and highlight the advantages and disadvantages of these technologies for addressing specific inquiries.

  2. Preparation of acellular scaffold for corneal tissue engineering by supercritical carbon dioxide extraction technology.

    PubMed

    Huang, Yi-Hsun; Tseng, Fan-Wei; Chang, Wen-Hsin; Peng, I-Chen; Hsieh, Dar-Jen; Wu, Shu-Wei; Yeh, Ming-Long

    2017-08-01

    In this study, we developed a novel method using supercritical carbon dioxide (SCCO2) to prepare acellular porcine cornea (APC). Under gentle extraction conditions using SCCO2 technology, hematoxylin and eosin staining showed that cells were completely lysed, and cell debris, including nuclei, was efficiently removed from the porcine cornea. The SCCO2-treated corneas exhibited intact stromal structures and appropriate mechanical properties. Moreover, no immunological reactions and no neovascularization were observed after lamellar keratoplasty in rabbits. All transplanted grafts and animals survived without complications. The transplanted APCs were opaque after the operation but became transparent within 2 weeks, and complete re-epithelialization of the transplanted APCs was observed within 4 weeks. In conclusion, APCs produced by SCCO2 extraction technology could be an ideal and useful scaffold for corneal tissue engineering. We decellularized the porcine cornea using SCCO2 extraction technology and investigated the characteristics, mechanical properties, and biocompatibility of the decellularized porcine cornea by lamellar keratoplasty in rabbits. To the best of our knowledge, this is the first report describing the use of SCCO2 extraction technology for the preparation of an acellular corneal scaffold. We showed that the cellular components of porcine corneas were efficiently removed and the biomechanical properties of the scaffold well preserved by SCCO2 extraction. SCCO2-treated corneas maintained optical transparency and exhibited appropriate strength to withstand surgical procedures. In vivo, the transplanted corneas showed no evidence of immunological reactions and exhibited good biocompatibility and long-term stability. Our results suggest that the APCs developed by SCCO2 extraction technology could be an ideal and useful scaffold for corneal replacement and corneal tissue engineering.

  3. Review: Magnetic resonance imaging techniques in ophthalmology

    PubMed Central

    Fagan, Andrew J.

    2012-01-01

    Imaging the eye with magnetic resonance imaging (MRI) has proved difficult due to the eye’s propensity to move involuntarily over typical imaging timescales, obscuring the fine structure in the eye due to the resulting motion artifacts. However, advances in MRI technology help to mitigate such drawbacks, enabling the acquisition of high spatiotemporal resolution images with a variety of contrast mechanisms. This review aims to classify the MRI techniques used to date in clinical and preclinical ophthalmologic studies, describing the qualitative and quantitative information that may be extracted and how this may inform on ocular pathophysiology. PMID:23112569

  4. [Optimization of extraction process for tannins from Geranium orientali-tibeticum by supercritical CO2 method].

    PubMed

    Xie, Song; Tong, Zhi-Ping; Tan, Rui; Liu, Xiao-Zhen

    2014-08-01

    In order to optimize the supercritical CO2 extraction conditions for tannins from Geranium orientali-tibeticum, the tannin content was determined by the phosphomolybdenum-tungsten acid-casein method. With extraction pressure, extraction temperature, and extraction time as factors, and the tannin content of the G. orientali-tibeticum extract as the index, the process conditions were optimized by orthogonal test. The optimum conditions were as follows: extraction pressure 25 MPa, extraction temperature 50 °C, and extraction time 1.5 h. Under these conditions, the tannin content of the extract was 12.91 mg x g(-1) and the extraction yield was 3.67%. The established method can be used to assay the tannin content of G. orientali-tibeticum, and the circulating extraction process is stable, feasible, and effective, providing a basis for extracting tannins from this plant.

  5. Smart Shop Assistant - Using Semantic Technologies to Improve Online Shopping

    NASA Astrophysics Data System (ADS)

    Niemann, Magnus; Mochol, Malgorzata; Tolksdorf, Robert

    Internet commerce is experiencing rising complexity: not only are more and more products becoming available online, but the amount of information available on a single product has also been constantly increasing. Thanks to Web 2.0 developments, it is now quite common to involve customers in the creation of product descriptions and the extraction of additional product information by offering feedback forms, product review sites, user weblogs, and other social web services. In this situation, one of the main tasks of a future internet will be to aggregate, sort, and evaluate this huge amount of information to help customers choose the "perfect" product for their needs.

  6. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  7. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  8. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
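
    The optical correlator's matched filtering has a direct digital analogue via the correlation theorem; a sketch with an invented scene and template (illustrative only, not the proposed hybrid system):

        import numpy as np

        # Digital analogue of an optical correlator: matched filtering via FFT.
        rng = np.random.default_rng(0)
        scene = rng.random((128, 128))
        template = np.zeros((16, 16)); template[4:12, 4:12] = 1.0
        scene[60:76, 40:56] += 2.0 * template          # embed the target

        # Correlation theorem: multiply the scene spectrum by the conjugate
        # spectrum of the zero-padded template -- the role the programmable
        # matched filter plays in the optical system.
        pad = np.zeros_like(scene); pad[:16, :16] = template
        corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))).real
        peak = np.unravel_index(corr.argmax(), corr.shape)
        print("correlation peak at:", peak)  # top-left corner of detected target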

  9. Active assistance technology for health-related behavior change: an interdisciplinary review.

    PubMed

    Kennedy, Catriona M; Powell, John; Payne, Thomas H; Ainsworth, John; Boyd, Alan; Buchan, Iain

    2012-06-14

    Information technology can help individuals to change their health behaviors. This is due to its potential for dynamic and unbiased information processing enabling users to monitor their own progress and be informed about risks and opportunities specific to evolving contexts and motivations. However, in many behavior change interventions, information technology is underused by treating it as a passive medium focused on efficient transmission of information and a positive user experience. Our objective was to conduct an interdisciplinary literature review to determine the extent to which the active technological capabilities of dynamic and adaptive information processing are being applied in behavior change interventions and to identify their role in these interventions. We defined key categories of active technology such as semantic information processing, pattern recognition, and adaptation. We conducted the literature search using keywords derived from the categories and included studies that indicated a significant role for an active technology in health-related behavior change. In the data extraction, we looked specifically for the following technology roles: (1) dynamic adaptive tailoring of messages depending on context, (2) interactive education, (3) support for client self-monitoring of behavior change progress, and (4) novel ways in which interventions are grounded in behavior change theories using active technology. The search returned 228 potentially relevant articles, of which 41 satisfied the inclusion criteria. We found that significant research was focused on dialog systems, embodied conversational agents, and activity recognition. The most covered health topic was physical activity. The majority of the studies were early-stage research. Only 6 were randomized controlled trials, of which 4 were positive for behavior change and 5 were positive for acceptability. Empathy and relational behavior were significant research themes in dialog systems for behavior change, with many pilot studies showing a preference for those features. We found few studies that focused on interactive education (3 studies) and self-monitoring (2 studies). Some recent research is emerging in dynamic tailoring (15 studies) and theoretically grounded ontologies for automated semantic processing (4 studies). The potential capabilities and risks of active assistance technologies are not being fully explored in most current behavior change research. Designers of health behavior interventions need to consider the relevant informatics methods and algorithms more fully. There is also a need to analyze the possibilities that can result from interaction between different technology components. This requires deep interdisciplinary collaboration, for example, between health psychology, computer science, health informatics, cognitive science, and educational methodology.

  10. Active Assistance Technology for Health-Related Behavior Change: An Interdisciplinary Review

    PubMed Central

    Kennedy, Catriona M; Powell, John; Payne, Thomas H; Ainsworth, John; Boyd, Alan

    2012-01-01

    Background Information technology can help individuals to change their health behaviors. This is due to its potential for dynamic and unbiased information processing enabling users to monitor their own progress and be informed about risks and opportunities specific to evolving contexts and motivations. However, in many behavior change interventions, information technology is underused by treating it as a passive medium focused on efficient transmission of information and a positive user experience. Objective To conduct an interdisciplinary literature review to determine the extent to which the active technological capabilities of dynamic and adaptive information processing are being applied in behavior change interventions and to identify their role in these interventions. Methods We defined key categories of active technology such as semantic information processing, pattern recognition, and adaptation. We conducted the literature search using keywords derived from the categories and included studies that indicated a significant role for an active technology in health-related behavior change. In the data extraction, we looked specifically for the following technology roles: (1) dynamic adaptive tailoring of messages depending on context, (2) interactive education, (3) support for client self-monitoring of behavior change progress, and (4) novel ways in which interventions are grounded in behavior change theories using active technology. Results The search returned 228 potentially relevant articles, of which 41 satisfied the inclusion criteria. We found that significant research was focused on dialog systems, embodied conversational agents, and activity recognition. The most covered health topic was physical activity. The majority of the studies were early-stage research. Only 6 were randomized controlled trials, of which 4 were positive for behavior change and 5 were positive for acceptability. Empathy and relational behavior were significant research themes in dialog systems for behavior change, with many pilot studies showing a preference for those features. We found few studies that focused on interactive education (3 studies) and self-monitoring (2 studies). Some recent research is emerging in dynamic tailoring (15 studies) and theoretically grounded ontologies for automated semantic processing (4 studies). Conclusions The potential capabilities and risks of active assistance technologies are not being fully explored in most current behavior change research. Designers of health behavior interventions need to consider the relevant informatics methods and algorithms more fully. There is also a need to analyze the possibilities that can result from interaction between different technology components. This requires deep interdisciplinary collaboration, for example, between health psychology, computer science, health informatics, cognitive science, and educational methodology. PMID:22698679

  11. CHLORINATED SOLVENT CONTAMINATED SOILS AND GROUNDWATER: FIELD APPLICATION OF THE SOLVENT EXTRACTION RESIDUAL BIOTREATMENT TECHNOLOGY

    EPA Science Inventory

    A pilot scale demonstration of the Solvent Extraction Residual Biotreatment (SERB) technology was conducted at the former Sage's Dry Cleaner site in Jacksonville, FL. The SERB technology is a treatment train approach to complete site restoration, which combines an active in situ...

  12. Applications of aerospace technology to petroleum extraction and reservoir engineering

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.; Back, L. H.; Berdahl, C. M.; Collins, E. E., Jr.; Gordon, P. G.; Houseman, J.; Humphrey, M. F.; Hsu, G. C.; Ham, J. D.; Marte, J. E.

    1977-01-01

    Through contacts with the petroleum industry, the petroleum service industry, universities, and government agencies, important petroleum extraction problems were identified. For each problem, areas of aerospace technology that might aid in its solution were also identified, where possible. Some of the problems were selected for further consideration. Work on these problems led to the formulation of specific concepts as candidates for development. Each concept addresses the solution of specific extraction problems and makes use of specific areas of aerospace technology.

  13. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance, and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies, yet the reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lie in finding a robust correlation between SEM waveform features and CD-SEM bias, and in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases, the MU of a state-of-the-art CD-SEM was improved threefold, to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
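
    A minimal sketch of such an empirical bias model is shown below, assuming scikit-learn; the waveform features, training values, and the linear form of the model are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training set: waveform-derived features per measurement
# site (e.g., edge slope, peak width) and the bias observed against
# reference metrology: bias = CD_SEM - CD_RM (nm).
X_train = np.array([[0.82, 3.1], [0.75, 3.4], [0.91, 2.8], [0.70, 3.6]])
bias_train = np.array([1.8, 2.3, 1.2, 2.7])

model = LinearRegression().fit(X_train, bias_train)

def corrected_cd(cd_sem, waveform_features):
    """Subtract the model-predicted bias from the raw CD-SEM reading."""
    predicted_bias = model.predict(np.array([waveform_features]))[0]
    return cd_sem - predicted_bias

print(corrected_cd(45.0, [0.80, 3.2]))  # corrected CD in nm
```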

  14. Extracting Temporal and Spatial Distribution Information about Algal Blooms Based on Multitemporal MODIS

    NASA Astrophysics Data System (ADS)

    Chunguang, L.; Qingjiu, T.

    2012-07-01

    Based on MODIS remote sensing data, a method for extracting the temporal and spatial distribution of algal blooms is developed and applied. Using this method, the spatiotemporal dynamics of blooms in Taihu Lake from 2009 to 2011 were obtained, and the variation of cyanobacterial blooms in the lake is analyzed and discussed. The algae bloom frequency index (AFI) and algae bloom sustainability index (ASI) are the key criteria describing interannual and inter-monthly variation over the whole lake or its subregions. The AFI and ASI for 2009-2011 reveal that bloom frequency decreased from the north and west of Taihu Lake toward the east and south. The monthly AFI variation shows twin peaks at high levels and a general lagging trend. In the subregion statistics, the IBD and ASI in 2011 show an anomaly on the border between Gongshan Bay and the Central Lake: the bloom onset date is clearly earlier than in the same subregion in previous years and than in other subregions in the same year.
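
    The abstract does not give a formula for the AFI; a plausible per-pixel definition (bloom detections divided by the number of valid observations) is sketched below with numpy, purely as an illustration under that assumption.

```python
import numpy as np

def bloom_frequency_index(bloom_masks, valid_masks):
    """Per-pixel bloom frequency: bloom detections divided by the
    number of cloud-free (valid) MODIS observations.

    bloom_masks, valid_masks: boolean arrays of shape (time, rows, cols).
    """
    detections = np.sum(bloom_masks & valid_masks, axis=0)
    observations = np.sum(valid_masks, axis=0)
    return np.divide(detections, observations,
                     out=np.zeros(detections.shape, dtype=float),
                     where=observations > 0)

# Toy stack: 10 dates over a 4x4 lake grid.
rng = np.random.default_rng(1)
blooms = rng.random((10, 4, 4)) > 0.7
valid = rng.random((10, 4, 4)) > 0.2
print(bloom_frequency_index(blooms, valid))
```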

  15. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    PubMed

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.
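
    As a hedged illustration of the kind of higher-level analytics described, the pandas sketch below computes one hypothetical indicator (report turnaround time by modality) from a toy warehouse extract; the column names and values are invented, not the department's actual schema.

```python
import pandas as pd

# Hypothetical extract from a RIS/PACS warehouse: one row per exam.
exams = pd.DataFrame({
    "modality": ["CT", "MR", "CT", "XR"],
    "ordered":   pd.to_datetime(["2009-01-05 08:00", "2009-01-05 09:10",
                                 "2009-01-06 11:00", "2009-01-06 11:30"]),
    "finalized": pd.to_datetime(["2009-01-05 10:30", "2009-01-06 09:00",
                                 "2009-01-06 15:00", "2009-01-06 12:05"]),
})

# One dashboard indicator: mean report turnaround time (hours) by modality.
exams["turnaround_h"] = (exams["finalized"] - exams["ordered"]).dt.total_seconds() / 3600
indicator = exams.groupby("modality")["turnaround_h"].mean().round(1)
print(indicator)  # feeds the web dashboard's graphical layer
```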

  16. Alert management for home healthcare based on home automation analysis.

    PubMed

    Truong, T T; de Lamotte, F; Diguet, J-Ph; Said-Hocine, F

    2010-01-01

    Rising healthcare costs for elderly and disabled people can be contained by using information technology to give people autonomy at home. In this paper, we present an original, sensorless alert management solution that discriminates between multimedia and home automation services and extracts highly regular home activities to serve as virtual sensors for alert management. Results on simulated data, based on a real context, allow us to evaluate our approach before applying it to real data.

  17. JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium

    DTIC Science & Technology

    1990-10-29

    screen; the relative attitude is then determined. 2) Video Sensor System: specific patterns (grapple target, etc.) drawn on the target spacecraft, or the ... entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and ... standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and

  18. Information Management of Web Application Based Environmental Performance Management in Concentrating Division of PTFI

    NASA Astrophysics Data System (ADS)

    Susanto, Arif; Mulyono, Nur Budi

    2018-02-01

    The change of environmental management system standards to the latest version, ISO 14001:2015, may change the data and information needed for decision making and for achieving objectives across the organization. Information management is the organization's responsibility to ensure effectiveness and efficiency from the creation, storage, and processing of information through its distribution, supporting operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program, with supporting technology, for the PTFI Concentrating Division, in line with the organization's environmental management objectives under the ISO 14001:2015 standard. The materials and methods cover the technical aspects of information management, namely web-based application development using usage-centered design. The results show that the use of Single Sign On makes it easy for users to interact further with the environmental management system. The web-based application was developed by creating an entity relationship diagram (ERD) and performing information extraction focused on attributes, keys, and the determination of constraints; the ERD was derived from the relational database schemas of several environmental performance databases in the Concentrating Division.

  19. Study on Karst Information Identification of Qiandongnan Prefecture Based on RS and GIS Technology

    NASA Astrophysics Data System (ADS)

    Yao, M.; Zhou, G.; Wang, W.; Wu, Z.; Huang, Y.; Huang, X.

    2018-04-01

    Karst areas are natural resource bases, but owing to their special geological environment they suffer alternating droughts and floods, frequent karst collapse, rocky desertification, and other resource and environmental problems that seriously restrict sustainable economic and social development. This paper therefore identifies and studies karst landforms and clarifies their distribution, providing basic data for the rational development of resources in karst regions and for the control of desertification. Because of the uniqueness of the karst landscape, it cannot be directly recognized and extracted by computer from remote sensing images alone. This paper therefore adopts an "RS + DEM" approach: based on Landsat-5 TM imagery from 2010 and DEM data, karst information is identified through human-computer interactive interpretation using slope maps, vegetation distribution maps, karst rocky desertification distribution maps, and other auxiliary data, extracting peak forest, peak cluster, and isolated peaks, and further extracting karst depressions. Experiments show that this method realizes the "RS + DEM" mode through a reasonable combination of remote sensing images and DEM data. It not only effectively extracts karst areas covered with vegetation but also quickly and accurately delimits karst areas, greatly improving the efficiency and precision of visual interpretation. The interpretation accuracy of karst information in the study area is 86.73 %.
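
    A minimal sketch of the "RS + DEM" idea is given below, assuming numpy: a vegetation mask derived from the imagery is combined with a slope mask derived from the DEM. The thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def karst_candidate_mask(ndvi, slope_deg, ndvi_max=0.3, slope_min=25.0):
    """Flag pixels that look like exposed karst terrain: sparse
    vegetation (low NDVI from the TM image) on steep DEM-derived
    slopes. Thresholds are illustrative, not from the paper."""
    return (ndvi < ndvi_max) & (slope_deg > slope_min)

# Toy 3x3 example.
ndvi = np.array([[0.1, 0.5, 0.2], [0.4, 0.1, 0.1], [0.6, 0.2, 0.05]])
slope = np.array([[30, 10, 40], [5, 35, 20], [45, 50, 33]])
print(karst_candidate_mask(ndvi, slope))
```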

  20. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
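
    A rough sketch of the connectivity-based artifact logic is shown below, assuming numpy and scipy; the simple intensity threshold stands in for the paper's iterative clustering, and the minimum component size is an invented parameter.

```python
import numpy as np
from scipy import ndimage

def structured_pixels(channel, min_size=4):
    """Separate structured pixels from background noise: threshold on
    intensity (a stand-in for the paper's iterative clustering), then
    keep only connected components above a minimum size."""
    binary = channel > channel.mean() + channel.std()
    labeled, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labeled, index=range(1, n + 1))
    return np.isin(labeled, 1 + np.flatnonzero(sizes >= min_size))

def artifact_mask(channel_a, channel_b):
    """Structured pixels co-located across image channels are treated
    as artifacts and removed before data extraction."""
    return structured_pixels(channel_a) & structured_pixels(channel_b)
```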

  1. Giving Voice to Emotion: Voice Analysis Technology Uncovering Mental States is Playing a Growing Role in Medicine, Business, and Law Enforcement.

    PubMed

    Allen, Summer

    2016-01-01

    It's tough to imagine anything more frustrating than interacting with a call center. Generally, people don't reach out to call centers when they're happy-they're usually trying to get help with a problem or gearing up to do battle over a billing error. Add in an automatic phone tree, and you have a recipe for annoyance. But what if that robotic voice offering you a smorgasbord of numbered choices could tell that you were frustrated and then funnel you to an actual human being? This type of voice analysis technology exists, and it's just one example of the many ways that computers can use your voice to extract information about your mental and emotional state-including information you may not think of as being accessible through your voice alone.

  2. An integrative review of communication between parents and nurses of hospitalized technology-dependent children.

    PubMed

    Giambra, Barbara K; Stiffler, Deborah; Broome, Marion E

    2014-12-01

    With advances in health care, the population of children who are technology-dependent is increasing and, therefore, the need for nurses to understand how best to engage in communication with the parents of these children is critical. Shared communication between the parents of hospitalized technology-dependent children and their nurses is essential to provide optimal care for the child. The components and behaviors of the parent-nurse communication process that improve mutual understanding of optimal care for the child had not previously been examined. Among parents of hospitalized technology-dependent children and their nurses, what communication behaviors, components, concepts, or processes improve mutual understanding of optimal care for the child? An integrative review of both qualitative and quantitative studies was conducted. Key words including communication, hospitalized, nurse, parent, pediatric, and technology-dependent were used to search databases such as Cumulative Index to Nursing and Allied Health and Medline for years 2000-2014. The data regarding the process of parent-nurse communication were extracted as they related to the mutual understanding of optimal care for the child. The data were grouped into themes and compared across studies, designs, populations, and settings. Six articles were identified that provided information regarding the processes of shared communication among the parents of hospitalized technology-dependent children and their nurses. Providing clear information, involving parents in care decisions, trust and respect for each other's expertise, caring attitudes, advocacy, and role negotiation were all found to be important factors in shared parent-nurse communication. The results of this integrative review inform our understanding of the parent-nurse communication process. The findings provide nurses with an understanding of strategies to better engage in respectful, engaging, and intentional communication with parents of hospitalized technology-dependent children and improve patient outcomes. © 2014 Sigma Theta Tau International.

  3. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, laser induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser induced breakdown spectroscopy can detect multiple elements simultaneously, quickly and simply, without complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested. The technology is very promising in many fields. Separating, fitting, and extracting spectral feature lines is very important in laser induced breakdown spectroscopy; it is the cornerstone of spectral feature recognition and of subsequent element concentration inversion research. In order to realize effective separation, fitting, and extraction of spectral feature lines, the initial parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlaps with another line (Fe I: 427.176 nm), was separated and extracted using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to feedback after each iteration, preventing the iteration from diverging and ensuring fast convergence. The damped least squares method yields better separation, fitting, and extraction of spectral feature lines and gives more accurate intensity values for them. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding lines were obtained using the damped least squares method and the ordinary least squares method separately. Calibration curves relating spectral line intensity to chromium concentration were plotted for each method, and their linear correlations were compared. The experimental results showed that the linear correlation between intensity and chromium concentration obtained by the damped least squares method was better than that obtained by the least squares method. The damped least squares method is therefore stable, reliable, and suitable for separating, fitting, and extracting spectral feature lines in laser induced breakdown spectroscopy.
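
    A minimal sketch of the approach is given below, assuming numpy and scipy: two overlapping Gaussians near the Cr I 427.480 nm / Fe I 427.176 nm pair are fitted with scipy's Levenberg-Marquardt solver, which is a damped least squares method. The synthetic spectrum, line shapes, and starting values are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import least_squares

def two_gaussians(params, wl):
    a1, c1, w1, a2, c2, w2, base = params
    return (a1 * np.exp(-((wl - c1) / w1) ** 2)
            + a2 * np.exp(-((wl - c2) / w2) ** 2) + base)

def residuals(params, wl, spectrum):
    return two_gaussians(params, wl) - spectrum

# Synthetic overlapped doublet standing in for the measured spectrum.
wl = np.linspace(426.8, 427.9, 200)
true = two_gaussians([900, 427.480, 0.06, 600, 427.176, 0.06, 50], wl)
spectrum = true + np.random.default_rng(2).normal(0, 10, wl.size)

# Levenberg-Marquardt: Gauss-Newton with a dynamically adjusted damping
# factor, as the paper describes.
x0 = [800, 427.48, 0.05, 500, 427.18, 0.05, 0]
fit = least_squares(residuals, x0, args=(wl, spectrum), method="lm")
print(fit.x[0])  # separated Cr I line intensity
```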

  4. Reproducible Tissue Homogenization and Protein Extraction for Quantitative Proteomics Using MicroPestle-Assisted Pressure-Cycling Technology.

    PubMed

    Shao, Shiying; Guo, Tiannan; Gross, Vera; Lazarev, Alexander; Koh, Ching Chiek; Gillessen, Silke; Joerger, Markus; Jochum, Wolfram; Aebersold, Ruedi

    2016-06-03

    The reproducible and efficient extraction of proteins from biopsy samples for quantitative analysis is a critical step in biomarker and translational research. Recently, we described a method consisting of pressure-cycling technology (PCT) and sequential windowed acquisition of all theoretical fragment ions-mass spectrometry (SWATH-MS) for the rapid quantification of thousands of proteins from biopsy-size tissue samples. As an improvement of the method, we have incorporated the PCT-MicroPestle into the PCT-SWATH workflow. The PCT-MicroPestle is a novel, miniaturized, disposable mechanical tissue homogenizer that fits directly into the microTube sample container. We optimized the pressure-cycling conditions for tissue lysis with the PCT-MicroPestle and benchmarked the performance of the system against the conventional PCT-MicroCap method using mouse liver, heart, brain, and human kidney tissues as test samples. The data indicate that the digestion of the PCT-MicroPestle-extracted proteins yielded 20-40% more MS-ready peptide mass from all tissues tested with a comparable reproducibility when compared to the conventional PCT method. Subsequent SWATH-MS analysis identified a higher number of biologically informative proteins from a given sample. In conclusion, we have developed a new device that can be seamlessly integrated into the PCT-SWATH workflow, leading to increased sample throughput and improved reproducibility at both the protein extraction and proteomic analysis levels when applied to the quantitative proteomic analysis of biopsy-level samples.

  5. Full Characterization of CO2-Oil Properties On-Chip: Solubility, Diffusivity, Extraction Pressure, Miscibility, and Contact Angle.

    PubMed

    Sharbatian, Atena; Abedini, Ali; Qi, ZhenBang; Sinton, David

    2018-02-20

    Carbon capture, storage, and utilization technologies target a reduction in net CO2 emissions to mitigate greenhouse gas effects. The largest such projects worldwide involve storing CO2 through enhanced oil recovery-a technologically and economically feasible approach that combines both storage and oil recovery. Successful implementation relies on detailed measurements of CO2-oil properties at relevant reservoir conditions (P = 2.0-13.0 MPa and T = 23 and 50 °C). In this paper, we demonstrate a microfluidic method to quantify the comprehensive suite of mutual properties of a CO2 and crude oil mixture including solubility, diffusivity, extraction pressure, minimum miscibility pressure (MMP), and contact angle. The time-lapse oil swelling/extraction in response to CO2 exposure under stepwise increasing pressure was quantified via fluorescence microscopy, using the inherent fluorescence property of the oil. The CO2 solubilities and diffusion coefficients were determined from the swelling process with measurements in strong agreement with previous results. The CO2-oil MMP was determined from the subsequent oil extraction process with measurements within 5% of previous values. In addition, the oil-CO2-silicon contact angle was measured throughout the process, with contact angle increasing with pressure. In contrast with conventional methods, which require days and ∼500 mL of fluid sample, the approach here provides a comprehensive suite of measurements, 100-fold faster with less than 1 μL of sample, and an opportunity to better inform large-scale CO2 projects.

  6. Innovative Alternative Technologies to Extract Carotenoids from Microalgae and Seaweeds

    PubMed Central

    Poojary, Mahesha M.; Barba, Francisco J.; Aliakbarian, Bahar; Donsì, Francesco; Pataro, Gianpiero; Dias, Daniel A.; Juliano, Pablo

    2016-01-01

    Marine microalgae and seaweeds (macroalgae) represent a sustainable source of various bioactive natural carotenoids, including β-carotene, lutein, astaxanthin, zeaxanthin, violaxanthin and fucoxanthin. Recently, the large-scale production of carotenoids from algal sources has gained significant interest with respect to commercial and industrial applications for health, nutrition, and cosmetics. Although conventional processing technologies, based on solvent extraction, offer a simple approach to isolating carotenoids, they suffer several inherent limitations, including low efficiency (extraction yield), selectivity (purity), high solvent consumption, and long treatment times, which have led to advancements in the search for innovative extraction technologies. This comprehensive review summarizes the recent trends in the extraction of carotenoids from microalgae and seaweeds through the assistance of different innovative techniques, such as pulsed electric fields, liquid pressurization, supercritical fluids, subcritical fluids, microwaves, ultrasounds, and high-pressure homogenization. In particular, the review critically analyzes technologies, characteristics, advantages, and shortcomings of the different innovative processes, highlighting the differences in terms of yield, selectivity, and economic and environmental sustainability. PMID:27879659

  7. Innovative Alternative Technologies to Extract Carotenoids from Microalgae and Seaweeds.

    PubMed

    Poojary, Mahesha M; Barba, Francisco J; Aliakbarian, Bahar; Donsì, Francesco; Pataro, Gianpiero; Dias, Daniel A; Juliano, Pablo

    2016-11-22

    Marine microalgae and seaweeds (macroalgae) represent a sustainable source of various bioactive natural carotenoids, including β-carotene, lutein, astaxanthin, zeaxanthin, violaxanthin and fucoxanthin. Recently, the large-scale production of carotenoids from algal sources has gained significant interest with respect to commercial and industrial applications for health, nutrition, and cosmetics. Although conventional processing technologies, based on solvent extraction, offer a simple approach to isolating carotenoids, they suffer several inherent limitations, including low efficiency (extraction yield), selectivity (purity), high solvent consumption, and long treatment times, which have led to advancements in the search for innovative extraction technologies. This comprehensive review summarizes the recent trends in the extraction of carotenoids from microalgae and seaweeds through the assistance of different innovative techniques, such as pulsed electric fields, liquid pressurization, supercritical fluids, subcritical fluids, microwaves, ultrasounds, and high-pressure homogenization. In particular, the review critically analyzes technologies, characteristics, advantages, and shortcomings of the different innovative processes, highlighting the differences in terms of yield, selectivity, and economic and environmental sustainability.

  8. Extracting DEM from airborne X-band data based on PolInSAR

    NASA Astrophysics Data System (ADS)

    Hou, X. X.; Huang, G. M.; Zhao, Z.

    2015-06-01

    Polarimetric Interferometric Synthetic Aperture Radar (PolInSAR) is a new direction in SAR remote sensing that combines polarimetric multichannel information with interferometric information. It is of great significance for extracting DEMs in regions where DEM precision is otherwise low, such as vegetated areas and densely built-up areas. In this paper we describe our experiments with high-resolution X-band fully polarimetric SAR data acquired by a dual-baseline interferometric airborne SAR system over an area of Danling in southern China. The Pauli algorithm is used to generate dual-polarimetric interferometric data, and Singular Value Decomposition (SVD), Numerical Radius (NR), and Phase Diversity (PD) methods are used to generate fully polarimetric interferometric data. The polarimetric interferometric information is then used to extract the DEM through pre-filtering, image registration, image resampling, coherence optimization, multilook processing, flat-earth removal, interferogram filtering, phase unwrapping, parameter calibration, height derivation, and geocoding. The processing system, named SARPlore, was developed in VC++ under the leadership of the Chinese Academy of Surveying and Mapping. Finally, comparing the optimization results with single-polarimetric interferometry shows that the optimization methods reduce interferometric noise and phase unwrapping residuals and improve the precision of the DEM. The result of fully polarimetric interferometry is better than that of dual-polarimetric interferometry, and the degree of improvement varies with terrain.
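
    A minimal sketch of one step in this chain, coherence estimation between two complex SAR channels, is given below with numpy and scipy; the window size and test data are illustrative, and coherence optimization would then search for the polarimetric combination that maximizes this quantity.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Interferometric coherence between two complex SAR images,
    estimated with a moving-average window:
    gamma = |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>)."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Toy complex channels standing in for registered SAR acquisitions.
rng = np.random.default_rng(5)
s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
s2 = s1 + 0.3 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
print(coherence(s1, s2).mean())
```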

  9. Perceptual learning modules in mathematics: enhancing students' pattern recognition, structure extraction, and fluency.

    PubMed

    Kellman, Philip J; Massey, Christine M; Son, Ji Y

    2010-04-01

    Learning in educational settings emphasizes declarative and procedural knowledge. Studies of expertise, however, point to other crucial components of learning, especially improvements produced by experience in the extraction of information: perceptual learning (PL). We suggest that such improvements characterize both simple sensory and complex cognitive, even symbolic, tasks through common processes of discovery and selection. We apply these ideas in the form of perceptual learning modules (PLMs) to mathematics learning. We tested three PLMs, each emphasizing different aspects of complex task performance, in middle and high school mathematics. In the MultiRep PLM, practice in matching function information across multiple representations improved students' abilities to generate correct graphs and equations from word problems. In the Algebraic Transformations PLM, practice in seeing equation structure across transformations (but not solving equations) led to dramatic improvements in the speed of equation solving. In the Linear Measurement PLM, interactive trials involving extraction of information about units and lengths produced successful transfer to novel measurement problems and fraction problem solving. Taken together, these results suggest (a) that PL techniques have the potential to address crucial, neglected dimensions of learning, including discovery and fluent processing of relations; (b) PL effects apply even to complex tasks that involve symbolic processing; and (c) appropriately designed PL technology can produce rapid and enduring advances in learning. Copyright © 2009 Cognitive Science Society, Inc.

  10. Utility of linking primary care electronic medical records with Canadian census data to study the determinants of chronic disease: an example based on socioeconomic status and obesity.

    PubMed

    Biro, Suzanne; Williamson, Tyler; Leggett, Jannet Ann; Barber, David; Morkem, Rachael; Moore, Kieran; Belanger, Paul; Mosley, Brian; Janssen, Ian

    2016-03-11

    Electronic medical records (EMRs) used in primary care contain a breadth of data that can be used in public health research. Patient data from EMRs could be linked with other data sources, such as a postal code linkage with Census data, to obtain additional information on environmental determinants of health. While promising, successful linkages between primary care EMRs with geographic measures is limited due to ethics review board concerns. This study tested the feasibility of extracting full postal code from primary care EMRs and linking this with area-level measures of the environment to demonstrate how such a linkage could be used to examine the determinants of disease. The association between obesity and area-level deprivation was used as an example to illustrate inequalities of obesity in adults. The analysis included EMRs of 7153 patients aged 20 years and older who visited a single, primary care site in 2011. Extracted patient information included demographics (date of birth, sex, postal code) and weight status (height, weight). Information extraction and management procedures were designed to mitigate the risk of individual re-identification when extracting full postal code from source EMRs. Based on patients' postal codes, area-based deprivation indexes were created using the smallest area unit used in Canadian censuses. Descriptive statistics and socioeconomic disparity summary measures of linked census and adult patients were calculated. The data extraction of full postal code met technological requirements for rendering health information extracted from local EMRs into anonymized data. The prevalence of obesity was 31.6 %. There was variation of obesity between deprivation quintiles; adults in the most deprived areas were 35 % more likely to be obese compared with adults in the least deprived areas (Chi-Square = 20.24(1), p < 0.0001). Maps depicting spatial representation of regional deprivation and obesity were created to highlight high risk areas. An area based socio-economic measure was linked with EMR-derived objective measures of height and weight to show a positive association between area-level deprivation and obesity. The linked dataset demonstrates a promising model for assessing health disparities and ecological factors associated with the development of chronic diseases with far reaching implications for informing public health and primary health care interventions and services.
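
    A hedged sketch of the quintile comparison is given below with numpy and scipy; the counts are invented to mimic the reported direction of the gradient, not the study's data, and the study's own test compared the most and least deprived quintiles with one degree of freedom.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: obese / non-obese patients by deprivation quintile
# (Q1 = least deprived ... Q5 = most deprived).
obese     = np.array([380, 410, 450, 480, 520])
not_obese = np.array([1020, 980, 930, 890, 840])
table = np.vstack([obese, not_obese])

chi2, p, dof, _ = chi2_contingency(table)
prevalence = obese / (obese + not_obese)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.2g}")
print("obesity prevalence by quintile:", prevalence.round(3))
```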

  11. The patient experience of high technology medical imaging: a systematic review of the qualitative evidence.

    PubMed

    Munn, Zachary; Jordan, Zoe

    When presenting to an imaging department, the person who is to be imaged is often in a vulnerable state, and out of their comfort zone. It is the role of the medical imaging technician to produce a high quality image and facilitate patient care throughout the imaging process. Qualitative research is necessary to better inform the medical imaging technician and to help them to understand the experience of the person being imaged. Some issues that have been identified in the literature include fear, claustrophobia, dehumanisation, and an uncomfortable or unusual experience. There is now a small but worthwhile qualitative literature base focusing on the patient experience in high technology imaging. There is no current qualitative synthesis of the literature on the patient experience in high technology imaging. It is therefore timely and worthwhile to produce a systematic review to identify and summarise the existent literature exploring the patient experience of high technology imaging. To identify the patient experience of high technology medical imaging. Studies that were of a qualitative design that explored the phenomenon of interest, the patient experience of high technology medical imaging. Participants included anyone who had undergone one of these procedures. The search strategy aimed to find both published and unpublished studies, and was conducted over a period from June - September 2010. No time limits were imposed on this search strategy. A three-step search strategy was utilised in this review. All studies that met the criteria were selected for retrieval. They were then assessed by two independent reviewers for methodological validity prior to inclusion in the review using standardised critical appraisal instruments from the Joanna Briggs Institute Qualitative Assessment and Review Instrument. Data was extracted from papers included in the review using the standardised data extraction tool from the Joanna Briggs Institute Qualitative Assessment and Review Instrument. Research findings were pooled using the Qualitative Assessment and Review Instrument. Following the search and critical appraisal processes, 15 studies were identified that were deemed of suitable quality to be included in the review. From these 15 studies, 127 findings were extracted, forming 33 categories and 11 synthesised findings. These synthesised findings related to the patient experience, the emotions they felt (whether negative or positive), the need for support and information, and highlighted the importance of imaging to the patient. The synthesised findings in this review highlight the diverse, unique and challenging ways in which people experience imaging with MRI and CT scanners. All health professionals involved in imaging need to be aware of the different ways each patient may experience imaging, and provide them with ongoing support and information. The implications for practice are derived directly from the results of the meta-synthesis, and each of the 11 synthesised findings. There is still scope for further high methodological qualitative studies to be conducted in this field, particularly in the field of nuclear medicine imaging and Positron Emission Tomography. Further studies may be conducted in certain patient groups, and in certain age ranges. No studies were found assessing the experience of children undergoing high technology imaging.

  12. Multisensor multiresolution data fusion for improvement in classification

    NASA Astrophysics Data System (ADS)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancement of technology has facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusing such data may yield application-dependent information that would otherwise remain trapped within the individual datasets. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral LWIR (1 m) and fine-resolution RGB (20 cm) data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil, and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling by interpolation to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered, and the features are extracted from all three bands of the RGB data. After feature extraction, Support Vector Machines (SVMs) are used for training and classification. To increase classification accuracy, post-processing steps such as removal of spurious noise (e.g., salt-and-pepper noise) are performed, followed by majority-voting filtering within objects for better object classification.
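
    The sketch below illustrates the texture-feature-plus-SVM step with numpy and scikit-learn; the hand-rolled horizontal co-occurrence statistics, quantization levels, and random patches are simplifying assumptions, not the authors' exact feature set.

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(patch, levels=8):
    """Simple horizontal co-occurrence statistics (contrast, energy)
    for one image patch; a lightweight stand-in for the occurrence
    and co-occurrence features described above."""
    q = np.floor(patch * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    return [contrast, energy]

# Toy training: texture features from random patches with class labels.
rng = np.random.default_rng(3)
patches = rng.random((20, 16, 16))
labels = np.array([0, 1] * 10)  # e.g., "Road" vs "Trees"
X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```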

  13. Information Technology and the Autonomous Control of a Mars In-Situ Propellant Production System

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Sridhar, K. R.; Larson, William E.; Clancy, Daniel J.; Peschur, Charles; Briggs, Geoffrey A.; Zornetzer, Steven F. (Technical Monitor)

    1999-01-01

    With the rapidly increasing performance of information technology, i.e., computer hardware and software systems, as well as networks and communication systems, a new capability is being developed that holds the clear promise of greatly increased exploration capability, along with dramatically reduced design, development, and operating costs. These new intelligent systems technologies, utilizing knowledge-based software and very high performance computer systems, will provide new design and development tools, scheduling mechanisms, and vehicle and system health monitoring capabilities. In addition, specific technologies such as neural nets will provide a degree of machine intelligence and associated autonomy which has previously been unavailable to the mission and spacecraft designer and to the system operator. One of the most promising applications of these new information technologies is to the area of in situ resource utilization. Useful resources such as oxygen, compressed carbon dioxide, water, methane, and buffer gases can be extracted and/or generated from planetary atmospheres, such as the Martian atmosphere. These products, when used for propulsion and life-support needs can provide significant savings in the launch mass and costs for both robotic and crewed missions. In the longer term the utilization of indigenous resources is an enabling technology that is vital to sustaining long duration human presence on Mars. This paper will present the concepts that are currently under investigation and development for mining the Martian atmosphere, such as temperature-swing adsorption, zirconia electrolysis etc., to create propellants and life-support materials. This description will be followed by an analysis of the information technology and control needs for the reliable and autonomous operation of such processing plants in a fault tolerant manner, as well as the approach being taken for the development of the controlling software. Finally, there will be a brief discussion of the verification and validation process so crucial to the implementation of mission-critical software.

  14. [Application of microwave technology in extraction process of Guizhi Fuling capsule].

    PubMed

    Wang, Zheng-kuan; Zhou, Mao; Liu, Yuan; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei

    2015-06-01

    In this paper, the conditions of the microwave extraction process for Guizhi Fuling capsule were optimized at pilot scale. First, through single-factor experiments, the overall influence and range of each factor were determined. Second, an L9(3^4) orthogonal test was used for optimization, and the contents of gallic acid, paeoniflorin, benzoic acid, cinnamic acid, benzoyl paeoniflorin, and amygdalin in the extraction liquid were determined; the extraction rate and a comprehensive evaluation score were calculated and used as the judgment basis for extraction performance. The optimum microwave extraction process for Guizhi Fuling capsule was as follows: a liquid-to-solid ratio of 6:1 with drinking water, a microwave power of 6 kW, and an extraction time of 20 min, repeated 3 times. The process was verified on three scale-up batches with stable results; compared with conventional water extraction, it saves energy and time and is more efficient. The above results show that the optimized extraction process is efficient, stable, and feasible.
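
    A minimal sketch of analyzing an L9(3^4) orthogonal test by range analysis is shown below with numpy; the comprehensive scores are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) design matrix: 9 runs, 4 factors at 3 levels (0, 1, 2).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Hypothetical comprehensive evaluation scores for the 9 extraction runs.
score = np.array([72.1, 78.4, 75.0, 81.2, 76.8, 79.5, 83.0, 80.2, 77.6])

# Range analysis: mean response at each level of each factor; the
# factor with the largest range R has the strongest influence, and the
# best level of each factor is the one with the highest mean.
for factor in range(4):
    means = [score[L9[:, factor] == lvl].mean() for lvl in range(3)]
    print(f"factor {factor}: level means {np.round(means, 2)}, "
          f"R = {max(means) - min(means):.2f}")
```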

  15. A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli

    2007-06-01

    Based on an in-depth analysis of the current status and existing problems of GIS applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The capabilities of GIS applications in AIDS epidemic prevention are described first, and some mature epidemic spreading models are then discussed for extracting the computation parameters. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model-running platforms share a common spatial database and the computing results can be spatially visualized on desktop or Web GIS clients. Finally, a complete solution for establishing the decision support system is offered based on the model integration methods and ESRI COM GIS software packages. The overall decision support system is composed of data acquisition sub-systems, network communication sub-systems, model integration sub-systems, AIDS epidemic information spatial database sub-systems, AIDS epidemic information querying and statistical analysis sub-systems, AIDS epidemic dynamic surveillance sub-systems, AIDS epidemic information spatial analysis and decision support sub-systems, as well as AIDS epidemic information publishing sub-systems based on Web GIS.

  16. Health technology funding decision-making processes around the world: the same, yet different.

    PubMed

    Stafinski, Tania; Menon, Devidas; Philippon, Donald J; McCabe, Christopher

    2011-06-01

    All healthcare systems routinely make resource allocation decisions that trade off potential health gains to different patient populations. However, when such trade-offs relate to the introduction of new, promising health technologies, perceived 'winners' and 'losers' are more apparent. In recent years, public scrutiny over such decisions has intensified, raising the need to better understand how they are currently made and how they might be improved. The objective of this paper is to critically review and compare current processes for making health technology funding decisions at the regional, state/provincial and national level in 20 countries. A comprehensive search for published, peer-reviewed and grey literature describing actual national, state/provincial and regional/institutional technology decision-making processes was conducted. Information was extracted by two independent reviewers and tabulated to facilitate qualitative comparative analyses. To identify strengths and weaknesses of processes identified, websites of corresponding organizations were searched for commissioned reviews/evaluations, which were subsequently analysed using standard qualitative methods. A total of 21 national, four provincial/state and six regional/institutional-level processes were found. Although information on each one varied, they could be grouped into four sequential categories: (i) identification of the decision problem; (ii) information inputs; (iii) elements of the decision-making process; and (iv) public accountability and decision implementation. While information requirements of all processes appeared substantial and decision-making factors comprehensive, the way in which they were utilized was often unclear, as were approaches used to incorporate social values or equity arguments into decisions. A comprehensive inventory of approaches to implementing the four main components of all technology funding decision-making processes was compiled, from which areas for future work or research aimed at improving the acceptability of decisions were identified. They include the explication of decision criteria and social values underpinning processes.

  17. Pressure-relief and methane production performance of pressure relief gas extraction technology in the longwall mining

    NASA Astrophysics Data System (ADS)

    Zhang, Cun; Tu, Shihao; Chen, Min; Zhang, Lei

    2017-02-01

    Pressure relief gas extraction technology (PRGET) has been successfully implemented at many locations as a coal mine methane exploitation and outburst prevention technology. Comprehensive PRGET, including gob gas ventholes (GGV), crossing-seam drilling holes (CSDH), large-diameter horizontal long drilling holes (LDHLDH), and buried pipes for extraction (BPE), has been used to extract abundant pressure-relief methane (PRM) during protective coal seam mining; these techniques mitigated the dangers associated with coal and gas outbursts in 13-1 coal seam mining in the Huainan coalfield. These extraction technologies can ensure safe protective seam mining and effectively extract coal and gas. This article analyses PRGET production performance and verifies it against field measurements. The results showed that PRGET drilling to extract PRM from the protected coal seam significantly reduced methane emissions into the longwall ventilation system and produced highly efficient extraction. Material balance analyses indicated a significant decrease in gas content and pressure in the protected coal seam, from 8.78 m³/t and 4.2 MPa to 2.34 m³/t and 0.285 MPa, respectively. Field measurements of the residual gas content in the protected coal seam (the 13-1 coal seam) confirmed the reliability of the material balance analyses, and the pressure relief range of PRGET in the protected coal seam was obtained.

  18. Increasing tsunami risk awareness via mobile application

    NASA Astrophysics Data System (ADS)

    Leelawat, N.; Suppasri, A.; Latcharote, P.; Imamura, F.; Abe, Y.; Sugiyasu, K.

    2017-02-01

    In the information and communication technology era, smartphones have become a necessity. With the capacity and availability of smart technologies, a number of benefits are possible. As a result, designing a mobile application to increase tsunami awareness has been proposed, and a prototype has been designed and developed. The application uses data from the 2011 Great East Japan Tsunami. Based on the current location determined by a GPS function matched with the nearest point extracted from the detailed mesh data of that earlier disaster, the application generates the inundation depth at the user’s location. Thus, not only local people but also tourists visiting the affected areas can understand the risks involved. Application testing has been conducted in an evacuation experiment involving both Japanese and foreign students. The proposed application can be used as a supplementary information tool in tsunami evacuation drills. It also supports the idea of smart tourism: when people realize their risks, they possess risk awareness and hence can reduce their risks. This application can also be considered a contribution to disaster knowledge and technology, as well as to the lessons learned from the practical outcome.
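
    A minimal sketch of the GPS-to-mesh lookup is given below, assuming scipy and coordinates already projected to the mesh's reference system; the mesh points and depths are invented stand-ins for the 2011 inundation data.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical mesh: projected easting/northing (m) with the simulated
# 2011 inundation depth (m) at each mesh point.
mesh_xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
depth_m = np.array([0.0, 1.2, 3.5, 6.8])
tree = cKDTree(mesh_xy)

def inundation_at(x, y):
    """Return the inundation depth at the mesh point nearest to the
    user's GPS fix (assumed already projected to the mesh CRS)."""
    _, idx = tree.query([x, y])
    return depth_m[idx]

print(inundation_at(90.0, 80.0))  # nearest mesh point is (100, 100) -> 6.8
```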

  19. Big data, big knowledge: big data for personalized healthcare.

    PubMed

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  20. Making a protein extract from plant pathogenic fungi for gel- and LC-based proteomics.

    PubMed

    Fernández, Raquel González; Redondo, Inmaculada; Jorrin-Novo, Jesus V

    2014-01-01

    Proteomic technologies have become a successful tool for providing relevant information on fungal biology. In the case of plant pathogenic fungi, this approach allows a deeper knowledge of the interaction and the biological cycle of the pathogen, as well as the identification of pathogenicity and virulence factors. These two elements open up new possibilities for crop disease diagnosis and environment-friendly crop protection. Phytopathogenic fungi, due to their particular cellular characteristics, can be considered recalcitrant biological material, which makes it difficult to obtain quality protein samples for proteomic analysis. This chapter focuses on protein extraction for gel- and LC-based proteomics, with specific protocols from our current research with Botrytis cinerea.

  1. The Hydrometallurgical Extraction and Recovery of High-Purity Silver

    NASA Astrophysics Data System (ADS)

    Hoffmann, James E.

    2012-06-01

    With the continuous reduction in the availability of extractive metallurgical curricula in colleges and universities, the concern has in part been: from where will the next generation of extractive metallurgists come? One objective of this article is to emphasize that extractive metallurgy is, in fact, one of many areas of chemical engineering technology. Thus, although the extractive metallurgist may have disappeared in name, the activity is alive and well, subsumed in the field of chemical engineering. One goal of this lecture is to demonstrate the applicability of chemical engineering principles to what is typically considered "the field of extractive metallurgy." Two processes will be described that have supplanted typical pyrometallurgical fire refining of precious metals, particularly silver. The origins of fire refining can be traced back to biblical times, with numerous references in the Old Testament: Ezekiel 22:20, "As men gather silver and bronze and iron and lead and tin into a furnace to blow the fire upon it in order to melt it"; Jeremiah 6:29, "The bellows blow fiercely; the lead is consumed by the fire; in vain the refining goes on"; and Malachi 3:2 (The Oxford Annotated Bible with the Apocrypha), "For he is like a refiner's fire." Many references to it will also be found in "De Re Metallica" and in Lazarus Ercker's 1574 manual "Treatise on Ores and Refining." Today, fire refining has been improved greatly by innovative furnace design, new fluxing technologies, and the improved use of oxygen; fundamentally, however, the process chemistry has not changed much in the last millennium. Illustrations of hydrometallurgical processing of silver-bearing inputs will be provided by the treatment of sulfated silver-bearing materials and chlorinated slimes. The first of these technologies will be described briefly as practiced by the Phelps Dodge Refining Corporation for several years. The second, the treatment of silver chloride-bearing inputs, will be described in detail to demonstrate how typical chemical engineering unit processes and unit operations have supplanted classic smelting and fire refining techniques. The Kennecott Copper Company, which has successfully operated a hydrometallurgical circuit for the recovery of high-purity silver from the wet chlorination residue of slimes, has permitted me to provide some operating information and results using the technology. Both Phelps Dodge and Kennecott should be recognized for their forward-looking attitude in undertaking the conversion of conceptual chemistry into successful, full-scale plants. The process as employed at Phelps Dodge is discussed at length in the reference (J.E. Hoffmann and B. Wesstrom: Hydrometallurgy, 1994, vol. 94, pp. 69-105).

  2. Information and Communication Technologies for the Dissemination of Clinical Practice Guidelines to Health Professionals: A Systematic Review.

    PubMed

    De Angelis, Gino; Davies, Barbara; King, Judy; McEwan, Jessica; Cavallo, Sabrina; Loew, Laurianne; Wells, George A; Brosseau, Lucie

    2016-11-30

    The transfer of research knowledge into clinical practice can be a continuous challenge for researchers. Information and communication technologies, such as websites and email, have emerged as popular tools for the dissemination of evidence to health professionals. The objective of this systematic review was to identify research on health professionals' perceived usability and practice behavior change associated with information and communication technologies used for the dissemination of clinical practice guidelines. We used a systematic approach to retrieve and extract data from relevant studies. We identified 2248 citations, of which 21 studies met the criteria for inclusion; 20 studies were randomized controlled trials, and 1 was a controlled clinical trial. The following information and communication technologies were evaluated: websites (5 studies), computer software (3 studies), Web-based workshops (2 studies), computerized decision support systems (2 studies), an electronic educational game (1 study), email (2 studies), and multifaceted interventions with at least one information and communication technology component (6 studies). Website studies demonstrated significant improvements in perceived usefulness and perceived ease of use, but not in knowledge, barrier reduction, or intention to use clinical practice guidelines. Computer software studies demonstrated significant improvements in perceived usefulness, but not in knowledge or skills. Web-based workshop and email studies demonstrated significant improvements in knowledge, perceived usefulness, and skills. An electronic educational game intervention demonstrated a significant improvement from baseline in knowledge after 12 and 24 weeks. Computerized decision support system studies demonstrated variable findings for improvement in skills. Multifaceted interventions demonstrated significant improvements in beliefs about capabilities, perceived usefulness, and intention to use clinical practice guidelines, but variable findings for improvements in skills. Most multifaceted studies demonstrated significant improvements in knowledge. The findings suggest that health professionals' perceived usability and practice behavior change vary by type of information and communication technology. Heterogeneity and the paucity of properly conducted studies did not allow a clear comparison between studies or a conclusion on the effectiveness of information and communication technologies as a knowledge translation strategy for the dissemination of clinical practice guidelines. ©Gino De Angelis, Barbara Davies, Judy King, Jessica McEwan, Sabrina Cavallo, Laurianne Loew, George A Wells, Lucie Brosseau. Originally published in JMIR Medical Education (http://mededu.jmir.org), 30.11.2016.

  3. A hybrid sales forecasting scheme by combining independent component analysis with K-means clustering and support vector regression.

    PubMed

    Lu, Chi-Jie; Chang, Chi-Chang

    2014-01-01

    Sales forecasting plays an important role in operating a business, since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then passed to the K-means algorithm, which clusters the sales data into several disjoint clusters. Finally, an SVR forecasting model is applied to each cluster to generate the final forecasting results. Experimental results on sales data from an information technology (IT) product agent show that the proposed scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.
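
    A minimal sketch of the three-stage scheme described above, using scikit-learn. The synthetic series, window size, component count, cluster count and SVR settings are illustrative assumptions rather than the paper's values, and the per-cluster fits are evaluated in-sample for brevity.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.cluster import KMeans
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        sales = rng.normal(100, 10, 500).cumsum()      # placeholder sales series

        # Build lagged windows: each row holds w consecutive observations.
        w = 12
        X = np.array([sales[i:i + w] for i in range(len(sales) - w)])
        y = sales[w:]                                  # one-step-ahead targets

        # 1) ICA extracts hidden components from the windowed sales data.
        features = FastICA(n_components=5, random_state=0).fit_transform(X)

        # 2) K-means groups the windows into disjoint clusters in feature space.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

        # 3) One SVR per cluster produces the final forecasts.
        preds = np.empty_like(y)
        for k in np.unique(labels):
            idx = labels == k
            preds[idx] = SVR(C=10.0).fit(features[idx], y[idx]).predict(features[idx])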

  4. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems

    PubMed Central

    DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Patel, Neal R; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846

  5. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when three-dimensional laser scanning technology is applied to deformation monitoring, an efficient method for datum feature extraction and deformation analysis based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
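
    A minimal sketch of the normal-vector step described above, assuming the k nearest neighbours (k = 20 here, an assumed value) define the local plane for each point: the kd-tree supplies the topological relation, and the smallest principal axis of each neighbourhood serves as the point normal.

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=20):
            tree = cKDTree(points)              # topological relation via kd-tree
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points)
            for i, nbrs in enumerate(idx):
                nbhd = points[nbrs] - points[nbrs].mean(axis=0)
                # Smallest right-singular vector = normal of the local plane fit.
                _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
                normals[i] = vt[-1]
            return normals

        cloud = np.random.rand(1000, 3)         # placeholder point cloud
        normals = estimate_normals(cloud)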

  6. A Hybrid Sales Forecasting Scheme by Combining Independent Component Analysis with K-Means Clustering and Support Vector Regression

    PubMed Central

    2014-01-01

    Sales forecasting plays an important role in operating a business, since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue in operating a business. This study proposes a hybrid sales forecasting scheme combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses ICA to extract hidden information from the observed sales data. The extracted features are then passed to the K-means algorithm, which clusters the sales data into several disjoint clusters. Finally, an SVR forecasting model is applied to each cluster to generate the final forecasting results. Experimental results on sales data from an information technology (IT) product agent show that the proposed scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting. PMID:25045738

  7. A review of electrostatic monitoring technology: The state of the art and future research directions

    NASA Astrophysics Data System (ADS)

    Wen, Zhenhua; Hou, Junxing; Atkin, Jason

    2017-10-01

    Electrostatic monitoring technology is a useful tool for monitoring and detecting component faults and degradation, which is necessary for system health management. It encompasses three key research areas: sensor technology; signal detection, processing and feature extraction; and verification experimentation. It has received considerable recent attention for condition monitoring due to its ability to provide warning information and non-obstructive measurements on-line. A number of papers in recent years have covered specific aspects of the technology, including sensor design optimization, sensor characteristic analysis, signal de-noising and practical applications of the technology. This paper provides a review of the recent research and of the development of electrostatic monitoring technology, with a primary emphasis on its application for the aero-engine gas path. The paper also presents a summary of some of the current applications of electrostatic monitoring technology in other industries, before concluding with a brief discussion of the current research situation and possible future challenges and research gaps in this field. The aim of this paper is to promote further research into this promising technology by increasing awareness of both the potential benefits of the technology and the current research gaps.

  8. Single molecule optical measurements of orientation and rotations of biological macromolecules

    PubMed Central

    Shroder, Deborah Y; Lippert, Lisa G; Goldman, Yale E

    2016-01-01

    The subdomains of macromolecules often undergo large orientation changes during their catalytic cycles that are essential for their activity. Tracking these rearrangements in real time opens a powerful window into the link between protein structure and functional output. Site-specific labeling of individual molecules with polarized optical probes and measuring their spatial orientation can give insight into the crucial conformational changes, dynamics, and fluctuations of macromolecules. Here we describe the range of single molecule optical technologies that can extract orientation information from these probes, we review the relevant types of probes and labeling techniques, and we highlight the advantages and disadvantages of these technologies for addressing specific inquiries. PMID:28192292

  9. Implementation of a New Traceability Process for Breast Milk Feeding.

    PubMed

    Daus, Mariana Y; Maydana, Thelma G; Rizzato Lede, Daniel A; Luna, Daniel R

    2018-01-01

    Many newborns in the neonatal intensive care unit are unable to feed themselves and receive human milk through enteric nutrition devices such as orogastric or nasogastric tubes. Mothers express their milk, and the nursing staff is responsible for its fractionation, storage and administration as prescribed by physicians. It is important to remember that human milk is a bodily fluid that carries a risk of disease transmission if mishandled. Health information technologies can enhance patient safety by avoiding preventable adverse events, and barcoding technology can track every step of milk handling, although many processes must be addressed to implement it. Our goal is to explain our planning and implementation process in an academic tertiary hospital.

  10. The Promise of Information and Communication Technology in Healthcare: Extracting Value From the Chaos.

    PubMed

    Mamlin, Burke W; Tierney, William M

    2016-01-01

    Healthcare is an information business with expanding use of information and communication technologies (ICTs). Current ICT tools are immature, but a brighter future looms. We examine 7 areas of ICT in healthcare: electronic health records (EHRs), health information exchange (HIE), patient portals, telemedicine, social media, mobile devices and wearable sensors and monitors, and privacy and security. In each of these areas, we examine the current status and future promise, highlighting how each might reach its promise. Steps to better EHRs include a universal programming interface, universal patient identifiers, improved documentation and improved data analysis. HIEs require federal subsidies for sustainability and support from EHR vendors, targeting seamless sharing of EHR data. Patient portals must bring patients into the EHR with better design and training, greater provider engagement and leveraging HIEs. Telemedicine needs sustainable payment models, clear rules of engagement, quality measures and monitoring. Social media needs consensus on rules of engagement for providers, better data mining tools and approaches to counter disinformation. Mobile and wearable devices benefit from a universal programming interface, improved infrastructure, more rigorous research and integration with EHRs and HIEs. Laws for privacy and security need updating to match current technologies, and data stewards should share information on breaches and standardize best practices. ICT tools are evolving quickly in healthcare and require a rational and well-funded national agenda for development, use and assessment. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.

  11. Improved efficiency of extraction of polycyclic aromatic hydrocarbons (PAHs) from the National Institute of Standards and Technology (NIST) Standard Reference Material Diesel Particulate Matter (SRM 2975) using accelerated solvent extraction.

    PubMed

    Masala, Silvia; Ahmed, Trifa; Bergvall, Christoffer; Westerholm, Roger

    2011-12-01

    The efficiency of extraction of polycyclic aromatic hydrocarbons (PAHs) with molecular masses of 252, 276, 278, 300, and 302 Da from standard reference material diesel particulate matter (SRM 2975) has been investigated using accelerated solvent extraction (ASE) with dichloromethane, toluene, methanol, and mixtures of toluene and methanol. Extraction of SRM 2975 using toluene/methanol (9:1, v/v) at maximum instrumental settings (200 °C, 20.7 MPa, and five extraction cycles) with 30-min extraction times resulted in the following elevations of the measured concentration when compared with the certified and reference concentrations reported by the National Institute of Standards and Technology (NIST): benzo[b]fluoranthene, 46%; benzo[k]fluoranthene, 137%; benzo[e]pyrene, 103%; benzo[a]pyrene, 1,570%; perylene, 37%; indeno[1,2,3-cd]pyrene, 41%; benzo[ghi]perylene, 163%; and coronene, 361%. The concentrations of the following PAHs were comparable to the reference values assigned by NIST: indeno[1,2,3-cd]fluoranthene, dibenz[a,h]anthracene, and picene. The measured concentration of dibenzo[a,e]pyrene was lower than the information value reported by NIST. The measured concentrations of other highly carcinogenic PAHs (dibenzo[a,l]pyrene, dibenzo[a,i]pyrene, and dibenzo[a,h]pyrene) in SRM 2975 are also reported. Comparison of measurements using the optimized ASE method and using conditions similar to those applied by NIST for the assignment of PAH concentrations in SRM 2975 indicated that the higher values obtained in the present study were associated with more complete extraction of PAHs from the diesel particulate material. Re-extraction of the particulate samples demonstrated that the deuterated internal standards were more readily recovered than the native PAHs, which may explain the lower values reported by NIST. The analytical results obtained in the study demonstrated that efficient extraction of PAHs from SRM 2975 is a critical requirement for the accurate determination of PAHs with high molecular masses in this standard reference material, and that optimization of the extraction conditions is essential to avoid underestimation of PAH concentrations. This requirement is especially relevant to the human carcinogen benzo[a]pyrene, which is commonly used as an indicator of the carcinogenic risk presented by PAH mixtures.

  12. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    PubMed

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but until now only the basic ICA method (i.e., the temporal ICA model) had been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results on real sales data show that the forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
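
    The temporal/spatial distinction largely reduces to which axis of the data matrix is treated as the signal dimension. A rough sketch under assumed data shapes, with scikit-learn's FastICA standing in for the paper's ICA variants (the spatiotemporal combination is omitted):

        import numpy as np
        from sklearn.decomposition import FastICA

        sales = np.random.rand(200, 30)   # 200 time steps x 30 branches (placeholder)

        # Temporal ICA: independent time courses mixed across branches.
        temporal = FastICA(n_components=5, random_state=0).fit_transform(sales)    # (200, 5)

        # Spatial ICA: the same decomposition applied to the transposed matrix,
        # yielding independent branch patterns instead of time courses.
        spatial = FastICA(n_components=5, random_state=0).fit_transform(sales.T)   # (30, 5)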

  13. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    PubMed Central

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but until now only the basic ICA method (i.e., the temporal ICA model) had been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results on real sales data show that the forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  14. [Studies on preparative technology and quantitative determination for extracts of total saponin in root of Panax japonicus].

    PubMed

    He, Yu-min; Lu, Ke-ming; Yuan, Ding; Zhang, Chang-cheng

    2008-11-01

    To explore the optimal extraction and purification conditions for the total saponins in the root of Panax japonicus (RPJ), and to establish quality control methods. An L16(4^5) orthogonal test was designed, with the extraction rate of total saponins as the index, to determine a rational extraction process, and water-saturated n-butanol extraction followed by acetone precipitation was applied to purify the alcohol extract of RPJ. Total saponins were determined by spectrophotometry, and the triterpenoid sapogenin oleanolic acid by HPLC. The optimal extraction conditions for RPJ were as follows: the material was pulverized, soaked in 10 volumes of 60% aqueous ethanol as the extraction solvent, and refluxed three times for 3 h each. The purification process used water-saturated n-butanol as the extractant (three extractions) and acetone as the precipitant (4-5 volumes), which yielded quality products. The content of total saponins reached 83.48%, and that of oleanolic acid 38.30%. The optimized preparative technology is stable, convenient and practical. The extraction rate of RPJ was high and reproducible with this technology, which provides new evidence for industrialized production of the plant and the development of new drugs.

  15. From the outside looking in: developing snapshot imaging spectro-polarimeters

    NASA Astrophysics Data System (ADS)

    Dereniak, E. L.

    2014-09-01

    The information from a scene is critical in autonomous optical systems, and the variety of information that can be extracted is determined by the application. To characterize a target, the information of interest captured is spectral (λ), polarization (S) and distance (Z). There are many technologies that capture this information in different ways to identify the target. In many fields, such as mining and military reconnaissance, there is a need for rapid data acquisition and, for this reason, a relatively new method has been devised that can obtain all this information simultaneously. The need for snapshot acquisition of data without moving parts was the goal of the research. This paper reviews the chain of novel research instruments that were sequentially developed to capture spectral and polarization information of a scene in a snapshot or flash. The distance (Z) is yet to be integrated.

  16. Extraction of latent images from printed media

    NASA Astrophysics Data System (ADS)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank to obtain feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is versatility: it can extract latent images created by different texture variations. Experimental results are given showing that the method outperforms another known system for latent image extraction.
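
    A minimal sketch of the feature-image stage, assuming a fixed grid of frequencies and orientations rather than the paper's adaptive construction; each Gabor magnitude response becomes one feature image.

        import numpy as np
        from skimage import data
        from skimage.filters import gabor

        image = data.camera().astype(float) / 255.0   # placeholder scanned document

        feature_images = []
        for frequency in (0.1, 0.2, 0.4):
            for theta in np.linspace(0, np.pi, 4, endpoint=False):
                real, imag = gabor(image, frequency=frequency, theta=theta)
                feature_images.append(np.hypot(real, imag))   # magnitude response

        features = np.stack(feature_images)           # one feature image per filter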

  17. Technology transfer opportunities: patent license: electrochemical technique for introducing and redistributing ionic species into the earth

    USGS Publications Warehouse

    Leinz, Reinhard

    1996-01-01

    Scientists at the U.S. Geological Survey have expanded applications of the Chim electrode, a technology used to perform partial geochemical extractions from soils. Recent applications of the improved electrode technology show that geochemical extraction efficiencies can be improved by two orders of magnitude or better, to about 30%.

  18. [Study on molecular recognition technology in active constituents extracted and isolated from Aconitum pendulum].

    PubMed

    Ma, Xue-Qin; Li, Guo-Shan; Fu, Xue-Yan; Ma, Jing-Zu

    2011-03-01

    To investigate cyclodextrin (CD) molecular recognition technology as applied to the extraction and isolation of active constituents from the traditional Chinese medicine Aconitum pendulum. The inclusion constant and formation probability of the inclusion complex of Aconitum pendulum with β-CD were calculated by UV spectrometry. The active constituents of Aconitum pendulum were extracted and isolated by molecular recognition technology, and the inclusion complex was identified by UV. The chemical constituents of Aconitum pendulum and of the inclusion complex were determined by HPLC. The analgesic effect of the inclusion complex was investigated by intraperitoneal injection of acetic acid in rats. The inclusion complex was identified and confirmed by UV spectrometry; its chemical composition was simple, the content of active constituents increased significantly, and its analgesic effect was good. Molecular recognition technology can be used for extracting and isolating the active constituents of Aconitum pendulum, and its effects are evident.

  19. Industrial application of semantic process mining

    NASA Astrophysics Data System (ADS)

    Espen Ingvaldsen, Jon; Atle Gulla, Jon

    2012-05-01

    Process mining relates to the extraction of non-trivial and useful information from information system event logs. It is a new research discipline that has evolved significantly since the early work on idealistic process logs. Over the last years, process mining prototypes have incorporated elements from semantics and data mining and targeted visualisation techniques that are more user-friendly to business experts and process owners. In this article, we present a framework for evaluating different aspects of enterprise process flows and address practical challenges of state-of-the-art industrial process mining. We also explore the inherent strengths of the technology for more efficient process optimisation.
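
    Process mining in miniature: a minimal sketch of the directly-follows relation, the building block behind most process discovery algorithms, computed here from an assumed toy event log.

        from collections import Counter

        # Each trace is the ordered activity sequence of one case (process instance).
        event_log = [
            ["register", "check", "approve", "archive"],
            ["register", "check", "reject", "archive"],
            ["register", "check", "approve", "archive"],
        ]

        dfg = Counter()
        for trace in event_log:
            for a, b in zip(trace, trace[1:]):
                dfg[(a, b)] += 1              # count: a directly followed by b

        for (a, b), n in sorted(dfg.items(), key=lambda kv: -kv[1]):
            print(f"{a} -> {b}: {n}")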

  20. Protocols for the Investigation of Information Processing in Human Assessment of Fundamental Movement Skills.

    PubMed

    Ward, Brodie J; Thornton, Ashleigh; Lay, Brendan; Rosenberg, Michael

    2017-01-01

    Fundamental movement skill (FMS) assessment remains an important tool in classifying individuals' level of FMS proficiency. The collection of FMS performances for assessment and monitoring has remained unchanged over the last few decades, but new motion capture technologies offer opportunities to automate this process. To achieve this, a greater understanding of the human process of movement skill assessment is required. The authors present the rationale and protocols of a project in which they aim to investigate the visual search patterns and information extraction employed by human assessors during FMS assessment, as well as the implementation of the Kinect system for FMS capture.

  1. FPGA-based real time processing of the Plenoptic Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Marín, Y.; Díaz, J. J.; Piqueras, J.; García-Jiménez, J.; Rodríguez-Ramos, J. M.

    The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information simultaneously from different points of view, and is capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. The advantages of this sensor are presented elsewhere at this conference (José M. Rodríguez-Ramos et al). This paper concentrates on the processing required for pupil plane phase recovery, and on its computation in real time using FPGAs (Field Programmable Gate Arrays). This technology eases the implementation of massively parallel processing and allows the system to be tailored to the requirements while maintaining flexibility, speed and cost figures.

  2. Extraction of High Molecular Weight DNA from Fungal Rust Spores for Long Read Sequencing.

    PubMed

    Schwessinger, Benjamin; Rathjen, John P

    2017-01-01

    Wheat rust fungi are complex organisms with a complete life cycle that involves two different host plants and five different spore types. During the asexual infection cycle on wheat, rusts produce massive amounts of dikaryotic urediniospores. These spores are dikaryotic (two nuclei) with each nucleus containing one haploid genome. This dikaryotic state is likely to contribute to their evolutionary success, making them some of the major wheat pathogens globally. Despite this, most published wheat rust genomes are highly fragmented and contain very little haplotype-specific sequence information. Current long-read sequencing technologies hold great promise to provide more contiguous and haplotype-phased genome assemblies. Long reads are able to span repetitive regions and phase structural differences between the haplomes. This increased genome resolution enables the identification of complex loci and the study of genome evolution beyond simple nucleotide polymorphisms. Long-read technologies require pure high molecular weight DNA as an input for sequencing. Here, we describe a DNA extraction protocol for rust spores that yields pure double-stranded DNA molecules with molecular weight of >50 kilo-base pairs (kbp). The isolated DNA is of sufficient purity for PacBio long-read sequencing, but may require additional purification for other sequencing technologies such as Nanopore and 10× Genomics.

  3. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
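
    A minimal sketch of a pluggable foreground-detection stage of the kind the framework integrates, here using OpenCV's MOG2 background subtractor on an assumed video file; any detector that yields a per-pixel mask could be substituted, and the per-frame density is only a toy stand-in for the paper's ensemble features.

        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

        cap = cv2.VideoCapture("surveillance.mp4")    # assumed input video path
        densities = []                                # toy per-frame crowd feature
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)            # per-pixel foreground mask
            densities.append((mask > 0).mean())       # fraction of foreground pixels
        cap.release()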

  4. Research on key technology of prognostic and health management for autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi

    2017-12-01

    Autonomous Underwater Vehicles (AUVs) are untethered, self-propelled underwater robots. With a wide range of activity, they can travel thousands of kilometers. Because of their advantages of wide range, good maneuverability, safety and intelligence, they have become important tools for various underwater tasks. How to improve the diagnostic accuracy for faults in an AUV's electrical system, and how to use that information to repair AUVs, are a focus of navies worldwide. In turn, ensuring safe and reliable operation of the system is of great significance for improving AUV sailing performance. To solve these problems, this paper investigates prognostic and health management (PHM) technology as applied to AUVs, and proposes an overall framework together with its key technologies, such as data acquisition, feature extraction, fault diagnosis and failure prediction.

  5. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improving resolution of multi-source satellite imagery in the visible, multispectral and hyperspectral domains, high resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of classifier generation for typical vehicle objects, nonparametric density estimation, mean shift segmentation, a multi-scale corner detection algorithm, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
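
    A minimal sketch of the mean shift segmentation step named above, using OpenCV's pyramid mean shift filtering to flatten a high-resolution image into homogeneous regions before object extraction; the file path and window radii are assumed values.

        import cv2

        image = cv2.imread("scene.tif")               # assumed input image
        # sp: spatial window radius, sr: colour window radius.
        segmented = cv2.pyrMeanShiftFiltering(image, sp=21, sr=30)
        cv2.imwrite("segmented.png", segmented)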

  6. [Application progress on near infrared spectroscopy in quality control and process monitoring of traditional Chinese medicine].

    PubMed

    Li, Wenlong; Qu, Haibin

    2017-01-25

    The industry of traditional Chinese medicine (TCM) encounters problems such as quality fluctuation of raw materials and unstandardized production processes. Near infrared (NIR) spectroscopy is widely used in the quality control of TCM because of its abundant information and its fast and nondestructive character. The main applications include quantitative analysis of Chinese medicinal materials, intermediates and Chinese patent medicines; authentication of TCM species, origins and manufacturers; and monitoring and control of the extraction, alcohol precipitation, column chromatography and blending processes. This article reviews progress in the application of NIR spectroscopy in the TCM field. In view of the problems existing in its application, the article proposes that the standardization of NIR analysis methods should be developed according to the specific characteristics of TCM, which will promote the application of NIR technology in the TCM industry.

  7. [Application of regular expression in extracting key information from Chinese medicine literatures about re-evaluation of post-marketing surveillance].

    PubMed

    Wang, Zhifei; Xie, Yanming; Wang, Yongyan

    2011-10-01

    Computerized extraction of information from the Chinese medicine literature is more convenient than hand searching: it can simplify the search process and improve accuracy. Among the many computerized extraction methods in increasing use, regular expressions are particularly efficient for extracting useful information in research. This article focuses on applying regular expressions to extract information from the Chinese medicine literature. Two practical examples are reported, using regular expressions to extract "case number" (a non-terminology field) and "efficacy rate" (with subgroups for related information identification), exploring how to extract information from the Chinese medicine literature by means of this method.
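
    A hedged illustration of the approach on an English rendering: regular expressions pulling a case count and an efficacy rate out of abstract text. The patterns and the sample sentence are assumptions for illustration, not the article's own rules.

        import re

        text = "A total of 126 cases were enrolled; the total efficacy rate was 91.3%."

        case_number = re.search(r"(\d+)\s+cases", text)
        efficacy_rate = re.search(r"efficacy rate\s+was\s+([\d.]+)%", text)

        print(case_number.group(1))     # '126'
        print(efficacy_rate.group(1))   # '91.3'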

  8. Microwave-Assisted Extraction for Microalgae: From Biofuels to Biorefinery

    PubMed Central

    Pandhal, Jagroop

    2018-01-01

    The commercial reality of bioactive compounds and oil production from microalgal species is constrained by the high cost of production. Downstream processing, which includes harvesting and extraction, can account for 70–80% of the total cost of production. Consequently, from an economic perspective extraction technologies need to be improved. Microalgal cells are difficult to disrupt due to polymers within their cell wall such as algaenan and sporopollenin. Consequently, solvents and disruption devices are required to obtain products of interest from within the cells. Conventional techniques used for cell disruption and extraction are expensive and are often hindered by low efficiencies. Microwave-assisted extraction offers a possibility for extraction of biochemical components including lipids, pigments, carbohydrates, vitamins and proteins, individually and as part of a biorefinery. Microwave technology has advanced since its use in the 1970s. It can cut down working times and result in higher yields and purity of products. In this review, the ability and challenges in using microwave technology are discussed for the extraction of bioactive products individually and as part of a biorefinery approach. PMID:29462888

  9. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    In order to ensure safety, long term stability and quality control in modern tunneling operations, the acquisition of geotechnical information about encountered rock conditions and detailed installed-support information is required. The limited space and time in an operational tunnel environment make acquiring data challenging; laser scanning in a tunneling environment, however, shows great potential. The surveying and mapping of tunnels are crucial for optimal use after construction and in routine inspections. Most applications focus on the geometric information of the tunnels extracted from the laser scanning data, and two kinds are widely discussed: deformation measurement and feature extraction. Traditional deformation measurement in an underground environment is performed with a series of permanent control points installed around the profile of an excavation, which is unsuitable for a global consideration of the investigated area. Using laser scanning for deformation analysis provides many benefits compared with traditional monitoring techniques. The change in profile can be fully characterized, and areas of anomalous movement can easily be separated from overall trends owing to the high density of the point cloud data. Furthermore, monitoring with a laser scanner does not require the permanent installation of control points, so monitoring can be completed more quickly after excavation; the scanning is also non-contact, hence no damage is done during the installation of temporary control points. The main drawback of using laser scanning for deformation monitoring is that the point accuracy of the original data is generally of the same magnitude as the smallest deformations to be measured; to overcome this, statistical techniques and three-dimensional image processing techniques for the point clouds must be developed. To control over- and underbreak detection safely, effectively and easily, and to overcome the difficulties of roadway data collection, this paper presents a new method for continuous section extraction and over-underbreak detection based on 3D laser scanning technology and image processing. The method is divided into three steps: Canny edge detection with local axis fitting, continuous section extraction, and over-underbreak detection per section. First, after Canny edge detection, least-squares curve fitting is used to fit the local axis. Then the attitude of the local roadway is adjusted so that the roadway axis is consistent with the direction of the extraction reference, and sections are extracted along the reference direction. Finally, the actual cross-section is compared with the design cross-section to complete overbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a great advantage in computing cost and ensures that cross-sections are intercepted orthogonally.
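
    A minimal sketch of the first step described above, assuming a grayscale section image: Canny edge detection followed by a least-squares polynomial fit to the edge points as a stand-in for the local axis fit. The thresholds, file path and polynomial degree are assumed values.

        import cv2
        import numpy as np

        image = cv2.imread("tunnel_scan.png", cv2.IMREAD_GRAYSCALE)  # assumed path
        edges = cv2.Canny(image, threshold1=50, threshold2=150)

        ys, xs = np.nonzero(edges)              # edge pixel coordinates
        coeffs = np.polyfit(xs, ys, deg=3)      # least-squares local axis fit
        axis = np.poly1d(coeffs)                # evaluable axis curve y(x)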

  10. A Framework for Land Cover Classification Using Discrete Return LiDAR Data: Adopting Pseudo-Waveform and Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Jung, Jinha; Pasolli, Edoardo; Prasad, Saurabh; Tilton, James C.; Crawford, Melba M.

    2014-01-01

    Acquiring current, accurate land-use information is critical for monitoring and understanding the impact of anthropogenic activities on natural environments. Remote sensing technologies are of increasing importance because of their capability to acquire information for large areas in a timely manner, enabling decision makers to be more effective in complex environments. Although optical imagery has been demonstrated to be successful for land cover classification, active sensors, such as light detection and ranging (LiDAR), have distinct capabilities that can be exploited to improve classification results. However, LiDAR data have not been fully exploited for land cover classification. Moreover, spatial-spectral classification has recently gained significant attention, since classification accuracy can be improved by extracting additional information from the neighboring pixels. Although spatial information has been widely used for spectral data, less attention has been given to LiDAR data. In this work, a new framework for land cover classification using discrete return LiDAR data is proposed. Pseudo-waveforms are generated from the LiDAR data and processed by hierarchical segmentation. Spatial features are extracted in a region-based way using a new unsupervised strategy for multiple pruning of the segmentation hierarchy. The proposed framework is validated experimentally on a real dataset acquired in an urban area. Better classification results are exhibited by the proposed framework compared to the cases in which basic LiDAR products such as the digital surface model and intensity image are used. Moreover, the proposed region-based feature extraction strategy results in improved classification accuracies in comparison with a more traditional window-based approach.
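
    A minimal sketch of pseudo-waveform generation from discrete returns, assuming the returns of one grid cell: elevations are binned into a vertical histogram and smoothed with a Gaussian to approximate a full-waveform profile. The bin size and smoothing width are assumed values.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def pseudo_waveform(elevations, bin_size=0.5, sigma=1.0):
            edges = np.arange(elevations.min(), elevations.max() + bin_size, bin_size)
            counts, _ = np.histogram(elevations, bins=edges)
            return gaussian_filter1d(counts.astype(float), sigma=sigma)

        # Elevations of the discrete returns in one grid cell (placeholder values).
        cell_returns = np.array([0.4, 2.1, 2.2, 2.3, 7.8, 8.0, 8.1, 8.3])
        waveform = pseudo_waveform(cell_returns)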

  11. Remote sensing monitoring and driving force analysis to forest and greenbelt in Zhuhai

    NASA Astrophysics Data System (ADS)

    Qiao, Yuliang

    As an important city in the southern part of the Chu Chiang Delta, Zhuhai is one of the four special economic zones that were among the earliest in China to open to the outside world. With pure, fresh air and tree-shaded streets, Zhuhai is a famous beach port city near the mountains and by the sea. Building on its Garden City status, the government of Zhuhai decided in 2011 to build a National Forest City, which first requires a short-term understanding of the state of greenbelt in Zhuhai. Traditional methods of greenbelt investigation combine field surveying with statistics; their efficiency is low and their results are not very objective because of human influence. With the advent of earth-observation information technologies such as remote sensing, and especially the launch of many high-resolution remote sensing satellites in the past few years, various kinds of urban greenbelt information extraction can be carried out using remote sensing technology, and dynamic monitoring of the spatial pattern evolution of forest and greenbelt in Zhuhai can be achieved by combining remote sensing with GIS technology. Taking Landsat5 TM data from 1995, Landsat7 ETM+ data from 2002, and CCD and HR data from CBERS-02B in 2009 as the main information sources, this research first monitors the dynamic change of forest and greenbelt in Zhuhai by remote sensing, combining a vegetation coverage index with three different information extraction methods, and then analyses the driving forces behind the observed changes. The results show that the forest area in Zhuhai decreased from 1995 to 2002 and increased from 2002 to 2009; overall, the forest area shows a small net decrease from 1995 to 2009. Comparison of natural and artificial driving forces shows that artificial driving forces are the leading factor in the change of forest and greenbelt in Zhuhai. The research results provide a timely and reliable scientific basis for the Zhuhai government in building a National Forest City. Keywords: forest and greenbelt; remote sensing; dynamic monitoring; driving force; vegetation coverage
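
    A minimal sketch of the kind of vegetation coverage index used above: NDVI computed from the red and near-infrared bands (Landsat TM bands 3 and 4), with placeholder reflectance arrays and an assumed greenbelt threshold.

        import numpy as np

        red = np.random.rand(512, 512)    # TM band 3 reflectance (placeholder)
        nir = np.random.rand(512, 512)    # TM band 4 reflectance (placeholder)

        ndvi = (nir - red) / (nir + red + 1e-9)   # guard against division by zero
        vegetation_mask = ndvi > 0.3              # assumed greenbelt threshold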

  12. Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers.

    PubMed

    Bell, Winnie; Colaiezzi, Brooke A; Prata, Cathleen S; Coates, Jennifer C

    2017-11-01

    Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability to improve the quality and time required to collect dietary data in LICs. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including requirements for respondent literacy, battery life, requirements for connectivity, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few technologies reviewed met all the criteria, exhibiting both practical constraints and a lack of demonstrated feasibility for use in LICs, particularly for large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user.

  13. Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers

    PubMed Central

    Bell, Winnie; Colaiezzi, Brooke A; Prata, Cathleen S

    2017-01-01

    Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability to improve the quality and time required to collect dietary data in LICs. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including requirements for respondent literacy, battery life, requirements for connectivity, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few technologies reviewed met all the criteria, exhibiting both practical constraints and a lack of demonstrated feasibility for use in LICs, particularly for large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user. PMID:29141974

  14. Challenges in Managing Information Extraction

    ERIC Educational Resources Information Center

    Shen, Warren H.

    2009-01-01

    This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…

  15. Laser-Induced Breakdown Spectroscopy for Rapid Discrimination of Heavy-Metal-Contaminated Seafood Tegillarca granosa

    PubMed Central

    Ji, Guoli; Ye, Pengchao; Shi, Yijian; Yuan, Leiming; Chen, Xiaojing; Yuan, Mingshun; Zhu, Dehua; Chen, Xi; Hu, Xinyu; Jiang, Jing

    2017-01-01

    This study attempted to distinguish Tegillarca granosa samples artificially contaminated with three toxic heavy metals, zinc (Zn), cadmium (Cd), and lead (Pb), using laser-induced breakdown spectroscopy (LIBS) technology and pattern recognition methods. The measured spectra were first processed by a wavelet transform algorithm (WTA), and the resulting characteristic information was then selected by an information gain algorithm (IGA). The 30 variables obtained were used as input variables for three classifiers: partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), and random forest (RF), among which the RF model exhibited the best performance, with 93.3% discrimination accuracy. In addition, the extracted characteristic information was used to reconstruct the original spectra by inverse WTA, and the attribution of the reconstructed spectra was discussed. This work indicates that healthy shellfish samples of Tegillarca granosa can be distinguished from toxic heavy-metal-contaminated ones by pattern recognition analysis combined with LIBS technology, which requires only minimal pretreatment. PMID:29149053
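
    A minimal sketch of the spectral pipeline described above: a discrete wavelet transform compresses each spectrum into coefficients, and a random forest classifies the samples. The synthetic spectra, wavelet choice and decomposition level are assumptions, and the information-gain selection step is omitted.

        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(120, 2048))    # placeholder LIBS spectra
        labels = rng.integers(0, 4, size=120)     # healthy / Zn / Cd / Pb classes

        def wavelet_features(spectrum):
            coeffs = pywt.wavedec(spectrum, "db4", level=4)   # WTA stage
            return np.concatenate(coeffs)

        X = np.array([wavelet_features(s) for s in spectra])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)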

  16. Prediction of villages at risk for filariasis transmission in the Nile Delta using remote sensing and geographic information system technologies.

    PubMed

    Hassan, A N; Beck, L R; Dister, S

    1998-04-01

    Remote sensing and geographic information system (GIS) technologies were used to discriminate between 130 villages in the Nile Delta at high and low risk for filariasis, as defined by microfilarial prevalence. Landsat Thematic Mapper (TM) data were digitally processed to generate a map of landcover as well as spectral indices such as NDVI and a moisture index. A Tasseled Cap transformation was also carried out on the TM data, producing three more indices: brightness, greenness and wetness. GIS functions were used to extract information on landcover and spectral indices within one-km buffers around the study villages. The relationship between satellite data and prevalence was investigated using discriminant analysis. The analysis indicated that the most important landscape elements associated with prevalence were water and marginal vegetation, while wetness and the moisture index were the most important indices. Discriminant functions generated for these variables were able to correctly predict 80% and 74% of high and low prevalence villages, respectively, with an overall accuracy of 77%. The present approach provides a promising tool for regional filariasis surveillance and helps direct control efforts.

  17. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and finally (3) summarize a few of the results.

  18. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and finally (3) summarize a few of the results.

  19. COGNIS TERRAMET® LEAD EXTRACTION PROCESS; INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This report documents an evaluation of lead removal from the sands and fines fractions of contaminated soils by the COGNIS TERRAMET® lead extraction process (COGNIS process). The evaluation was performed under the U.S. Environmental Protection Agency's Superfund Innovative Technology Evaluation (SITE) Program.

  20. SOIL VAPOR EXTRACTION TECHNOLOGY: REFERENCE HANDBOOK

    EPA Science Inventory

    Soil vapor extraction (SVE) systems are being used in increasing numbers because of the many advantages these systems hold over other soil treatment technologies. SVE systems appear to be simple in design and operation, yet the fundamentals governing subsurface vapor transport are...

  1. [Nasal submicron emulsion of Scutellariae Radix extract preparation technology research based on phase transfer of solute technology].

    PubMed

    Shi, Ya-jun; Shi, Jun-hui; Chen, Shi-bin; Yang, Ming

    2015-07-01

    Based on the demand for high drug loading in nasal drug delivery, and exploiting the unique phase transfer of solute technique, this study integrated the phospholipid complex preparation and the submicron emulsion molding process for Scutellariae Radix extract, obtaining a submicron emulsion of the extract with high drug loading. In the study of drug dispersion methods, with the uniformity of drug dispersion as the evaluation index, traditional mixing, grinding, homogenization and solute phase transfer were investigated, and solute phase transfer technology was finally adopted. With the new technology, the drug loading capacity reached 1.33% (phospholipid complex, 4%), a significant improvement. The method and timing of the solute transfer were studied as follows: the oil phase was added when 30% of the volume of the phospholipid complex anhydrous ethanol solution remained, and the solute phase transfer was completed as recycling of the anhydrous ethanol continued. After the drug had dissolved into the oil phase, the preparation technology of the primary emulsion was determined with emulsion droplet morphology as the evaluation index. With the particle size of the submicron emulsion, the polydispersity index (PDI) and stability parameters as evaluation indices, orthogonal methodology was adopted to optimize the submicron emulsion formulation and the main factors of the high-pressure homogenization process. The optimized preparation technology for the Scutellariae Radix extract nasal submicron emulsion is practical and stable.

  2. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    PubMed

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

    In recent years, earthquakes have occurred frequently all over the world, causing huge casualties and economic losses. It is therefore urgent to obtain seismic intensity maps in a timely manner, so as to assess the distribution of the disaster and support rapid earthquake relief. Compared with traditional methods of drawing seismic intensity maps, which require extensive field investigation in the earthquake area or depend heavily on empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information System (GIS) provide a fast and economical way to recognize seismic intensity automatically. Through the integrated application of RS and GIS, this paper proposes an RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract information on damage caused by the earthquake, and GIS is applied to manage and display the seismic intensity data. The case study of the Wenchuan Ms8.0 earthquake in China shows that information on seismic intensity can be extracted automatically from remotely sensed images soon after an earthquake occurs, and that the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  3. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign.

  4. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    PubMed Central

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  5. Understanding Unintended Consequences and Health Information Technology:. Contribution from the IMIA Organizational and Social Issues Working Group.

    PubMed

    Kuziemsky, C E; Randell, R; Borycki, E M

    2016-11-10

    No framework exists to identify and study unintended consequences (UICs) with a focus on organizational and social issues (OSIs). To address this shortcoming, we conducted a literature review to develop a framework for considering UICs and health information technology (HIT) from the perspective of OSIs. A literature review was conducted for the period 2000-2015 using the search terms "unintended consequences" and "health information technology". 67 papers were screened, of which 18 met inclusion criteria. Data extraction focused on the types of technologies studied, the types of UICs identified, and the methods of data collection and analysis used. A thematic analysis was used to identify themes related to UICs. We identified two overarching themes: the first was the definitions and terminology with which people classify and discuss UICs; the second was OSIs and UICs. For the OSI theme, we also identified four sub-themes: process change and evolution, individual-collaborative interchange, context of use, and approaches to model, study, and understand UICs. While there is a wide body of research on UICs, there is a lack of overall consensus on how they should be classified and reported, limiting our ability to understand the implications of UICs and how to manage them. More mixed-methods research and better proactive identification of UICs remain priorities. Our findings and framework of OSI considerations for studying UICs and HIT extend existing work on HIT and UICs by focusing on organizational and social issues.

  6. Application of Mls Data to the Assessment of Safety-Related Features in the Surrounding Area of Automatically Detected Pedestrian Crossings

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Sánchez-Rodríguez, A.; González-deSantos, L. M.

    2018-05-01

    During the last few years, there has been huge methodological development regarding the automatic processing of 3D point cloud data acquired by both terrestrial and aerial mobile mapping systems, motivated by improvements in surveying technologies and hardware performance. This paper presents a methodology that first extracts geometric and semantic information on the road markings within the surveyed area from Mobile Laser Scanning (MLS) data, and then employs it to isolate street areas where pedestrian crossings are found and, therefore, pedestrians are more likely to cross the road. Different safety-related features can then be extracted to offer information about the adequacy of the pedestrian crossing regarding its safety, which can be displayed in a Geographical Information System (GIS) layer. These features are defined in four processing modules: accessibility analysis, traffic light classification, traffic sign classification, and visibility analysis. The proposed methodology was validated in two different cities in the northwest of Spain, obtaining both quantitative and qualitative results for pedestrian crossing classification and for each processing module of the safety assessment on pedestrian crossing environments.

  7. Information extraction and knowledge graph construction from geoscience literature

    NASA Astrophysics Data System (ADS)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen

    2018-03-01

    Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there is limited work on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining generic and geology terms from geology dictionaries to train the Chinese word segmentation rules of a Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to obtain a corpus of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and selected chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of the key information in an unstructured document. This study demonstrates the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
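
    A minimal sketch of the last two steps, assuming segmentation and stop-word removal have already produced a list of content-words (the tokens below are hypothetical stand-ins for that output): adjacent pairs are counted as bigrams and loaded into a networkx graph whose edge weights record co-occurrence frequency.

        from collections import Counter
        import networkx as nx

        # Hypothetical content-words, standing in for the output of CRF-based
        # Chinese word segmentation followed by stop-word removal.
        tokens = ["granite", "intrusion", "gold", "mineralization",
                  "granite", "gold", "deposit", "gold", "mineralization"]

        # Adjacent content-word pairs (bigrams) as a simple statistical proxy
        # for semantic links between terms.
        bigrams = Counter(zip(tokens, tokens[1:]))

        # Knowledge graph: content-words as nodes, bigram counts as edge weights.
        G = nx.Graph()
        for (w1, w2), n in bigrams.items():
            G.add_edge(w1, w2, weight=n)

        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")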

  8. Multiplexed Sequence Encoding: A Framework for DNA Communication

    PubMed Central

    Zakeri, Bijan; Carr, Peter A.; Lu, Timothy K.

    2016-01-01

    Synthetic DNA has great propensity for efficiently and stably storing non-biological information. With DNA writing and reading technologies rapidly advancing, new applications for synthetic DNA are emerging in data storage and communication. Traditionally, DNA communication has focused on the encoding and transfer of complete sets of information. Here, we explore the use of DNA for the communication of short messages that are fragmented across multiple distinct DNA molecules. We identified three pivotal points in a communication—data encoding, data transfer & data extraction—and developed novel tools to enable communication via molecules of DNA. To address data encoding, we designed DNA-based individualized keyboards (iKeys) to convert plaintext into DNA, while reducing the occurrence of DNA homopolymers to improve synthesis and sequencing processes. To address data transfer, we implemented a secret-sharing system—Multiplexed Sequence Encoding (MuSE)—that conceals messages between multiple distinct DNA molecules, requiring a combination key to reveal messages. To address data extraction, we achieved the first instance of chromatogram patterning through multiplexed sequencing, thereby enabling a new method for data extraction. We envision these approaches will enable more widespread communication of information via DNA. PMID:27050646

  9. a Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images when true information is lost to blocking clouds and missing data stripes. Because water is continuously distributed under natural conditions, this paper proposes a new water body extraction method based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of disturbance from clouds and missing data stripes were simulated, and water information was extracted from the simulated images using global histogram matching, local histogram matching, and the probability-based statistical method. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.
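
    Global histogram matching, one of the baselines compared above, can be sketched in a few lines of numpy (a generic implementation, not the authors' code): the source image's grey levels are remapped so that its cumulative distribution follows that of the reference image.

        import numpy as np

        def histogram_match(source, reference):
            """Remap `source` grey levels so its CDF follows `reference`."""
            s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                             return_inverse=True,
                                             return_counts=True)
            r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)

            s_cdf = np.cumsum(s_cnt) / source.size
            r_cdf = np.cumsum(r_cnt) / reference.size

            # For each source level, take the reference level whose cumulative
            # probability is closest.
            matched = np.interp(s_cdf, r_cdf, r_vals)
            return matched[s_idx].reshape(source.shape)

        src = np.random.rand(64, 64) ** 2   # skewed, stands in for a damaged band
        ref = np.random.rand(64, 64)        # reference distribution
        out = histogram_match(src, ref)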

  10. Can mobile technology improve response times of junior doctors to urgent out-of-hours calls? A prospective observational study.

    PubMed

    Herrod, P J J; Barclay, C; Blakey, J D

    2014-04-01

    The Hospital at Night system has been widely adopted to manage Out-of-Hours workload. However, it has the potential to introduce delays and corruption of information. The introduction of newer technologies to replace landlines, pagers and paper may ameliorate these issues. To establish if the introduction of a Hospital at Night system supported by a wireless taskflow system affected the escalation of high Early Warning Scores (EWSs) to medical attention, and the time taken to medical review. Prospective 'pre and post' observational study in a teaching hospital in the UK. Review of observation charts and medical records, and data extraction from the electronic taskflow system. The implementation of a technology-supported Hospital at Night system was associated with a significant decrease in time to documentation of initial review in those who were reviewed. However, there was no change in the proportion of those with a high EWS that were reviewed, and throughout the study a majority of patients with high EWSs were not reviewed in accordance with guidelines. Introduction of a Hospital at Night system supported by mobile technology appeared to improve the transfer of information, but did not affect the nursing decision whether to escalate abnormal findings.

  11. Getting to low-cost algal biofuels: A monograph on conventional and cutting-edge harvesting and extraction technologies

    DOE PAGES

    Coons, James E.; Kalb, Daniel M.; Dale, Taraka; ...

    2014-08-31

    Among the most formidable challenges to algal biofuels is the ability to harvest algae and extract intracellular lipids at low cost and with a positive energy balance. Here, we construct two paradigms that contrast energy requirements and costs of conventional and cutting-edge Harvesting and Extraction (H&E) technologies. By application of the parity criterion and the moderate condition reference state, an energy-cost paradigm is created that allows 1st stage harvesting technologies to be compared with easy reference to the National Alliance for Advanced Biofuels and Bioproducts (NAABB) target of $0.013/gallon of gasoline equivalent (GGE) and to the U.S. DOE's Bioenergy Technologies Office 2022 cost metrics. Drawing from the moderate condition reference state, a concentration-dependency paradigm is developed for extraction technologies, making easier comparison to the National Algal Biofuels Technology Roadmap (NABTR) target of less than 10% total energy. This monograph identifies cost-bearing factors for a variety of H&E technologies, describes a design basis for ultrasonic harvesters, and provides a framework to measure future technological advancements toward reducing H&E costs. Finally, we show that ultrasonic harvesters and extractors are uniquely capable of meeting both NAABB and NABTR targets. Ultrasonic technologies require further development and scale-up before they can achieve low-cost performance at industrially relevant scales. But the advancement of this technology would greatly reduce H&E costs and accelerate the commercial viability of algae-based biofuels.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Benchmarks of Global Clean Energy Manufacturing will help policymakers and industry gain deeper understanding of global manufacturing of clean energy technologies. Increased knowledge of the product supply chains can inform decisions related to manufacturing facilities for extracting and processing raw materials, making the array of required subcomponents, and assembling and shipping the final product. This brochure summarizes key findings from the analysis and includes important figures from the report. The report was prepared by the Clean Energy Manufacturing Analysis Center (CEMAC) analysts at the U.S. Department of Energy's National Renewable Energy Laboratory.

  13. Recurrence plots and recurrence quantification analysis of human motion data

    NASA Astrophysics Data System (ADS)

    Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad

    2016-06-01

    The authors present exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the purpose of exploration of experimental time series describing selected aspects of human motion. Time series were extracted from treadmill gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland by means of the Vicon system. Analysis was focused on the time series representing movements of hip, knee, ankle and wrist joints in the sagittal plane.
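
    For readers unfamiliar with the tool: the recurrence plot of a scalar series x is the binary matrix R[i, j] = 1 iff |x_i - x_j| <= eps, from which RQA measures such as the recurrence rate follow. A minimal numpy sketch, with a noisy sine as a hypothetical stand-in for a sagittal-plane joint-angle series:

        import numpy as np

        def recurrence_plot(x, eps):
            """Binary recurrence matrix R[i, j] = 1 iff |x[i] - x[j]| <= eps."""
            d = np.abs(x[:, None] - x[None, :])   # pairwise distances
            return (d <= eps).astype(int)

        t = np.linspace(0, 4 * np.pi, 200)
        x = np.sin(t) + 0.05 * np.random.randn(t.size)
        R = recurrence_plot(x, eps=0.1)

        print("recurrence rate:", R.mean())   # one of the basic RQA measures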

  14. Forest and Range Inventory and Mapping

    NASA Technical Reports Server (NTRS)

    Aldrich, R. C.

    1971-01-01

    The state of the art in remote sensing for forest and range inventories and mapping has been discussed. There remains a long way to go before some of these techniques can be used on an operational basis. By the time that the Earth Resources Technology Satellite and Skylab space missions are flown, it should be possible to tell what kind and what quality of information can be extracted from remote sensors and how it can be used for surveys of forest and range resources.

  15. Natural Language Processing as a Discipline at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firpo, M A

    The field of Natural Language Processing (NLP) is described as it applies to the needs of LLNL in handling free-text. The state of the practice is outlined with the emphasis placed on two specific aspects of NLP: Information Extraction and Discourse Integration. A brief description is included of the NLP applications currently being used at LLNL. A gap analysis provides a look at where the technology needs work in order to meet the needs of LLNL. Finally, recommendations are made to meet these needs.

  16. Concomitant Leaching and Electrochemical Extraction of Rare Earth Elements from Monazite.

    PubMed

    Maes, Synthia; Zhuang, Wei-Qin; Rabaey, Korneel; Alvarez-Cohen, Lisa; Hennebel, Tom

    2017-02-07

    Rare earth elements (REEs) have become increasingly important in modern day technologies. Unfortunately, their recycling is currently limited, and the conventional technologies for their extraction and purification are exceedingly energy and chemical intensive. New sustainable technologies for REE extraction from both primary and secondary resources would be extremely beneficial. This research investigated a two-stage recovery strategy focused on the recovery of neodymium (Nd) and lanthanum (La) from monazite ore that combines microbially based leaching (using citric acid and spent fungal supernatant) with electrochemical extraction. Pretreating the phosphate-based monazite rock (via roasting) dramatically increased the microbial REE leaching efficiency. Batch experiments demonstrated the effective and continued leaching of REEs by recycled citric acid, with up to 392 mg of Nd L⁻¹ and 281 mg of La L⁻¹ leached during seven consecutive 24 h cycles. Neodymium was further extracted in the catholyte of a three-compartment electrochemical system, with up to 880 mg of Nd L⁻¹ achieved within 4 days (at 40 A m⁻²). Meanwhile, the radioactive element thorium and counterions phosphate and citrate were separated effectively from the REEs in the anolyte, favoring REE extraction and allowing sustainable reuse of the leaching agent. This study shows a promising technology that is suitable for primary ores and can further be optimized for secondary resources.

  17. Innovations in food technology for health.

    PubMed

    Hsieh, Yun-Hwa Peggy; Ofori, Jack Appiah

    2007-01-01

    Modern nutritional science is providing ever more information on the functions and mechanisms of specific food components in health promotion and/or disease prevention. In response to demands from increasingly health conscious consumers, the global trend is for food industries to translate nutritional information into consumer reality by developing food products that provide not only superior sensory appeal but also nutritional and health benefits. Today's busy life styles are also driving the development of healthy convenience foods. Recent innovations in food technologies have led to the use of many traditional technologies, such as fermentation, extraction, encapsulation, fat replacement, and enzyme technology, to produce new health food ingredients, reduce or remove undesirable food components, add specific nutrient or functional ingredients, modify food compositions, mask undesirable flavors or stabilize ingredients. Modern biotechnology has even revolutionized the way foods are created. Recent discoveries in gene science are making it possible to manipulate the components in natural foods. In combination with biofermentation, desirable natural compounds can now be produced in large amounts at a low cost and with little environmental impact. Nanotechnology is also beginning to find potential applications in the area of food and agriculture. Although the use of new technologies in the production of health foods is often a cause for concern, the possibility that innovative food technology will allow us to produce a wide variety of food with enhanced flavor and texture, while at the same time conferring multiple health benefits on the consumer, is very exciting.

  18. DEMONSTRATION BULLETIN: TERRA KLEEN SOLVENT EXTRACTION TECHNOLOGY - TERRA-KLEEN RESPONSE GROUP, INC.

    EPA Science Inventory

    The Terra-Kleen Solvent Extraction Technology was developed by Terra-Kleen Response Group, Inc., to remove polychlorinated biphenyls (PCB) and other organic constituents from contaminated soil. This batch process system uses a proprietary solvent at ambient temperatures to treat ...

  19. Superfund Innovative Technology Evaluation: Demonstration Bulletin: Organic Extraction Utilizing Solvents

    EPA Science Inventory

    This technology utilizes liquified gases as the extracting solvent to remove organics, such as hydrocarbons, oil and grease, from wastewater or contaminated sludges and soils. Carbon dioxide is generally used for aqueous solutions, and propane is used for sediment, sludges and ...

  20. Information extraction system

    DOEpatents

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities (people, organizations, and locations) as well as relationships and events from text documents is described herein.
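
    The patent record gives no implementation detail; purely as a generic illustration of the named-entity extraction it targets, an off-the-shelf NLP pipeline such as spaCy tags people, organizations, and locations in free text:

        # pip install spacy && python -m spacy download en_core_web_sm
        import spacy

        nlp = spacy.load("en_core_web_sm")
        text = ("Tracy Lemmond and colleagues at Lawrence Livermore National "
                "Laboratory in California developed the system in 2014.")

        # PERSON / ORG / GPE labels cover the people, organizations, and
        # locations named in the abstract.
        for ent in nlp(text).ents:
            print(ent.text, "->", ent.label_)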

  1. Visualization of DNA in highly processed botanical materials.

    PubMed

    Lu, Zhengfei; Rubinsky, Maria; Babajanian, Silva; Zhang, Yanjun; Chang, Peter; Swanson, Gary

    2018-04-15

    DNA-based methods have been gaining recognition as a tool for botanical authentication in herbal medicine; however, their application to processed botanical materials is challenging due to the low quality and quantity of DNA left after extensive manufacturing processes. The low amount of DNA recovered from processed materials, especially extracts, is "invisible" to current technology, which has cast doubt on the presence of amplifiable botanical DNA. A method using adapter ligation and PCR amplification was successfully applied to visualize the "invisible" DNA in botanical extracts. The size of the "invisible" DNA fragments in botanical extracts was around 20-220 bp, compared to fragments of around 600 bp for the more easily visualized DNA in botanical powders. This technique is the first to allow characterization and visualization of small fragments of DNA in processed botanical materials and will provide key information to guide the development of appropriate DNA-based botanical authentication methods in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field, and human behavior feature analysis has evolved from traditional 2D features to 3D features. To improve the performance of human activity recognition, a human behavior recognition method is proposed based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly covers background subtraction on RGB and depth video sequences, extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new method achieves rapid and efficient recognition of behavior videos. Extensive experiments show that the proposed method is faster, achieves a higher recognition rate, and is robust to different environmental colors, lighting, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.
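
    The uniform local binary pattern underlying the texture part of the feature can be sketched in numpy (a generic 8-neighbour LBP, not the authors' hybrid texture-edge coding): each pixel is compared with its eight neighbours in circular order, and codes with more than two 0/1 transitions are merged into one "non-uniform" label.

        import numpy as np

        def uniform_lbp(img):
            """Uniform 8-neighbour LBP of a 2D grey image (interior pixels)."""
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]   # circular order
            c = img[1:-1, 1:-1]
            bits = np.stack([(img[1 + dy: img.shape[0] - 1 + dy,
                                  1 + dx: img.shape[1] - 1 + dx] >= c)
                             for dy, dx in offsets]).astype(int)

            code = np.tensordot(1 << np.arange(8), bits, axes=1)
            # Number of circular 0/1 transitions; uniform patterns have <= 2.
            trans = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
            return np.where(trans <= 2, code, 256)   # 256 = non-uniform bin

        lbp = uniform_lbp(np.random.randint(0, 256, (64, 64)))
        hist = np.bincount(lbp.ravel(), minlength=257)   # texture descriptor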

  3. A mobile unit for memory retrieval in daily life based on image and sensor processing

    NASA Astrophysics Data System (ADS)

    Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki

    2003-10-01

    We developed a mobile unit whose purpose is to support memory retrieval in daily life. In this paper, we describe its two characteristic components: (1) behavior classification with an acceleration sensor, and (2) extraction of environmental differences with image processing technology. In (1), by analyzing the power and frequency of an acceleration sensor aligned with the direction of gravity, the user's activities can be classified into walking, staying, and so on. In (2), by extracting the difference between the beginning and ending scenes of a stay with image processing, the user's action is recognized as a change in the environment. Using these two techniques, specific scenes of daily life can be extracted, and important information at scene changes can be recorded. In particular, we describe the effect on retrieving important things, such as an item left behind or the state of work finished halfway.
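
    A hedged sketch of the behavior classification in (1), with an assumed sampling rate and threshold: the power of the gravity-axis acceleration around typical step frequencies (roughly 1-3 Hz) separates walking from staying.

        import numpy as np

        FS = 50  # accelerometer sampling rate in Hz (assumed)

        def classify_window(acc_gravity, power_thresh=0.05):
            """Label a window of gravity-axis acceleration 'walk' or 'stay'."""
            a = acc_gravity - acc_gravity.mean()      # drop the DC/gravity term
            spec = np.abs(np.fft.rfft(a)) ** 2 / a.size
            freqs = np.fft.rfftfreq(a.size, d=1.0 / FS)
            band = spec[(freqs >= 1.0) & (freqs <= 3.0)].sum()  # step-band power
            return "walk" if band > power_thresh else "stay"

        # ~2 Hz steps superimposed on gravity
        walk = 1.0 + np.sin(2 * np.pi * 2.0 * np.arange(200) / FS)
        print(classify_window(walk))   # -> "walk"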

  4. An effective hand vein feature extraction method.

    PubMed

    Li, Haigang; Zhang, Qian; Li, Chengdong

    2015-01-01

    As an authentication method developed in recent years, vein recognition technology features the unique advantages of a biometric. This paper studies the specific procedure for extracting hand-back vein characteristics. Because different hand positions occur during image collection, a suitable vein region orientation method is put forward, allowing the positioning area to be the same for all hand positions. In addition, to eliminate pseudo-vein areas, the valley region shape extraction operator is improved and combined with multiple segmentation algorithms; the images are segmented step by step, making the vein texture appear clear and accurate. Lastly, the segmented images are filtered, eroded, and refined, which removes most of the pseudo-vein information. Finally, a clear vein skeleton diagram is obtained, demonstrating the effectiveness of the algorithm. The proposed hand-back vein region location method also makes it possible to rotate and correct the image by working out the inclination of the contour at the side of the back of the hand.

  5. Large-screen display technology assessment for military applications

    NASA Astrophysics Data System (ADS)

    Blaha, Richard J.

    1990-08-01

    Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.

  6. Quantifying Traces of Tool Use: A Novel Morphometric Analysis of Damage Patterns on Percussive Tools

    PubMed Central

    Caruana, Matthew V.; Carvalho, Susana; Braun, David R.; Presnyakova, Darya; Haslam, Michael; Archer, Will; Bobe, Rene; Harris, John W. K.

    2014-01-01

    Percussive technology continues to play an increasingly important role in understanding the evolution of tool use. Comparing the archaeological record with extractive foraging behaviors in nonhuman primates has focused on percussive implements as a key to investigating the origins of lithic technology. Despite this, archaeological approaches towards percussive tools have been obscured by a lack of standardized methodologies. Central to this issue have been the use of qualitative, non-diagnostic techniques to identify percussive tools from archaeological contexts. Here we describe a new morphometric method for distinguishing anthropogenically-generated damage patterns on percussive tools from naturally damaged river cobbles. We employ a geomatic approach through the use of three-dimensional scanning and geographical information systems software to statistically quantify the identification process in percussive technology research. This will strengthen current technological analyses of percussive tools in archaeological frameworks and open new avenues for translating behavioral inferences of early hominins from percussive damage patterns. PMID:25415303

  7. A review on green trend for oil extraction using subcritical water technology and biodiesel production.

    PubMed

    Abdelmoez, Weal; Ashour, Eman; Naguib, Shahenaz M

    2015-01-01

    It has become a global agenda to develop clean alternative fuels that are domestically available, environmentally acceptable, and technically feasible. Biodiesel is thus destined to make a substantial contribution to the future energy demands of the domestic and industrial economies. The use of non-edible vegetable oils as raw materials for biodiesel production has been investigated frequently over the past few years. The oil content of these seeds can be extracted by different methods, such as mechanical extraction, solvent extraction, and subcritical water extraction technology (SWT). Among these, SWT represents a promising new green extraction method. This review therefore covers the non-edible oil seeds currently used for biodiesel production, with a sharp focus on the efficiency of SWT as an extraction method, and discusses the advantages and disadvantages of the different biodiesel production techniques.

  8. SITE TECHNOLOGY CAPSULE: TERRA-KLEEN SOLVENT EXTRACTION TECHNOLOGY

    EPA Science Inventory

    Remediation of PCBs in soils has been difficult to implement on a full-scale, cost-effective basis. The Terra-Kleen solvent extraction system has overcome many of the soil handling, contaminant removal, and regulatory restrictions that have made it difficult to implement a cost-e...

  9. U.S. ENVIRONMENTAL PROTECTION AGENCY'S SUPERFUND INNOVATIVE TECHNOLOGY EVALUATION OF PNEUMATIC FRACTURING EXTRACTION

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA), in cooperation with Accutech Remedial Systems (ARS) and the New Jersey Institute of Technology (NJIT) performed a field demonstration of Pneumatic Fracturing Extraction (PFE) for the removal of chlorinated volatile organics (VOCS) f...

  10. [Application of continuous mixing technology in ethanol precipitation process of Salvia miltiorrhiza by using micromixer].

    PubMed

    Gong, Xing-Chu; Shen, Ji-Chen; Qu, Hai-Bin

    2016-12-01

    Continuous pharmaceutical manufacturing is one of the development directions of international pharmaceutical technology. In this study, continuous mixing of ethanol and concentrated extract in the ethanol precipitation of Salvia miltiorrhiza was realized using a membrane dispersion method. The effects of ethanol flowrate, concentrated extract flowrate, and flowrate ratio on ethanol precipitation results were investigated. As the flowrates of ethanol and concentrated extract increased, the retention rate of active phenolic acid components increased and the total solid removal rate decreased. The purity of active components in the supernatants was mainly affected by the ratio of ethanol flowrate to concentrated extract flowrate. The mixing efficiency of adding ethanol in the continuous flow mode studied here was comparable to that of industrial ethanol precipitation. Continuous ethanol addition using a membrane dispersion mixer is a promising technology with advantages such as easy scale-up, high output per unit volume, and easy control. Copyright© by the Chinese Pharmaceutical Association.

  11. The effectiveness of information and communication technology-based psychological interventions for paediatric chronic pain: protocol for a systematic review, meta-analysis and intervention content analysis.

    PubMed

    Traynor, Angeline; Morrissey, Eimear; Egan, Jonathan; McGuire, Brian E

    2016-10-18

    Resource and geographic barriers are the commonly cited constraints preventing the uptake of psychological treatment for chronic pain management. For adults, there is some evidence to support the use of information and communication technology (ICT) as a mode of treatment delivery. However, mixed findings have been reported for the effectiveness and acceptability of psychological interventions delivered using information and communication technology for children and adolescents. This is a protocol for a review that aims to (i) evaluate the effectiveness of psychological interventions delivered using information and communication technology for children and adolescents with chronic pain and (ii) identify the intervention components and usability factors in technology-based treatments associated with behaviour change. We will conduct a systematic review to evaluate the effectiveness of psychological interventions for paediatric chronic pain delivered using ICT. We plan to directly compare ICT-based, psychological interventions with active control, treatment as usual or waiting list control conditions. This systematic review will be reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance. Published and unpublished randomised controlled trials will be included and the literature search will comprise Ovid MEDLINE, Ovid Embase, PsycINFO and the Cochrane Library on Wiley, including CENTRAL and Cochrane Database of Systematic Reviews. Grey literature including theses, dissertations, technical and research reports will also be examined. Two review authors will independently conduct study selection, relevant data extraction and assessment of methodological quality. Risk of bias in included studies will be assessed using the Cochrane Collaboration risk of bias tool criteria. Two qualified coders will independently code behaviour change techniques according to the behaviour change taxonomy (v1) of 93 hierarchically clustered techniques and a novel coding scheme for mode of delivery and usability factors. A quantitative synthesis will be conducted if appropriate. The findings of this review may offer insight for healthcare professionals working in chronic pain services and to researchers involved in designing and evaluating information and communication technology-based interventions. PROSPERO CRD42016017657.

  12. The Grid as a healthcare provision tool.

    PubMed

    Hernández, V; Blanquer, I

    2005-01-01

    This paper presents a survey on HealthGrid technologies, describing the current status of Grid and eHealth and analyzing them in the medium-term future. The objective is to analyze the key points, barriers and driving forces for the take-up of HealthGrids. The article considers the procedures from other Grid disciplines such as high energy physics or biomolecular engineering and discusses the differences with respect to healthcare. It analyzes the status of the basic technology, the needs of the eHealth environment and the successes of current projects in health and other relevant disciplines. Information and communication technology (ICT) in healthcare is a promising area for the use of the Grid. There are many driving forces that are fostering the application of the secure, pervasive, ubiquitous and transparent access to information and computing resources that Grid technologies can provide. However, there are many barriers that must be solved. Many technical problems that arise in eHealth (standardization of data, federation of databases, content-based knowledge extraction, and management of personal data ...) can be solved with Grid technologies. The article presents the development of successful and demonstrative applications as the key for the take-up of HealthGrids, where short-term future medical applications will surely be biocomputing-oriented, and the future of Grid technologies on medical imaging seems promising. Finally, exploitation of HealthGrid is analyzed considering the curve of the adoption of ICT solutions and the definition of business models, which are far more complex than in other e-business technologies such as ASP.

  13. Mind Reading and Writing: The Future of Neurotechnology.

    PubMed

    Roelfsema, Pieter R; Denys, Damiaan; Klink, P Christiaan

    2018-05-02

    Recent advances in neuroscience and technology have made it possible to record from large assemblies of neurons and to decode their activity to extract information. At the same time, available methods to stimulate the brain and influence ongoing processing are also rapidly expanding. These developments pave the way for advanced neurotechnological applications that directly read from, and write to, the human brain. While such technologies are still primarily used in restricted therapeutic contexts, this may change in the future once their performance has improved and they become more widely applicable. Here, we provide an overview of methods to interface with the brain, speculate about potential applications, and discuss important issues associated with a neurotechnologically assisted future. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Fixing clearance as early as lead optimization using high throughput in vitro incubations in combination with exact mass detection and automatic structure elucidation of metabolites.

    PubMed

    Zimmerlin, Alfred; Kiffe, Michael

    2013-01-01

    New enabling MS technologies have made it possible to elucidate metabolic pathways present in ex vivo (blood, bile and/or urine) or in vitro (liver microsomes, hepatocytes and/or S9) samples. When investigating samples from high-throughput assays, the challenge the user now faces is to extract the appropriate information and compile it so that it is understandable to all. Medicinal chemists may then design the next generation of (better) drug candidates, combining the needs for potency and metabolic stability with their synthetic creativity. This review focuses on the comparison of these enabling MS technologies and the IT tools developed for their interpretation.

  15. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trujillo, Christopher J.

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.
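
    The thesis's SWIFT-specific algorithm is not reproduced in this abstract; as a generic baseline for the edge detection and point extraction it automates, an OpenCV Canny pass followed by coordinate extraction looks like this (the file name and thresholds are hypothetical and would need tuning per dataset):

        # pip install opencv-python
        import cv2
        import numpy as np

        img = cv2.imread("swift_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

        blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise
        edges = cv2.Canny(blurred, 50, 150)          # thresholds need tuning

        # Edge points as (row, col) coordinates for downstream analysis
        points = np.argwhere(edges > 0)
        print(len(points), "edge points extracted")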

  16. PAT: From Western solid dosage forms to Chinese materia medica preparations using NIR-CI.

    PubMed

    Zhou, Luwei; Xu, Manfei; Wu, Zhisheng; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    Near-infrared chemical imaging (NIR-CI) is an emerging technology that combines traditional near-infrared spectroscopy with chemical imaging. Therefore, NIR-CI can extract spectral information from pharmaceutical products and simultaneously visualize the spatial distribution of chemical components. The rapid and non-destructive features of NIR-CI make it an attractive process analytical technology (PAT) for identifying and monitoring critical control parameters during the pharmaceutical manufacturing process. This review mainly focuses on the pharmaceutical applications of NIR-CI in each unit operation during the manufacturing processes, from the Western solid dosage forms to the Chinese materia medica preparations. Finally, future applications of chemical imaging in the pharmaceutical industry are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. [Have you eaten any DNA today? Science communication during Science and Technology Week in Brazil].

    PubMed

    Possik, Patricia Abrão; Shumiski, Lívia Cantisani; Corrêa, Elisete Marcia; Maia, Roberta de Assis; Medaglia, Adriana; Mourão, Lucivana Prata de Souza; Pereira, Jairo Marques Campos; Persuhn, Darlene Camati; Rufier, Myrthes; Santos, Marcelo; Sobreira, Marise; Elblink, Marcia Triunfol

    2013-11-30

    During the first National Science and Technology Week held in 2004, science centers and museums, universities and schools engaged in activities with the idea of divulging science to the people. Demonstrations of the extraction of DNA from fruits were conducted in supermarkets in 11 Brazilian cities by two institutions, DNA Vai à Escola and Conselho de Informação e Biotecnologia. This article describes the formation of a national network of people interested in communicating information about genetics to the lay public and the implementation of a low-cost science communication activity in different parts of the country simultaneously. It also analyzes the impact caused by this initiative and the perceptions of those involved in its organization.

  19. The development of a dynamic software for the user interaction from the geographic information system environment with the database of the calibration site of the satellite remote electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Zyelyk, Ya. I.; Semeniv, O. V.

    2015-12-01

    The state of the problem of post-launch calibration of satellite electro-optic remote sensors and its solutions in Ukraine is analyzed. The database is improved, and dynamic services for user interaction with the database from the environment of the open geographical information system Quantum GIS are created for the information support of calibration activities. A dynamic application under QGIS is developed that implements these services, enabling data entry, editing and extraction from the database, using object-oriented programming technology and modern complex program design patterns. The functional and algorithmic support of this dynamic software and its interface are developed.
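
    A minimal sketch of such a service in the QGIS 3.x Python environment, with entirely hypothetical connection parameters and table names: a table from the calibration-site PostgreSQL database is loaded as a layer for display, editing and extraction.

        # Runs inside the QGIS Python console or a QGIS plugin (QGIS 3.x assumed)
        from qgis.core import QgsDataSourceUri, QgsVectorLayer, QgsProject

        uri = QgsDataSourceUri()
        # Hypothetical connection to the calibration-site database
        uri.setConnection("localhost", "5432", "calibration_db", "user", "password")
        uri.setDataSource("public", "calibration_sites", "geom")

        layer = QgsVectorLayer(uri.uri(), "Calibration sites", "postgres")
        if layer.isValid():
            QgsProject.instance().addMapLayer(layer)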

  20. [Extraction and purification technologies of total flavonoids from Aconitum tanguticum].

    PubMed

    Li, Yan-Rong; Yan, Li-Xin; Feng, Wei-Hong; Li, Chun; Wang, Zhi-Min

    2014-04-01

    To optimize the extraction and purification technologies for total flavonoids from the whole plant of Aconitum tanguticum. With the content of total flavonoids as the index, the optimum extraction conditions (alcohol concentration and volume, extraction time, and number of extractions) were selected by orthogonal design. By comparing adsorption capacity (mg/g) and desorption ratio (%), four macroporous adsorption resins (D101, AB-8, X-5 and XAD-16) were investigated for their ability to enrich total flavonoids from A. tanguticum, and the sample concentration and pH value, sample load, elution solvent, and loading and elution velocities were determined for the optimum resin. The content of total flavonoids in A. tanguticum was about 4.39%. The optimum extraction technique was reflux extraction with 70% alcohol three times, one hour each time, at a material-to-liquid ratio of 1:10 (w/v). The optimum purification technology was as follows: XAD-16 macroporous resin, an initial total flavonoid concentration of 8 mg/mL, a sample load of 112 mg/g dry resin, pH 5, a loading velocity of 3 mL/min, 70% ethanol as the elution solvent, and an elution velocity of 5 mL/min. Under the optimum conditions, the average content of total flavonoids was raised from 4.39% to 46.19%. The optimized extraction and purification technologies for total flavonoids of A. tanguticum are suitable for industrial production owing to their simplicity and reliability.

  1. Collaboration spotting for dental science.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-10-06

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to Dental Science. In order to create a Sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of Porous HydroxyApatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of Mesenchymal Stem Cells. We produced the Sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used for Dental Science and produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for Dental Science research.

  2. Collaboration Spotting for oral medicine.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-09-01

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to oral medicine. In order to create a sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of porous hydroxyapatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of mesenchymal stem cells. We produced the sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used in oral medicine, and we produced the maps for an initial set of technologies in this field. We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for oral medicine research.

  3. Nanopipettes as Monitoring Probes for the Single Living Cell: State of the Art and Future Directions in Molecular Biology.

    PubMed

    Bulbul, Gonca; Chaves, Gepoliano; Olivier, Joseph; Ozel, Rifat Emrah; Pourmand, Nader

    2018-06-06

    Examining the behavior of a single cell within its natural environment is valuable for understanding both the biological processes that control the function of cells and how injury or disease lead to pathological change of their function. Single-cell analysis can reveal information regarding the causes of genetic changes, and it can contribute to studies on the molecular basis of cell transformation and proliferation. By contrast, whole tissue biopsies can only yield information on a statistical average of several processes occurring in a population of different cells. Electrowetting within a nanopipette provides a nanobiopsy platform for the extraction of cellular material from single living cells. Additionally, functionalized nanopipette sensing probes can differentiate analytes based on their size, shape or charge density, making the technology uniquely suited to sensing changes in single-cell dynamics. In this review, we highlight the potential of nanopipette technology as a non-destructive analytical tool to monitor single living cells, with particular attention to integration into applications in molecular biology.

  4. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    NASA Astrophysics Data System (ADS)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifting technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered in internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. Using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive-index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed from the radiation projections of the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
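
    The temperature step rests on the Gladstone-Dale relation between refractive index and gas density; combined with the ideal-gas law under an assumed, approximately constant pressure p, it yields temperature directly from the reconstructed index. In LaTeX notation, with K the Gladstone-Dale constant, M the molar mass of the gas, and R the universal gas constant:

        n - 1 = K\rho, \qquad \rho = \frac{pM}{RT}
        \quad\Longrightarrow\quad
        T = \frac{pMK}{R\,(n - 1)}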

  5. The Large Area Crop Inventory Experiment /LACIE/ - A summary of three years' experience

    NASA Technical Reports Server (NTRS)

    Erb, R. B.; Moore, B. H.

    1979-01-01

    The aims, history, and schedule of the Large Area Crop Inventory Experiment (LACIE) conducted by NASA, USDA, and NOAA from 1974 to 1977 are described. The LACIE experiment, designed to research, develop, apply, and evaluate a technology for monitoring wheat production in important regions throughout the world (U.S., Canada, USSR, Brazil), utilized quantitative multispectral data collected by Landsat together with current weather data and historical information. The experiment successfully exploited computer data and mathematical models to extract timely crop information. Follow-on activities for the early 1980s are planned, focusing especially on early warning of changes affecting the production and quality of renewable resources and on commodity production forecasts.

  6. Electronic health indicators in the selected countries: Are these indicators the best?

    PubMed Central

    Afshari, Somaye; Khorasani, Elahe; Yarmohammadian, Mohammad Hossein; Atighechian, Golrokh; Darab, Mohsen Ghaffari

    2013-01-01

    Background: Many changes have been made in different sciences by the development and advancement of information and communication technology in the last two decades. E-health is a very broad term that includes many different activities related to the use of electronic devices, software as well as hardware, in health organizations. Aims: The aim of this study is to compare electronic health indicators in selected countries and to discuss the best indicators. Settings and Design: This study chose 12 countries randomly based on the regional divisions of the WHO. The relevant health indicators, general indicators and information technology indicators were extracted for these countries. We used data from Bitarf's comparative study, which was conducted by the Iranian Supreme Council of Information Technology in 2007. Materials and Methods: Using the Pearson correlation test, the relations between general health indicators and IT indicators were studied. Statistical Analysis Used: Data were analyzed according to the research objectives using SPSS software, and the Pearson correlation test was used in accordance with the research questions. Results: The findings show that there is a positive relation between IT-related indicators and "total per capita health, healthy life expectancy, percent literacy". Furthermore, there is a mutual relation between IT indicators and the "mortality indicator". Conclusion: This study showed differences in the selected indicators among different countries. The modern world, with its technological advances, is not powerless in the face of these geographic and health disparity challenges. Researchers must not rely only on the available indicators. They should consider indicators such as e-business companies, electronic data interchange, medical supplies, electronic health records, health information systems, etc. In future, continuous studies in this field will be necessary to provide exact and regular reports on the use of these indicators across different countries. PMID:24083281

  7. Electronic health indicators in the selected countries: Are these indicators the best?

    PubMed

    Afshari, Somaye; Khorasani, Elahe; Yarmohammadian, Mohammad Hossein; Atighechian, Golrokh; Darab, Mohsen Ghaffari

    2013-01-01

    Many changes have been made in different sciences by the development and advancement of information and communication technology in the last two decades. E-health is a very broad term that includes many different activities related to the use of electronic devices, software as well as hardware, in health organizations. The aim of this study is to compare electronic health indicators in selected countries and to discuss the best indicators. This study chose 12 countries randomly based on the regional divisions of the WHO. The relevant health indicators, general indicators and information technology indicators were extracted for these countries. We used data from Bitarf's comparative study, which was conducted by the Iranian Supreme Council of Information Technology in 2007. Using the Pearson correlation test, the relations between general health indicators and IT indicators were studied. Data were analyzed according to the research objectives using SPSS software, and the Pearson correlation test was used in accordance with the research questions. The findings show that there is a positive relation between IT-related indicators and "total per capita health, healthy life expectancy, percent literacy". Furthermore, there is a mutual relation between IT indicators and the "mortality indicator". This study showed differences in the selected indicators among different countries. The modern world, with its technological advances, is not powerless in the face of these geographic and health disparity challenges. Researchers must not rely only on the available indicators. They should consider indicators such as e-business companies, electronic data interchange, medical supplies, electronic health records, health information systems, etc. In future, continuous studies in this field will be necessary to provide exact and regular reports on the use of these indicators across different countries.

  8. Application of digital mapping technology to the display of hydrologic information; a proof-of-concept test in the Fox-Wolf River Basin, Wisconsin

    USGS Publications Warehouse

    Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.

    1983-01-01

    The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
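
    The areal stratification and data merging steps described above are easy to picture with small rasters. The following sketch uses hypothetical 3x3 thematic grids (the codes and data sets are invented, not from the Fox-Wolf system) to show masking one data set by another and merging two data sets so that each source stays recoverable.

```python
import numpy as np

# Two spatially registered raster data sets with hypothetical thematic codes.
land_cover = np.array([[1, 2, 2],
                       [3, 1, 2],
                       [3, 3, 1]])
basin_mask = np.array([[0, 1, 1],
                       [0, 1, 1],
                       [0, 0, 1]], dtype=bool)

# Areal stratification: keep only land-cover cells inside the basin mask.
stratified = np.where(basin_mask, land_cover, 0)

# Data merging: pack two data sets into separate decimal digits of a
# composite code, so each original contribution remains extractable.
slope_class = np.array([[1, 1, 2],
                        [2, 3, 3],
                        [1, 2, 3]])
merged = land_cover * 10 + slope_class   # tens digit = cover, ones = slope

print(stratified)
print(merged // 10)   # recovers land_cover from the composite
```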

  9. Information extraction from multi-institutional radiology reports.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. We also evaluated the generalizability of our approach across different organizations by training and testing our system on data from different organizations. Our results show the efficacy of our machine learning approach in extracting the information model's elements (10-fold cross-validation average performance: precision: 87%, recall: 84%, F1 score: 85%) and its superiority and generalizability compared to the common non-machine learning approach (p-value<0.05). Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand the radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources such as electronic health records and the patient's genome. Extracted information also can facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
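
    As a rough illustration of the kind of discriminative sequence classification described above (not the authors' actual system, feature set, or information model), the sketch below trains a small conditional random field to BIO-tag an invented entity type in toy report sentences. It assumes the third-party package sklearn-crfsuite is installed.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features: the word itself and its neighbours."""
    return {"word": tokens[i].lower(),
            "is_title": tokens[i].istitle(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>"}

# Two toy training sentences with BIO labels for an invented ANATOMY entity.
sents = [["Mild", "opacity", "in", "the", "left", "lower", "lobe", "."],
         ["No", "pleural", "effusion", "is", "seen", "."]]
labels = [["O", "O", "O", "O", "B-ANATOMY", "I-ANATOMY", "I-ANATOMY", "O"],
          ["O", "B-ANATOMY", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)

test = ["Opacity", "in", "the", "right", "upper", "lobe", "."]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
```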

  10. Combinatorial life cycle assessment to inform process design of industrial production of algal biodiesel.

    PubMed

    Brentner, Laura B; Eckelman, Matthew J; Zimmerman, Julie B

    2011-08-15

    The use of algae as a feedstock for biodiesel production is a rapidly growing industry, in the United States and globally. A life cycle assessment (LCA) is presented that compares various methods, either proposed or under development, for algal biodiesel to inform the most promising pathways for sustainable full-scale production. For this analysis, the system is divided into five distinct process steps: (1) microalgae cultivation, (2) harvesting and/or dewatering, (3) lipid extraction, (4) conversion (transesterification) into biodiesel, and (5) byproduct management. A number of technology options are considered for each process step and various technology combinations are assessed for their life cycle environmental impacts. The optimal option for each process step is selected, yielding a best case scenario comprised of a flat panel enclosed photobioreactor and direct transesterification of algal cells with supercritical methanol. For a functional unit of 10 GJ biodiesel, the best case production system yields a cumulative energy demand savings of more than 65 GJ, reduces water consumption by 585 m³ and decreases greenhouse gas emissions by 86% compared to a base case scenario typical of early industrial practices, highlighting the importance of technological innovation in algae processing and providing guidance on promising production pathways.

  11. Review of Extracting Information From the Social Web for Health Personalization

    PubMed Central

    Karlsen, Randi; Bonander, Jason

    2011-01-01

    In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049

  12. Interrogating trees as archives of sulphur deposition

    NASA Astrophysics Data System (ADS)

    Wynn, P. M.; Loader, N. J.; Fairchild, I. J.

    2012-04-01

    Changes in atmospheric composition resulting from sulphur aerosols are a principal driver of climatic variability over the past 1,000 years and an essential forcing mechanism for climate. Natural and anthropogenic aerosols released into the atmosphere disrupt the radiative balance through backscattering and absorption of incoming solar radiation, and increase cloud albedo by acting as condensation nuclei. Understanding the impact of sulphur emissions upon climate beyond the last few hundred years, however, is not straightforward, and natural archives of environmental information must be explored. Tree-rings represent one such archive, as they are widely distributed and preserve environmental information on a precisely dateable, annually resolved timescale. Until recently, the sulphur contained within tree-rings has largely remained beyond the reach of environmental scientists and climate modellers owing to difficulties associated with the extraction of a robust signal and uncertainties regarding post-depositional mobility. Our recent work using synchrotron radiation has established that the majority of non-labile sulphur in two conifer species is preserved within the cellular structure of the woody tissue after uptake, and demonstrates an increasing trend in sulphur concentration during the 20th century and during known volcanic events. Owing to the clear isotopic distinction between marine (+21‰), geological (+10 to +30‰), atmospheric pollution (-3 to +9‰) and volcanic (0 to +5‰) sources of sulphur, isotopic ratios provide a diagnostic tool with which changes in the source of atmospheric sulphur can be detected more reliably than with concentration alone. Sulphur isotopes should thereby provide a fingerprint of short-lived events, including volcanic activity, when extracted at high resolution and in conjunction with high-resolution S concentrations defining the event. Here we present methodologies for extracting the sulphur isotopic signal from tree-rings using both elemental analyser isotope ratio mass spectrometry and ion probe technology. Preliminary data indicate success at extracting the sulphur isotopic signal from woody tissues at 2-3 year resolution. In conjunction with analytical developments in ion probe technology, high-resolution records of localised sulphur forcing from tree-ring archives, including volcanic activity, no longer seem beyond the reach of climate scientists.

  13. A smart way to identify and extract repeated patterns of a layout

    NASA Astrophysics Data System (ADS)

    Wei, Fang; Gu, Tingting; Chu, Zhihao; Zhang, Chenming; Chen, Han; Zhu, Jun; Hu, Xinyi; Du, Chunshan; Wan, Qijian; Liu, Zhengfang

    2018-03-01

    As integrated circuit (IC) technology moves forward, the manufacturing process faces more and more challenges. Optical proximity correction (OPC) plays an important role in the whole manufacturing process. In deep sub-micron technology, OPC engineers not only need to guarantee that layout designs are manufacturable but must also take more precise control of critical patterns to ensure a high-performance circuit. One task of interest is consistency checking, since identical patterns under identical context should in theory have identical OPC results, as in SRAM regions. Consistency checking is essentially a technique of repeated-pattern identification and extraction followed by comparison of the derived patterns (i.e., the OPC results). The layout passed to the OPC team may not carry enough design hierarchy information, either because the original designs have undergone several layout processing steps or for other unknown reasons. This paper presents a generic way to identify and extract repeated layout structures in SRAM regions purely through layout pattern analysis, using Calibre Pattern Matching and Calibre equation-based DRC (eqDRC). Without Pattern Matching and eqDRC, the task takes a great deal of manual trial-and-error effort, and automating the pattern analysis process is almost impossible; combining Pattern Matching and eqDRC opens a new way to implement this flow. The repeated patterns must have some fundamental features whose pitches can be measured separately in the horizontal and vertical directions by Calibre eqDRC; these measurements can also serve to generate anchor points that act as starting points for Pattern Matching to capture patterns. The statistical report from the pattern search gives the match count for each captured pattern. Experiments show that this is an effective way of identifying and extracting repeated structures. The OPC results are the derived layers on these repeated structures; by running the pattern search with design layers as pattern layers and OPC results as marker layers, it is straightforward to check consistency.

  14. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  15. Extracting semantically enriched events from biomedical literature.

    PubMed

    Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia

    2012-05-23

    Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.

  16. Chelation technology: a promising green approach for resource management and waste minimization.

    PubMed

    Chauhan, Garima; Pant, K K; Nigam, K D P

    2015-01-01

    Green chemical engineering recognises the concept of developing innovative environmentally benign technologies to protect human health and ecosystems. In order to explore this concept for minimizing industrial waste and for reducing the environmental impact of hazardous chemicals, new greener approaches need to be adopted for the extraction of heavy metals from industrial waste. In this review, a range of conventional processes and new green approaches employed for metal extraction are discussed in brief. Chelation technology, a modern research trend, has shown its potential to develop sustainable technology for metal extraction from various metal-contaminated sites. However, the interaction mechanism of ligands with metals and the ecotoxicological risk associated with the increased bioavailability of heavy metals due to the formation of metal-chelant complexes is still not sufficiently explicated in the literature. Therefore, a need was felt to provide a comprehensive state-of-the-art review of all aspects associated with chelation technology to promote this process as a green chemical engineering approach. This article elucidates the mechanism and thermodynamics associated with metal-ligand complexation in order to have a better understanding of the metal extraction process. The effects of various process parameters on the formation and stability of complexes have been elaborately discussed with respect to optimizing the chelation efficiency. The non-biodegradable attribute of ligands is another important aspect which is currently of concern. Therefore, biotechnological approaches and computational tools have been assessed in this review to illustrate the possibility of ligand degradation, which will help the readers to look for new environmentally safe mobilizing agents. In addition, emerging trends and opportunities in the field of chelation technology have been summarized and the diverse applicability of chelation technology in metal extraction from contaminated sites has also been reviewed.
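
    For readers new to the complexation thermodynamics discussed above, the standard textbook relations below connect the formation (stability) constant of a metal-ligand complex to the Gibbs free energy of complexation; they are generic expressions, not values from the review.

```latex
\begin{align}
  \mathrm{M}^{n+} + \mathrm{L}^{m-} &\rightleftharpoons \mathrm{ML}^{(n-m)+} \\
  K_f &= \frac{[\mathrm{ML}^{(n-m)+}]}{[\mathrm{M}^{n+}]\,[\mathrm{L}^{m-}]} \\
  \Delta G^{\circ} &= -RT \ln K_f
\end{align}
```

    A larger formation constant (equivalently, a more negative Gibbs energy of complexation) indicates a more stable complex, which is why ligand selection and the process parameters that shift the equilibrium govern chelation efficiency.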

  17. Requirement of scientific documentation for the development of Naturopathy.

    PubMed

    Rastogi, Rajiv

    2006-01-01

    The past few decades have witnessed an explosion of knowledge in almost every field. This has resulted not only in the advancement of individual subjects but has also influenced the growth of various allied subjects. The present paper explains the advancement of science through efforts made in specific areas and through discoveries in allied fields that have an indirect influence upon the subject proper. In Naturopathy, it seems that although nothing in particular has been added to the basic thoughts or fundamental principles of the subject, the entire understanding of treatment has been revolutionised under the influence of the scientific discoveries of the past few decades. The advent of information technology has further added to the boom of knowledge, and it often seems impossible to utilize this information for the good of human beings because it is not logically arranged in our minds. Against this background, the author defines documentation by stating that we have today an ocean of information and knowledge about various things, living or dead: plants, animals and human beings; geographical conditions; and the changing weather and environment. What needs to be done is to extract the relevant knowledge and information required to enrich the subject. The author compares documentation with the churning of milk to extract butter. Documentation, in fact, is the churning of an ocean of information to extract the specific, most appropriate, relevant and well-defined information and knowledge related to a particular subject. Besides discussing the definition of documentation, the paper highlights the areas of Naturopathy in urgent need of proper documentation. The paper also discusses the present status of Naturopathy in India, proposes short-term and long-term goals to be achieved, and plans strategies for achieving them. The most important aspect of the paper is a due understanding of the limitations of Naturopathy, together with a constant effort to improve it in line with the growth made in the various disciplines of science so far.

  18. The Quickest, Lowest-cost Lunar Resource Assessment Program: Integrated High-tech Earth-based Astronomy

    NASA Technical Reports Server (NTRS)

    Pieters, Carle M.

    1992-01-01

    Science and technology applications for the Moon have not fully kept pace with technical advancements in sensor development and analytical information extraction capabilities. Appropriate unanswered questions for the Moon abound, but until recently there has been little motivation to link sophisticated technical capabilities with specific measurement and analysis projects. Over the last decade enormous technical progress has been made in the development of (1) CCD photometric array detectors; (2) visible to near-infrared imaging spectrometers; (3) infrared spectroscopy; (4) high-resolution dual-polarization radar imaging at 3.5, 12, and 70 cm; and, equally important, (5) data analysis and information extraction techniques using compact powerful computers. Parts of each of these have been tested separately, but there has been no programmatic effort to develop and optimize instruments to meet lunar science and resource assessment needs (e.g., specific wavelength range, resolution, etc.) nor to coordinate activities so that the symbiotic relation between different kinds of data can be fully realized. No single type of remotely acquired data completely characterizes the lunar environment, but there has been little opportunity for integration of diverse advanced sensor data for the Moon. Two examples of technology concepts for lunar measurements are given. Using VIS/near-IR spectroscopy, the mineral composition of surface material can be derived from visible and near-infrared radiation reflected from the surface. The surface and subsurface scattering properties of the Moon can be analyzed using radar backscattering imaging.

  19. Challenges in using electronic health record data for CER: experience of 4 learning organizations and solutions applied.

    PubMed

    Bayley, K Bruce; Belnap, Tom; Savitz, Lucy; Masica, Andrew L; Shah, Nilay; Fleming, Neil S

    2013-08-01

    To document the strengths and challenges of using electronic health records (EHRs) for comparative effectiveness research (CER). A replicated case study of comparative effectiveness in hypertension treatment was conducted across 4 health systems, with instructions to extract data and document problems encountered using a specified list of required data elements. Researchers at each health system documented successes and challenges, and suggested solutions for addressing challenges. Data challenges fell into 5 categories: missing data, erroneous data, uninterpretable data, inconsistencies among providers and over time, and data stored in noncoded text notes. Suggested strategies to address these issues include data validation steps, use of surrogate markers, natural language processing, and statistical techniques. A number of EHR issues can hamper the extraction of valid data for cross-health system comparative effectiveness studies. Our case example cautions against a blind reliance on EHR data as a single definitive data source. Nevertheless, EHR data are superior to administrative or claims data alone, and are cheaper and timelier than clinical trials or manual chart reviews. All 4 participating health systems are pursuing pathways to more effectively use EHR data for CER. A partnership between clinicians, researchers, and information technology specialists is encouraged as a way to capitalize on the wealth of information contained in the EHR. Future developments in both technology and care delivery hold promise for improvement in the ability to use EHR data for CER.

  20. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment requires many kinds of information, covering the economy, politics, the military, diplomacy, culture, science, and so on. It is therefore very important to build a highly efficient management system for automatic information extraction, classification, recombination and analysis as the foundation and a core component of a military strategy hall. This paper first uses an improved boosting algorithm to classify the obtained initial information, and then applies a strategy intelligence extraction algorithm to extract strategy intelligence from that information, helping strategists analyze it.
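
    A minimal stand-in for this classification stage is sketched below using plain AdaBoost over TF-IDF features in scikit-learn; the paper's improved algorithm, categories and training texts are not published here, so everything in the example is invented for illustration.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy corpus: one invented snippet per strategy-information category.
docs = ["exchange rates and trade deficits widened this quarter",
        "parliament passed the new election law yesterday",
        "armored divisions redeployed along the northern border",
        "ambassadors met to negotiate the ceasefire treaty"]
cats = ["economy", "politics", "military", "diplomacy"]

# TF-IDF features feeding a boosted ensemble of decision stumps.
model = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
model.fit(docs, cats)
print(model.predict(["troops and armored vehicles massed at the border"]))
```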

  1. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning.

    PubMed

    Norouzzadeh, Mohammad Sadegh; Nguyen, Anh; Kosmala, Margaret; Swanson, Alexandra; Palmer, Meredith S; Packer, Craig; Clune, Jeff

    2018-06-19

    Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into "big data" sciences. Motion-sensor "camera traps" enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. Copyright © 2018 the Author(s). Published by PNAS.
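
    The triage idea at the heart of the efficiency gain, auto-accepting a network's label only when its confidence clears a threshold and routing the rest to human volunteers, can be sketched in a few lines. The softmax outputs below are simulated, not model outputs from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated softmax outputs for 1,000 images over 48 species.
softmax_outputs = rng.dirichlet(alpha=np.full(48, 0.1), size=1000)

confidence = softmax_outputs.max(axis=1)   # top-class probability per image
threshold = 0.90                           # an assumed operating point
automated = confidence >= threshold        # auto-labeled by the network

print(f"auto-labeled:   {automated.mean():.1%} of images")
print(f"sent to humans: {(~automated).mean():.1%} of images")
```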

  2. Microencapsulation by solvent extraction/evaporation: reviewing the state of the art of microsphere preparation process technology.

    PubMed

    Freitas, Sergio; Merkle, Hans P; Gander, Bruno

    2005-02-02

    The therapeutic benefit of microencapsulated drugs and vaccines brought forth the need to prepare such particles in larger quantities and in sufficient quality suitable for clinical trials and commercialisation. Very commonly, microencapsulation processes are based on the principle of so-called "solvent extraction/evaporation". While initial lab-scale experiments are frequently performed in simple beaker/stirrer setups, clinical trials and market introduction require more sophisticated technologies, allowing for economic, robust, well-controllable and aseptic production of microspheres. To this aim, various technologies have been examined for microsphere preparation, among them are static mixing, extrusion through needles, membranes and microfabricated microchannel devices, dripping using electrostatic forces and ultrasonic jet excitation. This article reviews the current state of the art in solvent extraction/evaporation-based microencapsulation technologies. Its focus is on process-related aspects, as described in the scientific and patent literature. Our findings will be outlined according to the four major substeps of microsphere preparation by solvent extraction/evaporation, namely, (i) incorporation of the bioactive compound, (ii) formation of the microdroplets, (iii) solvent removal and (iv) harvesting and drying the particles. Both, well-established and more advanced technologies will be reviewed.

  3. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, PCB DETECTION TECHNOLOGY, HYBRIZYME DELFIA TM ASSAY

    EPA Science Inventory

    The DELFIA PCB Assay is a solid-phase time-resolved fluoroimmunoassay based on the sequential addition of sample extract and europium-labeled PCB tracer to a monoclonal antibody reagent specific for PCBs. In this assay, the antibody reagent and sample extract are added to a strip...

  4. [Remote sensing monitoring and screening for urban black and odorous water body: A review].

    PubMed

    Shen, Qian; Zhu, Li; Cao, Hong Ye

    2017-10-01

    Continuous improvement of the urban water environment and overall control of black and odorous water bodies are not merely national strategic needs under the action plan for the prevention and treatment of water pollution, but also hot issues attracting public attention. Most previous research has concentrated on the causes, evaluation and treatment of this phenomenon; there has been little research on monitoring by remote sensing, which is often hard-pressed to meet national needs for operational monitoring. This paper summarizes the urgent research problems, mainly including identification and classification standards, the key technologies, and the framework of remote sensing screening systems for urban black and odorous water bodies. The main key technologies are also summarized, including high-spatial-resolution image preprocessing and extraction techniques for black and odorous water bodies, the extraction of water information in urban zones, the classification of black and odorous water, and identification and classification techniques based on satellite-air-ground remote sensing. The paper reviews research progress and puts forward research ideas for monitoring and screening urban black and odorous water bodies via high-spatial-resolution remote sensing technology, which will help provide an overall grasp of the spatial distribution and improvement progress of black and odorous water bodies, and provide strong technical support for their control.

  5. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images carry a higher information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a building extraction method for high-resolution remote sensing images based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the built-up area. Through multi-scale segmentation and extraction of image features, fine extraction from the built-up area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy and completeness.

  6. LiDAR Vegetation Investigation and Signature Analysis System (LVISA)

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia

    2015-04-01

    Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, such as in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have helped to improve our process understanding by allowing us to quantify the structural properties of Earth surface features and their changes over time. This advancement has been fostered by technological developments and the increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and the geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has an XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud provided by LiDAR contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized datasets and the large data volume (frequently millions of XYZ coordinates) of LiDAR datasets, a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed. However, so far the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users lags behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), which shall enhance sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository to support the development and benchmarking of LiDAR-based object information extraction. LVISA uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and analysis (e.g., OGC Web Processing Services). This will allow the research community to identify plant-object-specific vegetation features from LiDAR data, while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. It is the goal of LVISA to develop generic 3D information extraction approaches which can be seamlessly transferred to other datasets, timestamps and extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa. Video tutorials provide a quick overview of and entry into the functionality of LVISA.
We will present the current advances of LVISA and highlight future research on and extensions of LVISA, such as integrating low-cost LiDAR data and datasets acquired by high-frequency temporal scanning of vegetation (e.g., continuous measurements). Everyone is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.
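
    As a flavour of the generic extraction approaches LVISA is meant to share, the sketch below derives a few common vegetation metrics from an unorganized N x 3 point cloud; the synthetic points and the 2 m canopy threshold are illustrative assumptions, not LVISA code.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic unorganized point cloud: columns are X, Y (m) and height Z (m).
points = np.column_stack([rng.uniform(0, 10, 5000),    # X (m)
                          rng.uniform(0, 10, 5000),    # Y (m)
                          rng.gamma(2.0, 2.0, 5000)])  # Z above ground (m)

z = points[:, 2]
metrics = {
    "max_height": z.max(),               # tallest return
    "p95_height": np.percentile(z, 95),  # a common canopy-height proxy
    "canopy_cover": (z > 2.0).mean(),    # fraction of returns above 2 m
}
print(metrics)
```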

  7. Experience of the ARGO autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra

    1998-07-01

    This paper presents and discusses the first results obtained with the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image allows the road geometry in front of the vehicle to be extracted. The generality of the underlying approach allows the detection of generic obstacles (without constraints on shape, color, or symmetry) and of lane markings, even in the dark and in strong shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire three b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a LED-based control panel.

  8. Accurate Fall Detection in a Top View Privacy Preserving Configuration.

    PubMed

    Ricciuti, Manola; Spinsante, Susanna; Gambi, Ennio

    2018-05-29

    Fall detection is one of the most investigated themes in research on assistive solutions for aged people. In particular, false-alarm-free discrimination between falls and non-falls is indispensable, especially to assist elderly people living alone. Current technological solutions designed to monitor several types of activities in indoor environments can guarantee absolute privacy to the people who decide to rely on them. Devices integrating RGB and depth cameras, such as the Microsoft Kinect, can ensure privacy and anonymity, since only the depth information is considered when extracting meaningful information from the video streams. In this paper, we propose an accurate fall detection method that analyses depth frames of the human body using a single device in a top-view configuration, with the subjects located under the device inside a room. Features extracted from the depth frames train a classifier based on a binary support vector machine learning algorithm. The dataset includes 32 falls and 8 activities considered for comparison, for a total of 800 sequences performed by 20 adults. The system showed an accuracy of 98.6% and only one false positive.
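
    The classification stage, a binary SVM over features extracted from depth frames, can be sketched as follows. The two features and all numbers are synthetic stand-ins (e.g., final height above the floor and vertical descent speed), not the paper's actual depth-derived descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Synthetic (height_above_floor_m, descent_speed_m_s) feature pairs.
falls = rng.normal(loc=[0.3, 1.8], scale=0.15, size=(40, 2))      # low, fast
non_falls = rng.normal(loc=[1.5, 0.4], scale=0.20, size=(40, 2))  # high, slow
X = np.vstack([falls, non_falls])
y = np.array([1] * 40 + [0] * 40)   # 1 = fall, 0 = other activity

clf = SVC(kernel="rbf")
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```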

  9. Extracting More Information from Passive Optical Tracking Observations for Reliable Orbit Element Generation

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Gehly, S.

    2016-09-01

    This paper presents results from a preliminary method for extracting more orbital information from low-rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. For several objects, the orbit propagated from the orbital element generated using the new data processing method is compared with the one generated from the raw observation data. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, are fitted to provide a new orbital element. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data, or a reference orbit if available. The new method is shown to result in a better orbit prediction, which has important implications for conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre, which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).

  10. Integration and Beyond

    PubMed Central

    Stead, William W.; Miller, Randolph A.; Musen, Mark A.; Hersh, William R.

    2000-01-01

    The vision of integrating information—from a variety of sources, into the way people work, to improve decisions and process—is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, and have changed the art of the possible. This review identified three distinct generations of “integration” projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction. PMID:10730596

  11. Experimental resource pulses influence social-network dynamics and the potential for information flow in tool-using crows

    PubMed Central

    St Clair, James J. H.; Burns, Zackory T.; Bettaney, Elaine M.; Morrissey, Michael B.; Otis, Brian; Ryder, Thomas B.; Fleischer, Robert C.; James, Richard; Rutz, Christian

    2015-01-01

    Social-network dynamics have profound consequences for biological processes such as information flow, but are notoriously difficult to measure in the wild. We used novel transceiver technology to chart association patterns across 19 days in a wild population of the New Caledonian crow—a tool-using species that may socially learn, and culturally accumulate, tool-related information. To examine the causes and consequences of changing network topology, we manipulated the environmental availability of the crows' preferred tool-extracted prey, and simulated, in silico, the diffusion of information across field-recorded time-ordered networks. Here we show that network structure responds quickly to environmental change and that novel information can potentially spread rapidly within multi-family communities, especially when tool-use opportunities are plentiful. At the same time, we report surprisingly limited social contact between neighbouring crow communities. Such scale dependence in information-flow dynamics is likely to influence the evolution and maintenance of material cultures. PMID:26529116
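
    The in-silico diffusion step is conceptually simple: information can only spread along time-respecting paths through the recorded contacts. A toy deterministic version (invented contacts, transmission guaranteed on contact, unlike the study's stochastic simulations) is sketched below.

```python
# Time-ordered association records: (time, bird_a, bird_b); data invented.
contacts = [(1, "A", "B"), (2, "B", "C"), (3, "A", "D"),
            (4, "C", "E"), (5, "D", "E")]

informed = {"A"}                  # seed individual holding the information
for t, a, b in sorted(contacts):  # process contacts in temporal order
    if a in informed or b in informed:
        informed |= {a, b}        # deterministic transmission on contact

print(informed)   # everyone reachable via a time-respecting path
```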

  12. Overview of the Cancer Genetics and Pathway Curation tasks of BioNLP Shared Task 2013

    PubMed Central

    2015-01-01

    Background Since their introduction in 2009, the BioNLP Shared Task events have been instrumental in advancing the development of methods and resources for the automatic extraction of information from the biomedical literature. In this paper, we present the Cancer Genetics (CG) and Pathway Curation (PC) tasks, two event extraction tasks introduced in the BioNLP Shared Task 2013. The CG task focuses on cancer, emphasizing the extraction of physiological and pathological processes at various levels of biological organization, and the PC task targets reactions relevant to the development of biomolecular pathway models, defining its extraction targets on the basis of established pathway representations and ontologies. Results Six groups participated in the CG task and two groups in the PC task, together applying a wide range of extraction approaches including both established state-of-the-art systems and newly introduced extraction methods. The best-performing systems achieved F-scores of 55% on the CG task and 53% on the PC task, demonstrating a level of performance comparable to the best results achieved in similar previously proposed tasks. Conclusions The results indicate that existing event extraction technology can generalize to meet the novel challenges represented by the CG and PC task settings, suggesting that extraction methods are capable of supporting the construction of knowledge bases on the molecular mechanisms of cancer and the curation of biomolecular pathway models. The CG and PC tasks continue as open challenges for all interested parties, with data, tools and resources available from the shared task homepage. PMID:26202570

  13. A rapid extraction of landslide disaster information research based on GF-1 image

    NASA Astrophysics Data System (ADS)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity, bringing great harm to people's lives. This has attracted the close attention of the state and extensive concern in society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing images can effectively improve the accuracy of information extraction with their rich texture and geometry information. It is therefore feasible to extract information on earthquake-triggered landslides with serious surface damage and large scale. Taking Wenchuan county as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, using the estimation-of-scale-parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution images and selecting the spectral, texture, geometric and landform features of the image, extraction rules are established to extract landslide disaster information. The extraction results show 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information based on high-resolution remote sensing, and it provides important technical support for post-disaster emergency investigation and disaster assessment.

  14. On the problem of zinc extraction from the slags of lead heat

    NASA Astrophysics Data System (ADS)

    Kozyrev, V. V.; Besser, A. D.; Paretskii, V. M.

    2013-12-01

    The possibilities of zinc extraction from the slags of lead heat are studied as applied to the ZAO Karat-TsM lead plant to be built for processing lead ore concentrates. The process of zinc extraction into commercial fumes using the natural-gas slag-fuming technology developed at Gintsvetmet is recommended for this purpose. Technological rules are developed for designing a commercial fuming plant, as applied to the conditions of the ZAO Karat-TsM plant.

  15. DARPA challenge: developing new technologies for brain and spinal injuries

    NASA Astrophysics Data System (ADS)

    Macedonia, Christian; Zamisch, Monica; Judy, Jack; Ling, Geoffrey

    2012-06-01

    The repair of traumatic injuries to the central nervous system remains among the most challenging and exciting frontiers in medicine. In both traumatic brain injury and spinal cord injury, the ultimate goals are to minimize damage and foster recovery. Numerous DARPA initiatives are in progress to meet these goals. The PREventing Violent Explosive Neurologic Trauma program focuses on the characterization of non-penetrating brain injuries resulting from explosive blast, devising predictive models and test platforms, and creating strategies for mitigation and treatment. To this end, animal models of blast-induced brain injury are being established, including swine and non-human primates. Assessment of brain injury in blast-injured humans will provide invaluable information on the motor and cognitive dysfunctions associated with brain injury. The Blast Gauge effort provided a device to measure warfighters' blast exposures, which will contribute to diagnosing the level of brain injury. The program Cavitation as a Damage Mechanism for Traumatic Brain Injury from Explosive Blast developed mathematical models that predict the stresses, strains, and cavitation induced by blast exposures, and is devising mitigation technologies to eliminate injuries resulting from cavitation. The Revolutionizing Prosthetics program is developing an avant-garde prosthetic arm that responds to direct neural control and provides sensory feedback through electrical stimulation. The Reliable Neural-Interface Technology effort will devise technologies to optimally extract information from the nervous system to control next-generation prosthetic devices with high fidelity. The emerging knowledge and technologies arising from these DARPA programs will significantly improve the treatment of brain and spinal cord injured patients.

  16. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    NASA Astrophysics Data System (ADS)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

    A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows determining the QPC model parameters. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to get information about the filamentary pathways associated with LRS in the low voltage conduction regime.
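
    The paper's own numerical scheme for the second derivative is not reproduced here, but a common way to estimate d²I/dV² from a noisy I-V sweep is a Savitzky-Golay filter, as sketched below on a synthetic conduction curve.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
v = np.linspace(0.0, 1.0, 201)   # voltage sweep (V)
dv = v[1] - v[0]
# Synthetic smooth LRS-like conduction branch plus measurement noise.
i = 1e-4 * np.sinh(3.0 * v) + rng.normal(0.0, 1e-7, v.size)

# Polynomial fit in a sliding window, differentiated twice analytically.
d2i_dv2 = savgol_filter(i, window_length=21, polyorder=3, deriv=2, delta=dv)
print(d2i_dv2[::50])   # a few sampled values of the estimated d2I/dV2
```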

  17. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system that performs lane detection for marked urban roads and analysis of their features. The task is to georeference road markings from images obtained with the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm examines these images automatically and rapidly and extracts information on road marks, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  18. Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction

    ERIC Educational Resources Information Center

    Sun, Chong

    2012-01-01

    More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…

  19. Material and Energy Requirement for Rare Earth Production

    NASA Astrophysics Data System (ADS)

    Talens Peiró, Laura; Villalba Méndez, Gara

    2013-10-01

    The use of rare earth metals (REMs) for new applications in renewable and communication technologies has increased concern about future supply as well as the environmental burdens associated with the extraction, use, and disposal (losses) of these metals. Although there are several reports describing and quantifying the production and use of REMs, there is still a lack of quantitative data about the material and energy requirements for their extraction and refining. Such information remains difficult to acquire, as China still provides over 95% of the world's REM supply. This article attempts to estimate the material and energy requirements for the production of REMs based on the theoretical chemical reactions and thermodynamics. The results show that the material and energy requirements vary greatly depending on the type of mineral ore, production facility, and beneficiation process selected. They also show that the greatest loss occurs during mining (25-50%) and beneficiation (10-30%) of RE minerals. We hope that the material and energy balances presented in this article will be of use in life cycle analysis, resource accounting, and other industrial ecology tools used to quantify the environmental consequences of meeting REM demand for new technology products.

  20. [Studies on technology of supercritical-CO2 fluid extraction for volatile oils and saikosaponins in Bupleurum chinense DC].

    PubMed

    Ge, F H; Li, Y; Xie, J M; Li, Q; Ma, G J; Chen, Y H; Lin, Y C; Li, X F

    2000-03-01

    To study the technology of supercritical-CO2 fluid extraction (SFE-CO2) for the volatile oils and saikosaponins in Bupleurum chinense; to explore the effects of pressure, temperature, extraction time, CO2 flow rate, and entrainers on the yield of the oils and saikosaponin-containing extracts; to determine the optimum conditions for SFE-CO2; and to analyze the oils by GC/MS and compare SFE-CO2 with traditional steam distillation. The optimum extraction conditions turned out to be as follows. For volatile oils: extraction pressure (EP) = 20 MPa, extraction temperature (ET) = 30 °C, isolator I at 12 MPa and 65 °C, isolator II at 6 MPa and 40 °C, extraction time = 4 h, and CO2 flow rate = 10-20 kg·(h·kg)⁻¹ of crude drug. For saikosaponins: EP = 30 MPa, ET = 65 °C, isolator I at 12 MPa and 55 °C, isolator II at 6 MPa and 43 °C, extraction time = 3 h, entrainer = 60% ethanol, and CO2 flow rate = 20-25 kg·(h·kg)⁻¹ of crude drug. SFE-CO2 surpasses traditional steam distillation in both yield and extraction time. The oils are composed of 22 constituents, including caproaldehyde, and the saikosaponins can only be extracted with the help of entrainers at higher pressure and temperature.

  1. [Study on the extraction technology and hypoglycemic activity of lectin from Trichosanthes kirilowi].

    PubMed

    Li, Qiong; Ye, Xiao-Li; Zeng, Hong; Chen, Xin; Li, Xue-Gang

    2012-03-01

    To extract lectins from Trichosanthes kirilowi and study their hypoglycemic activity. The parameters of the optimal extraction process were established by optimization analysis, and the lectins were isolated from Trichosanthes kirilowi by ammonium sulfate precipitation. Agglutination activity was determined by an agglutination test with 5% human blood cells. The human hepatocarcinoma cell line HepG2 and the alloxan-induced diabetic mouse model were used to assess the hypoglycemic activity of the lectins. The agglutination index of the lectin extraction buffer was 32, and the cell and mouse tests indicated that the lectins precipitated at 70% saturation exhibited hypoglycemic activity. The optimum extraction technology is as follows: extraction with PBS, a material-to-water ratio of 1:30, an extraction time of 24 h, a sodium chloride concentration of 0 mol/L, and pH 7.2; the lectins are then precipitated by ammonium sulfate at 70% saturation at a centrifugal speed of 10 000. Lectins extracted from Trichosanthes kirilowi thus exhibit appreciable hypoglycemic activity.

  2. Role of modifier in microwave assisted extraction of oleanolic acid from Gymnema sylvestre: application of green extraction technology for botanicals.

    PubMed

    Mandal, Vivekananda; Dewanjee, Saikat; Mandal, Subhash C

    2009-08-01

    This work highlights the development of a green extraction technology for botanicals using microwave energy. Taking into consideration the extensive time involved in conventional extraction methods, coupled with the usage of large volumes of organic solvent and energy resources, an ecofriendly green method that can overcome these problems has been developed. The work compares pretreated and untreated samples for improved yield of oleanolic acid from Gymnema sylvestre leaves. The sample pretreated with water produced 0.71% w/w oleanolic acid in one extraction cycle with 500 W microwave power, 25 mL methanol, and only an 8 min extraction time. In contrast, a conventional heat reflux extraction for 6 hours produced only 0.62% w/w oleanolic acid. The detailed mechanism of extraction has been studied through scanning electron micrographs. The environmental impact of the proposed green method has also been evaluated.

  3. Proposed correlation of modern processing principles for Ayurvedic herbal drug manufacturing: A systematic review.

    PubMed

    Jain, Rahi; Venkatasubramanian, Padma

    2014-01-01

    Quality Ayurvedic herbal medicines are potential, low-cost solutions for addressing the contemporary healthcare needs of both the Indian and the global community. Correlating Ayurvedic herbal preparations with modern processing principles (MPPs) can help develop new technology, and make appropriate use of existing technology, for scaling up production of the medicines, which is necessary to meet the growing demand. Understanding the fundamental Ayurvedic principles behind formulation and processing is also important for improving the dosage forms. Even though the Ayurvedic industry has adopted technologies from the food, chemical, and pharmaceutical industries, there is no systematic study correlating the traditional and modern processing methods. This study is an attempt to provide a possible correlation between Ayurvedic processing methods and MPPs. A systematic literature review was performed to identify the Ayurvedic processing methods by collecting information on medicine preparation methods from English editions of classical Ayurveda texts. Correlation between traditional methods and MPPs was done based on the techniques used in Ayurvedic drug processing. It was observed that Ayurvedic medicine preparation involves two major types of processes, namely extraction and separation. Extraction uses membrane-rupturing and solute-diffusion principles, while separation uses volatility, adsorption, and size-exclusion principles. The study provides systematic documentation of the methods used in Ayurveda for herbal drug preparation, along with their interpretation in terms of MPPs. This is a first step toward improving or replacing traditional techniques. New or existing technologies can be used to improve the dosage forms and to scale up production while maintaining Ayurvedic principles, in keeping with traditional techniques.

  4. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  5. The extraction of bitumen from western oil sands. Annual report, July 1991--July 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oblad, A.G.; Bunger, J.W.; Dahlstrom, D.A.

    1992-08-01

    The University of Utah tar sand research and development program is concerned with research and development on Utah's extensive oil sands deposits. The program is intended to develop the scientific and technological base required for eventual commercial recovery of the heavy oils from oil sands and for processing these oils to produce synthetic crude oil and other products such as asphalt. The overall program is based on mining the oil sand, processing the mined sand to recover the heavy oils, and upgrading them to products. Multiple deposits are being investigated, since it is believed that a large-scale (approximately 20,000 bbl/day) plant would require the use of resources from more than one deposit. The tasks or projects in the program are organized according to the following classification: recovery technologies, which include thermal recovery methods, water extraction methods, and solvent extraction methods; upgrading and processing technologies, which cover hydrotreating, hydrocracking, and hydropyrolysis; solvent extraction; production of specialty products; and environmental aspects of the production and processing technologies. These tasks are covered in this report.

  6. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, mainly thanks to text retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly capture the geometric variance of the objects in the images, so the color information of the objects goes unused. With the development of information technology and the Internet, the majority of retrieval targets are color images, and retrieval performance can therefore be further improved through proper utilization of color information. We propose an improved method based on an analysis of a flaw in the shadow-shading quasi-invariant: its response and performance at object edges under varying lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF built on local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.
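
    As a point of reference, a minimal sketch of the baseline quantity being refined here, assuming the standard definition of the shadow-shading quasi-invariant from the color-invariance literature (the RGB spatial derivative minus its projection onto the chromatic direction f/|f|); the paper's enhancement of the edge response is not reproduced, and the function name and conventions are illustrative.

        import numpy as np

        def shadow_shading_quasi_invariant(img):
            """Per-pixel x-derivative with the shadow-shading component removed.
            img: float array of shape (H, W, 3). The chromatic direction f/|f|
            carries shadow/shading changes; subtracting the projection of the
            derivative onto it leaves the quasi-invariant part."""
            fx = np.gradient(img, axis=1)                      # spatial derivative
            norm = np.linalg.norm(img, axis=2, keepdims=True) + 1e-8
            fhat = img / norm                                  # shadow-shading direction
            proj = (fx * fhat).sum(axis=2, keepdims=True) * fhat
            return fx - proj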

  7. Secure searching of biomarkers through hybrid homomorphic encryption scheme.

    PubMed

    Kim, Miran; Song, Yongsoo; Cheon, Jung Hee

    2017-07-26

    As genome sequencing technology develops rapidly, there has lately been an increasing need to keep genomic data secure even when stored in the cloud and still used for research. We are interested in designing a protocol for the secure outsourced matching problem on encrypted data. We propose an efficient method to securely search for a matching position with the query data and extract some information at that position. After decryption, only a small number of comparisons with the query information need to be performed in the plaintext state. We apply this method to find a set of biomarkers in encrypted genomes. The important feature of our method is to encode a genomic database as a single element of a polynomial ring. Since our method requires a single homomorphic multiplication of the hybrid scheme for query computation, it has an advantage over previous methods in parameter size, computational complexity, and communication cost. In particular, the extraction procedure not only prevents leakage of database information that has not been queried by the user but also reduces the communication cost by half. We evaluate the performance of our method and verify that computation on large-scale personal data can be securely and practically outsourced to a cloud environment during data analysis. It takes about 3.9 s to search-and-extract the reference and alternate sequences at the queried position in a database of size 4M. Our solution for finding a set of biomarkers in DNA sequences shows that cryptographic techniques have progressed to the point where they can support real-world genome data analysis in a cloud environment.
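
    As a rough illustration of the packing idea described above (a whole database encoded as one element of a polynomial ring, with a queried position brought into reach by multiplying with a monomial), here is a plaintext-only Python sketch; no encryption is performed, and the toy ring degree, base encoding, and names are assumptions, not the authors' parameters.

        # Plaintext-only sketch: no encryption, toy sizes.
        N = 16                     # ring degree; real schemes use 2**13 or more
        BASE = {"A": 1, "C": 2, "G": 3, "T": 4}

        def encode(seq):
            """Pack a DNA string into one coefficient vector of Z[x]/(x^N + 1)."""
            coeffs = [0] * N
            for i, ch in enumerate(seq):
                coeffs[i] = BASE[ch]
            return coeffs

        def monomial_mult(coeffs, k):
            """Multiply by x^k in Z[x]/(x^N + 1): a negacyclic rotation."""
            out = [0] * N
            for i, c in enumerate(coeffs):
                j = (i + k) % N
                sign = -1 if ((i + k) // N) % 2 else 1
                out[j] = sign * c
            return out

        db = encode("ACGTTGCA")
        p = 5                                  # queried position
        rotated = monomial_mult(db, N - p)     # x^(N-p) = -x^(-p) in this ring
        print(abs(rotated[0]) == BASE["G"])    # True: the base at position 5 is G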

  8. The Application of Chinese High-Spatial Remote Sensing Satellite Image in Land Law Enforcement Information Extraction

    NASA Astrophysics Data System (ADS)

    Wang, N.; Yang, R.

    2018-04-01

    Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets such as GF-1, GF-2, and ZY-3 images have emerged in recent years; their panchromatic (PAN) resolutions are 2 m, 1 m, and 2.1 m, and their multispectral (MS) resolutions are 8 m, 4 m, and 5.8 m, respectively. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes, and local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Defining illegal land utilization spot types. B. Data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016, and other auxiliary data were collected in 2015. C. Batch processing: HR images were batch-preprocessed through an ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with an ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysis of the results from 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspected illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction, and the processing procedure appears robust.
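
    Step E (obtaining attribute data) was done with an ArcGIS Geoprocessor model; as a hedged stand-in, the sketch below shows the same kind of attribute derivation with the open-source geopandas library. File names, the CRS, and the county attribute field are hypothetical.

        import geopandas as gpd

        # Hypothetical file names and CRS, for illustration only.
        spots = gpd.read_file("illegal_spots.shp")          # digitized spot polygons
        counties = gpd.read_file("county_boundaries.shp")   # auxiliary admin data

        # Work in a metric CRS so areas come out in square meters.
        spots = spots.to_crs(epsg=32648)
        counties = counties.to_crs(epsg=32648)

        spots["area_m2"] = spots.geometry.area

        # Attach a county name (field "NAME" is an assumption) to each spot.
        spots = gpd.sjoin(spots, counties[["NAME", "geometry"]],
                          how="left", predicate="intersects")
        spots.to_file("illegal_spots_attributed.shp")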

  9. a Tool for Crowdsourced Building Information Modeling Through Low-Cost Range Camera: Preliminary Demonstration and Potential

    NASA Astrophysics Data System (ADS)

    Capocchiano, F.; Ravanelli, R.; Crespi, M.

    2017-11-01

    Within the construction sector, Building Information Models (BIMs) are more and more used thanks to the several benefits that they offer in the design of new buildings and the management of the existing ones. Frequently, however, BIMs are not available for already built constructions, but, at the same time, the range camera technology provides nowadays a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies, able to perform the first step of the scan to BIM process, by extracting the geometrical information contained in the 3D models that are so easily collected through the range cameras. In this work, a new algorithm to extract planimetries from the 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure SensorTM. The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy level comprised in the range of 5 - 10 cm. It can be potentially used by non-expert users in the first step of the BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of BIMs Volunteered Geographic Information (VGI) generation.
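
    The paper's exact algorithm is not reproduced here, but a minimal sketch of the generic first step (projecting the range-camera point cloud onto the floor plane and tracing the outline of the occupied cells) gives the flavor. Cell size and the occupancy threshold are assumptions; at a 5 cm cell the output resolution matches the reported 5-10 cm accuracy range.

        import numpy as np

        def room_outline(points, cell=0.05, min_hits=5):
            """Project 3D points (N x 3, meters) onto the XY plane, rasterize
            into an occupancy grid, and return the centers (in meters) of the
            occupied cells that border free space, i.e. the room outline."""
            xy = points[:, :2]
            origin = xy.min(axis=0)
            idx = np.floor((xy - origin) / cell).astype(int)
            h, w = idx.max(axis=0) + 1
            grid = np.zeros((h, w), dtype=int)
            np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)       # count hits per cell
            occ = grid >= min_hits
            padded = np.pad(occ, 1, constant_values=False)
            interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
            outline = occ & ~interior                        # occupied, free neighbor
            return origin + (np.argwhere(outline) + 0.5) * cell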

  10. Ultra-sensitive detection of leukemia by graphene

    NASA Astrophysics Data System (ADS)

    Akhavan, Omid; Ghaderi, Elham; Hashemi, Ehsan; Rahighi, Reza

    2014-11-01

    Graphene oxide nanoplatelets (GONPs) with extremely sharp edges (lateral dimensions ~20-200 nm and thicknesses <2 nm) were applied in extraction of the overexpressed guanine synthesized in the cytoplasm of leukemia cells. The blood serums containing the extracted guanine were used in differential pulse voltammetry (DPV) with reduced graphene oxide nanowall (rGONW) electrodes to develop fast and ultra-sensitive electrochemical detection of leukemia cells at leukemia fractions (LFs) of ~10⁻¹¹ (as the lower detection limit). The stability of the DPV signals obtained by oxidation of the extracted guanine on the rGONWs was studied after 20 cycles. Without the guanine extraction, the DPV peaks relating to guanine oxidation of normal and abnormal cells overlapped at LFs <10⁻⁹, and consequently, the performance of rGONWs alone was limited at this level. As a benchmark, the DPV using glassy carbon electrodes was able to detect only LFs ~10⁻². The ultra-sensitivity obtained by this combination method (guanine extraction by GONPs and then guanine oxidation by rGONWs) is five orders of magnitude better than the sensitivity of the best current technologies (e.g., specific mutations by polymerase chain reaction), which not only are expensive, but also require a few days for diagnosis. Electronic supplementary information (ESI) available. See DOI: 10.1039/C4NR04589K

  11. The Diagnostic Rhyme Test (DRT): An Air Force Implementation

    DTIC Science & Technology

    1978-05-01

    [The record consists of OCR fragments from the report documentation page and response forms.] The system reads each completed response page and extracts the following information: 1. System ID. 2. Listener ID. 3. Word List ID. 4. Speaker ID. 5. Page number. 6. The responses. [A fragment of the DRT word list follows, with items such as GNAW, BOCK, OOZE, THOUGH, CHOOSE, THOSE, COUGH, CHEEP, SINS, THEE, KEEP, THING, ZEE, JILT, DANK, NET, FAD, PINT, BANK, MET, TENT, and DOT.]

  12. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1994-01-01

    The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.

  13. Analysing Customer Opinions with Text Mining Algorithms

    NASA Astrophysics Data System (ADS)

    Consoli, Domenico

    2009-08-01

    Knowing what the customer thinks of a particular product or service helps top management to introduce improvements in processes and products, thus differentiating the company from its competitors and gaining competitive advantage. The customers, with their preferences, determine the success or failure of a company. In order to learn the opinions of customers, we can use technologies available from Web 2.0 (blogs, wikis, forums, chat, social networking, social commerce). From these web sites, useful information must be extracted, for strategic purposes, using techniques of sentiment analysis or opinion mining.

  14. 3D-printed upper limb prostheses: a review.

    PubMed

    Ten Kate, Jelle; Smit, Gerwin; Breedveld, Paul

    2017-04-01

    This paper aims to provide an overview, with quantitative information, of existing 3D-printed upper limb prostheses. We identify the benefits and drawbacks of 3D-printed devices to enable improvement of current devices based on the demands of prosthesis users. A review was performed using Scopus, Web of Science, and websites related to 3D-printing. Quantitative information on the mechanical and kinematic specifications and the 3D-printing technology used was extracted from the papers and websites. The overview (58 devices) provides the general, mechanical, and kinematic specifications of the devices and information regarding the 3D-printing technology used for the hands. The overview shows prostheses for all different upper limb amputation levels, with different types of control and a maximum material cost of $500. A large range of prostheses have been 3D-printed, the majority of which are used by children. Evidence with respect to the user acceptance, functionality, and durability of the 3D-printed hands is lacking. Contrary to what is often claimed, 3D-printing is not necessarily cheap; injection moulding, for example, can be cheaper. Conversely, 3D-printing provides a promising possibility for individualization, e.g., a personalized socket, colour, shape, and size, without the need to adjust the production machine. Implications for rehabilitation: Upper limb deficiency is a condition in which a part of the upper limb is missing, either as a result of a congenital limb deficiency or as a result of an amputation. A prosthetic hand can restore some of the functions of a missing limb and help the user in performing activities of daily living. Using 3D-printing technology is one of the solutions for manufacturing hand prostheses. This overview provides the general, mechanical, and kinematic specifications of the devices, together with information about the 3D-printing technology used to print the hands.

  15. Classification of antecedents towards safety use of health information technology: A systematic review.

    PubMed

    Salahuddin, Lizawati; Ismail, Zuraini

    2015-11-01

    This paper provides a systematic review of the safety use of health information technology (IT). The first objective is to identify the antecedents of safety use of health IT by conducting a systematic literature review (SLR). The second objective is to classify the identified antecedents based on the work system in the Systems Engineering Initiative for Patient Safety (SEIPS) model and an extension of the DeLone and McLean (D&M) information system (IS) success model. An SLR of peer-reviewed scholarly publications between January 2000 and July 2014 was carried out and reported according to the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. Related articles were identified by searching the Science Direct, Medline, EMBASE, and CINAHL databases. Data extracted from the included studies were analysed based on the work system in the SEIPS model and the extended D&M IS success model. 55 articles delineating antecedents that influence the safety use of health IT were included in the review. The antecedents were identified and classified into five key categories: (1) person, (2) technology, (3) tasks, (4) organization, and (5) environment. Specifically, person is attributed by competence, while technology is associated with system quality, information quality, and service quality. Tasks are attributed by task-related stressors. Organisation is related to training, organisational resources, and teamwork. Lastly, environment is attributed by physical layout and noise. This review provides evidence that the antecedents of safety use of health IT originate from both social and technical aspects. Inappropriate health IT usage, however, potentially increases the incidence of errors and produces new safety risks. The review cautions future implementation and adoption of health IT to carefully consider the complex interactions between the social and technical elements present in healthcare settings. Copyright © 2015. Published by Elsevier Ireland Ltd.

  16. Invasive brain-machine interfaces: a survey of paralyzed patients' attitudes, knowledge and methods of information retrieval.

    PubMed

    Lahr, Jacob; Schwartz, Christina; Heimbach, Bernhard; Aertsen, Ad; Rickert, Jörn; Ball, Tonio

    2015-08-01

    Brain-machine interfaces (BMI) are an emerging therapeutic option that can allow paralyzed patients to gain control over assistive technology devices (ATDs). BMI approaches can be broadly classified into invasive (based on intracranially implanted electrodes) and noninvasive (based on skin electrodes or extracorporeal sensors). Invasive BMIs have a favorable signal-to-noise ratio, and thus allow for the extraction of more information than noninvasive BMIs, but they are also associated with the risks related to neurosurgical device implantation. Current noninvasive BMI approaches are typically concerned, among other issues, with long setup times and/or intensive training. Recent studies have investigated the attitudes of paralyzed patients eligible for BMIs, particularly patients affected by amyotrophic lateral sclerosis (ALS). These studies indicate that paralyzed patients are indeed interested in BMIs. Little is known, however, about the degree of knowledge among paralyzed patients concerning BMI approaches or about how patients retrieve information on ATDs. Furthermore, it is not yet clear if paralyzed patients would accept intracranial implantation of BMI electrodes with the premise of decoding improvements, and what the attitudes of a broader range of patients with diseases such as stroke or spinal cord injury are towards this new kind of treatment. Using a questionnaire, we surveyed 131 paralyzed patients for their opinions on invasive BMIs and their attitude toward invasive BMI treatment options. The majority of the patients knew about and had a positive attitude toward invasive BMI approaches. The group of ALS patients was especially open to the concept of BMIs. The acceptance of invasive BMI technology depended on the improvements expected from the technology. Furthermore, the survey revealed that for paralyzed patients, the Internet is an important source of information on ATDs. Websites tailored to prospective BMI users should be further developed to provide reliable information to patients, and also to help to link prospective BMI users with researchers involved in the development of BMI technology.

  17. Invasive brain-machine interfaces: a survey of paralyzed patients’ attitudes, knowledge and methods of information retrieval

    NASA Astrophysics Data System (ADS)

    Lahr, Jacob; Schwartz, Christina; Heimbach, Bernhard; Aertsen, Ad; Rickert, Jörn; Ball, Tonio

    2015-08-01

    Objective. Brain-machine interfaces (BMI) are an emerging therapeutic option that can allow paralyzed patients to gain control over assistive technology devices (ATDs). BMI approaches can be broadly classified into invasive (based on intracranially implanted electrodes) and noninvasive (based on skin electrodes or extracorporeal sensors). Invasive BMIs have a favorable signal-to-noise ratio, and thus allow for the extraction of more information than noninvasive BMIs, but they are also associated with the risks related to neurosurgical device implantation. Current noninvasive BMI approaches are typically concerned, among other issues, with long setup times and/or intensive training. Recent studies have investigated the attitudes of paralyzed patients eligible for BMIs, particularly patients affected by amyotrophic lateral sclerosis (ALS). These studies indicate that paralyzed patients are indeed interested in BMIs. Little is known, however, about the degree of knowledge among paralyzed patients concerning BMI approaches or about how patients retrieve information on ATDs. Furthermore, it is not yet clear if paralyzed patients would accept intracranial implantation of BMI electrodes with the premise of decoding improvements, and what the attitudes of a broader range of patients with diseases such as stroke or spinal cord injury are towards this new kind of treatment. Approach. Using a questionnaire, we surveyed 131 paralyzed patients for their opinions on invasive BMIs and their attitude toward invasive BMI treatment options. Main results. The majority of the patients knew about and had a positive attitude toward invasive BMI approaches. The group of ALS patients was especially open to the concept of BMIs. The acceptance of invasive BMI technology depended on the improvements expected from the technology. Furthermore, the survey revealed that for paralyzed patients, the Internet is an important source of information on ATDs. Significance. Websites tailored to prospective BMI users should be further developed to provide reliable information to patients, and also to help to link prospective BMI users with researchers involved in the development of BMI technology.

  18. Perceptual learning and human expertise

    NASA Astrophysics Data System (ADS)

    Kellman, Philip J.; Garrigan, Patrick

    2009-06-01

    We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual learning in areas such as aviation, mathematics, and medicine. Research in perceptual learning promises to advance scientific accounts of learning, and perceptual learning technology may offer similar promise in improving education.

  19. Neurocontrol and fuzzy logic: Connections and designs

    NASA Technical Reports Server (NTRS)

    Werbos, Paul J.

    1991-01-01

    Artificial neural networks (ANNs) and fuzzy logic are complementary technologies. ANNs extract information from systems to be learned or controlled, while fuzzy techniques mainly use verbal information from experts. Ideally, both sources of information should be combined. For example, one can learn rules in a hybrid fashion, and then calibrate them for better whole-system performance. ANNs offer universal approximation theorems, pedagogical advantages, very high-throughput hardware, and links to neurophysiology. Neurocontrol - the use of ANNs to directly control motors or actuators, etc. - uses five generalized designs, related to control theory, which can work on fuzzy logic systems as well as ANNs. These designs can copy what experts do instead of what they say, learn to track trajectories, generalize adaptive control, and maximize performance or minimize cost over time, even in noisy environments. Design tradeoffs and future directions are discussed throughout.

  20. Use of keyword hierarchies to interpret gene expression patterns.

    PubMed

    Masys, D R; Welsh, J B; Lynn Fink, J; Gribskov, M; Klacansky, I; Corbeil, J

    2001-04-01

    High-density microarray technology permits the quantitative and simultaneous monitoring of thousands of genes. The interpretation challenge is to extract relevant information from this large amount of data. A growing variety of statistical analysis approaches are available to identify clusters of genes that share common expression characteristics, but provide no information regarding the biological similarities of genes within clusters. The published literature provides a potential source of information to assist in interpretation of clustering results. We describe a data mining method that uses indexing terms ('keywords') from the published literature linked to specific genes to present a view of the conceptual similarity of genes within a cluster or group of interest. The method takes advantage of the hierarchical nature of Medical Subject Headings used to index citations in the MEDLINE database, and the registry numbers applied to enzymes.
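
    A toy version of the core idea (surfacing the indexing terms shared by the genes of one expression cluster) might look as follows; the gene-to-MeSH mapping is a hypothetical input that would in practice be built from MEDLINE citations, and the MeSH hierarchy itself is not modeled.

        from collections import Counter

        def shared_keywords(cluster_genes, gene2mesh, top=10):
            """Count how often each indexing term appears across the genes of
            one expression cluster. gene2mesh is a hypothetical mapping from
            gene symbol to a set of MeSH terms gathered from MEDLINE links."""
            counts = Counter()
            for gene in cluster_genes:
                counts.update(gene2mesh.get(gene, ()))
            return counts.most_common(top)

        gene2mesh = {"TP53": {"Apoptosis", "Cell Cycle"},
                     "CDK1": {"Cell Cycle", "Mitosis"}}
        print(shared_keywords(["TP53", "CDK1"], gene2mesh))
        # 'Cell Cycle' is shared by both genes and ranks first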

  1. Versatile electrophoresis-based self-test platform.

    PubMed

    Guijt, Rosanne M

    2015-03-01

    Lab on a Chip technology offers the possibility to extract chemical information from a complex sample in a simple, automated way, without the need for a laboratory setting. In the health care sector, this chemical information could be used as a diagnostic tool, for example to inform dosing. In this issue, the research underpinning a family of electrophoresis-based point-of-care devices for self-testing of ionic analytes in various sample matrices is described [Electrophoresis 2015, 36, 712-721]. Hardware, software, and methodological changes made to improve the overall analytical performance in terms of accuracy, precision, detection limit, and reliability are discussed. In addition to the main focus of lithium monitoring, new applications of the platform are included, among them veterinary use and sodium and creatinine measurements. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. MEMS product engineering: methodology and tools

    NASA Astrophysics Data System (ADS)

    Ortloff, Dirk; Popp, Jens; Schmidt, Thilo; Hahn, Kai; Mielke, Matthias; Brück, Rainer

    2011-03-01

    The development of MEMS comprises the structural design as well as the definition of an appropriate manufacturing process. Technology constraints have a considerable impact on the device design, and vice versa; product design and technology development are therefore concurrent tasks. Based on a comprehensive methodology, the authors introduce a software environment that links commercial design tools from both areas into a common design flow. In this paper, emphasis is put on automatic, low-threshold data acquisition. The intention is to collect and categorize development data for further developments with minimum overhead and minimum disturbance of established business processes. As a first step, software tools are presented that automatically extract data from spreadsheets or file systems and put it in context with existing information. The developments are currently being carried out in a European research project.

  3. A Forensically Sound Adversary Model for Mobile Devices.

    PubMed

    Do, Quang; Martini, Ben; Choo, Kim-Kwang Raymond

    2015-01-01

    In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting a variety of information of forensic interest from both the external and internal storage of the mobile device.

  4. A Forensically Sound Adversary Model for Mobile Devices

    PubMed Central

    Choo, Kim-Kwang Raymond

    2015-01-01

    In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting a variety of information of forensic interest from both the external and internal storage of the mobile device. PMID:26393812

  5. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth; points with small elevation differences within their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road markings extraction from MLS data.
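
    Two of the steps above lend themselves to a compact sketch: the range-adaptive intensity threshold used to pick seed points, and the intensity-based region growing that completes the markings. The thresholds, the decay law, and the brute-force neighbor search are illustrative assumptions, not the paper's exact choices.

        import numpy as np
        from collections import deque

        def marking_seeds(intensity, laser_range, t0=0.8, r0=10.0):
            """Seed detection with a range-adaptive threshold: the full
            threshold t0 applies up to the reference range r0 and decays
            inversely with range beyond it (illustrative decay law)."""
            thresh = t0 * np.minimum(1.0, r0 / np.maximum(laser_range, 1e-6))
            return list(np.nonzero(intensity > thresh)[0])

        def grow_markings(points, intensity, seeds, radius=0.1, drop=0.7):
            """Grow regions from seeds: add XY-neighbors within `radius`
            meters whose intensity stays above `drop` times the current
            point's intensity. Brute-force neighbor search, for clarity."""
            xy = points[:, :2]
            selected = set(int(i) for i in seeds)
            queue = deque(selected)
            while queue:
                i = queue.popleft()
                near = np.nonzero(np.linalg.norm(xy - xy[i], axis=1) < radius)[0]
                for j in near:
                    if int(j) not in selected and intensity[j] >= drop * intensity[i]:
                        selected.add(int(j))
                        queue.append(int(j))
            return sorted(selected)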

  6. Millimeter wave scattering characteristics and radar cross section measurements of common roadway objects

    NASA Astrophysics Data System (ADS)

    Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack

    1995-12-01

    Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a critical sensing issue for automotive millimeter-wave radar, namely the scattering characteristics of common roadway objects such as vehicles, road signs, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers within a single target can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle can be established for common roadway objects. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, the total RCS and the RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.

  7. Membrane contactor assisted extraction/reaction process employing ionic liquids

    DOEpatents

    Lin, Yupo J [Naperville, IL; Snyder, Seth W [Lincolnwood, IL

    2012-02-07

    The present invention relates to a functionalized membrane contactor extraction/reaction system and method for extracting target species from multi-phase solutions utilizing ionic liquids. One preferred embodiment of the invented method and system relates to an extraction/reaction system wherein the ionic liquid extraction solutions act as both extraction solutions and reaction mediums, and allow simultaneous separation/reactions not possible with prior art technology.

  8. The architecture of the management system of complex steganographic information

    NASA Astrophysics Data System (ADS)

    Evsutin, O. O.; Meshcheryakov, R. V.; Kozlova, A. S.; Solovyev, T. M.

    2017-01-01

    The aim of the study is to create a wide-area information system that allows one to control the processes of generation, embedding, extraction, and detection of steganographic information. In this paper, the following problems are considered: the definition of the system scope and the development of its architecture. For the algorithmic core of the system, classic methods of steganography are used to embed information, while methods of mathematical statistics and computational intelligence are used to identify the embedded information. The main result of the paper is the development of the architecture of the management system for complex steganographic information. The suggested architecture utilizes cloud technology in order to provide service through a web service over the Internet. It is meant to process streams of multimedia data from many sources of different types. An information system built in accordance with the proposed architecture will be used in the following areas: hidden transfer of documents protected by medical secrecy in telemedicine systems; copyright protection of online content in public networks; and prevention of information leakage caused by insiders.
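
    The "classic methods of steganography" referred to above are typified by least-significant-bit (LSB) embedding; a self-contained toy example, not drawn from the paper, is sketched below.

        def embed_lsb(pixels, payload_bits):
            """Write one payload bit into the least significant bit of each
            byte. `pixels` is a flat sequence of 0-255 integers."""
            out = list(pixels)
            for i, bit in enumerate(payload_bits):
                out[i] = (out[i] & ~1) | bit
            return out

        def extract_lsb(pixels, n_bits):
            """Recover the first n_bits embedded by embed_lsb."""
            return [p & 1 for p in pixels[:n_bits]]

        cover = [128, 64, 200, 13, 77, 90, 31, 255]
        hidden = [1, 0, 1, 1]
        stego = embed_lsb(cover, hidden)
        assert extract_lsb(stego, 4) == hidden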

  9. A extract method of mountainous area settlement place information from GF-1 high resolution optical remote sensing image under semantic constraints

    NASA Astrophysics Data System (ADS)

    Guo, H., II

    2016-12-01

    Spatial distribution information on mountainous settlements is of great significance to earthquake emergency work, because most of the key earthquake hazard areas of China are located in mountainous terrain. Remote sensing has the advantages of large coverage and low cost, making it an important way to obtain the spatial distribution of mountainous settlements. At present, most studies apply object-oriented methods that exploit geometric, spectral, and texture information to extract settlement information; in this article, semantic constraints are added on top of the object-oriented methods. The experimental data are a single scene from the domestic high-resolution satellite GF-1, with a resolution of 2 meters. The main processing consists of three steps: the first is preprocessing, including orthorectification and image fusion; the second is object-oriented information extraction, including image segmentation and information extraction; the last step is removing erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlements must be analyzed and the spatial-logic relations between settlements and other objects must be considered. The accuracy assessment shows that the extraction accuracy of the object-oriented method is 49% and rises to 86% after the use of semantic constraints. As can be seen from these figures, extraction under semantic constraints can effectively improve the accuracy of mountainous settlement information extraction. The results show that it is feasible to extract mountainous settlement information from GF-1 imagery, demonstrating that domestic high-resolution optical remote sensing imagery has practical value for earthquake emergency preparedness.
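
    The semantic-constraint step can be pictured as a rule-based post-filter over the segmented objects. The sketch below is illustrative only: the attribute names and thresholds are hypothetical stand-ins for the spatial-logic relations (terrain, roads, vegetation) that the paper derives from settlement distribution characteristics.

        def apply_semantic_constraints(objects):
            """Drop candidate settlement objects that violate simple
            spatial-logic rules. Attribute names and thresholds are
            hypothetical stand-ins for the paper's constraints."""
            kept = []
            for obj in objects:
                if obj["mean_slope_deg"] > 30:    # settlements avoid steep slopes
                    continue
                if obj["dist_to_road_m"] > 500:   # usually near a road
                    continue
                if obj["ndvi"] > 0.5:             # dense vegetation, not rooftops
                    continue
                kept.append(obj)
            return kept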

  10. A framework for automatic feature extraction from airborne light detection and ranging data

    NASA Astrophysics Data System (ADS)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract information about different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation-difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on the plane-fitting technique. Raw footprints for the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the topology. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
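
    The progressive morphological filter is described concretely enough to sketch: a grey-scale opening with a growing window, paired with an elevation-difference threshold that grows with it. The sketch below assumes a gridded minimum-elevation surface and illustrative parameter values; it follows that scheme rather than reproducing the dissertation's code.

        import numpy as np
        from scipy.ndimage import grey_opening

        def progressive_morphological_filter(dem, cell=1.0, slope=0.3,
                                             dh0=0.5, dh_max=3.0, max_win=33):
            """Label ground cells of a gridded minimum-elevation surface.
            At each pass the opening window grows, and any cell whose
            elevation exceeds the opened surface by more than the (also
            growing) threshold is marked non-ground; cars, trees, and
            finally large buildings are removed in turn."""
            ground = np.ones(dem.shape, dtype=bool)
            surface = dem.astype(float).copy()
            win, last_win = 3, 1
            while win <= max_win:
                opened = grey_opening(surface, size=(win, win))
                dh = min(dh0 + slope * (win - last_win) * cell, dh_max)
                ground &= ~((surface - opened) > dh)
                surface = opened
                last_win, win = win, 2 * win - 1
            return ground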

  11. Monitoring System for Storm Readiness and Recovery of Test Facilities: Integrated System Health Management (ISHM) Approach

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera; Schmalzel, John

    2010-01-01

    Severe weather events are likely occurrences on the Mississippi Gulf Coast. It is important to rapidly diagnose and mitigate the effects of storms on Stennis Space Center's rocket engine test complex to avoid delays to critical test article programs, reduce costs, and maintain safety. An Integrated Systems Health Management (ISHM) approach and its technologies are employed to integrate environmental (weather) monitoring, structural modeling, and the suite of available facility instrumentation to provide information for readiness before storms, rapid initial damage assessment to guide mitigation planning, on-going assurance as repairs are effected, and, finally, support for recertification. The system is called the Katrina Storm Monitoring System (KStorMS). ISHM describes a comprehensive set of capabilities that provide insight into the behavior and health of a system. Knowing the status of a system allows decision makers to effectively plan and execute their mission. For example, early insight into component degradation and impending failures provides more time to develop workaround strategies and more effectively plan for maintenance. Failures of system elements generally develop over time. Information extracted from sensor data, combined with system-wide knowledge bases and methods for information extraction and fusion, inference, and decision making, can be used to detect incipient failures. If failures do occur, it is critical to detect and isolate them and suggest an appropriate course of action. ISHM enables determining the condition (health) of every element in a complex system-of-systems (SoS) (detect anomalies, diagnose causes, predict future anomalies) and provides data, information, and knowledge (DIaK) to control systems for safe and effective operation. ISHM capability is achieved by using a wide range of technologies that enable anomaly detection, diagnostics, prognostics, and advice for control: (1) anomaly detection algorithms and strategies; (2) fusion of DIaK for anomaly detection (model-based, numerical, statistical, empirical, expert-based, qualitative, etc.); (3) diagnostic/prognostic strategies and methods; (4) user interfaces; (5) advanced control strategies; (6) integration architectures/frameworks; (7) embedding of intelligence. Many of these technologies are mature, and they are being used in the KStorMS. The paper describes the design, implementation, and operation of the KStorMS and discusses further evolution to support other needs such as condition-based maintenance (CBM).
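
    Of the capabilities listed, the statistical anomaly-detection building block is the easiest to make concrete. A minimal sketch follows; the window length and sigma multiplier are illustrative values, not KStorMS settings.

        import numpy as np

        def zscore_anomalies(signal, window=50, k=4.0):
            """Flag samples deviating more than k standard deviations from
            a trailing-window mean. Window length and k are illustrative."""
            flags = np.zeros(len(signal), dtype=bool)
            for i in range(window, len(signal)):
                w = signal[i - window:i]
                mu, sigma = w.mean(), w.std() + 1e-9
                flags[i] = abs(signal[i] - mu) > k * sigma
            return flags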

  12. Multiple Solvent Extraction System with Flow Injection Technology.

    DTIC Science & Technology

    1981-09-30

    [The record consists of OCR fragments from the report.] …encounters a back-extraction step where the direction of the extraction is from organic to aqueous solvent. Thus it is advantageous to incorporate both… …stainless steel (Alltech Associates, Arlington Heights, IL) and prepared from a single section of 180 cm in length. The Section 2 mixing and extraction…

  13. Mining residential water and electricity demand data in Southern California to inform demand management strategies

    NASA Astrophysics Data System (ADS)

    Cominola, A.; Spang, E. S.; Giuliani, M.; Castelletti, A.; Loge, F. J.; Lund, J. R.

    2016-12-01

    Demand-side management strategies are key to meeting future water and energy demands in urban contexts, promoting water and energy efficiency in the residential sector, providing customized services and communications to consumers, and reducing utilities' costs. Smart metering technologies allow gathering water and energy consumption data at high temporal and spatial resolution and support the development of data-driven models of consumers' behavior. Modelling and predicting resource consumption behavior is essential to inform demand management. Yet analyzing big, smart-metered databases requires proper data mining and modelling techniques in order to extract useful information that supports decision makers in spotting the end uses towards which water and energy efficiency or conservation efforts should be prioritized. In this study, we consider the following research questions: (i) how can representative consumer personalities be extracted from big smart-metered water and energy data? (ii) are residential water and energy consumption profiles interconnected? (iii) can we design customized water and energy demand management strategies based on knowledge of water-energy demand profiles and other user-specific psychographic information? To address these research questions, we contribute a data-driven approach to identify and model routines in water and energy consumers' behavior. We propose a novel customer segmentation procedure based on data-mining techniques. Our procedure consists of three steps: (i) extraction of typical water-energy consumption profiles for each household, (ii) clustering of the profiles based on their similarity, and (iii) evaluation of the influence of candidate explanatory variables on the identified clusters. The approach is tested on a dataset of smart-metered water and energy consumption data from over 1000 households in Southern California. Our methodology allows identifying heterogeneous groups of consumers in the studied sample, as well as characterizing them with respect to consumption profile features and socio-demographic information. Results show how such an improved understanding of the user community allows spotting potentially interesting areas for water and energy demand management interventions.
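
    Steps (i) and (ii) of the segmentation procedure can be sketched compactly with standard tools; the data layout, the normalization choice, and the number of clusters below are assumptions for illustration, not the study's settings.

        import numpy as np
        from sklearn.cluster import KMeans

        def typical_profiles(readings, n_days, n_clusters=5):
            """Steps (i)-(ii), sketched: average each household's days into
            one typical 24-h profile, normalize out total volume so that
            clustering compares shapes, then group the profiles with k-means.
            `readings` is a (households, n_days * 24) hourly matrix."""
            daily = readings.reshape(len(readings), n_days, 24)
            profiles = daily.mean(axis=1)
            profiles = profiles / profiles.sum(axis=1, keepdims=True)
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(profiles)
            return profiles, labels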

  14. Using Data Crawlers and Semantic Web to Build Financial XBRL Data Generators: The SONAR Extension Approach

    PubMed Central

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge, combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which leverages the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach. PMID:24587726

  15. Theory research of seam recognition and welding torch pose control based on machine vision

    NASA Astrophysics Data System (ADS)

    Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang

    2017-03-01

    At present, the automation requirements for welding are becoming higher, so a method for extracting welding information with a vision sensor is proposed in this paper and simulated with MATLAB. In addition, to improve the quality of robotic automatic welding, an information retrieval method for welding torch pose control by visual sensor is attempted. Considering the demands of welding technology and engineering practice, the relevant coordinate systems and variables are strictly defined, a mathematical model of the welding pose is established, and its feasibility is verified by MATLAB simulation. These works lay a foundation for the development of a welding off-line programming system with high precision and quality.

  16. Difficulty of distinguishing product states locally

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.

    2017-01-01

    Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.

  17. Using data crawlers and semantic Web to build financial XBRL data generators: the SONAR extension approach.

    PubMed

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Colomo-Palacios, Ricardo; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel; García-Sánchez, Francisco

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge, combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which leverages the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach.

  18. Development of Saudi e-health literacy scale for chronic diseases in Saudi Arabia: using integrated health literacy dimensions.

    PubMed

    Zakaria, Nasriah; AlFakhry, Ohoud; Matbuli, Abeer; Alzahrani, Asma; Arab, Noha Samir Sadiq; Madani, Alaa; Alshehri, Noura; Albarrak, Ahmed I

    2018-05-01

    Health literacy has become a global issue, and it is important that patients and individuals are able to use information technology to access health information and educational services. The research objective is to develop a Saudi e-health literacy scale (SeHL) for measuring e-health literacy among Saudis suffering from non-communicable diseases (NCDs). Overall, 14 relevant papers in related interdisciplinary fields were reviewed to select the most useful literacy dimensions. From these articles, we extracted the most common dimensions used to measure e-health literacy across the disciplines. Multiple workshops with multidisciplinary team members reviewed and evaluated items for the SeHL. Four key aspects of e-health literacy (use of technology/media, information seeking, usefulness, and confidence) were identified and integrated as e-health literacy dimensions. These will be used to measure e-health literacy among Saudi patients with NCDs. A translation from Arabic to English was performed in order to ensure that the translation process was accurate. A SeHL scale was developed to measure e-health literacy among Saudi patients. By understanding e-health literacy levels, we will be able to create a patient-education system to be used by patients in Saudi Arabia. As information technology is increasingly used by people of all ages all over the world, e-health literacy has been identified as a key factor in determining health outcomes. To date, no comprehensive scale exists to assess e-health literacy levels among speakers of Arabic, particularly among people with NCDs such as diabetes, cardiovascular diseases and hypertension.

  19. Efficient extraction strategies of tea (Camellia sinensis) biomolecules.

    PubMed

    Banerjee, Satarupa; Chatterjee, Jyotirmoy

    2015-06-01

    Tea is a popular daily beverage worldwide. Modulation and modification of its basic components, such as catechins, alkaloids, proteins and carbohydrates, during fermentation or extraction changes the organoleptic, gustatory and medicinal properties of tea. Through these processes, increases or decreases in the yield of desired components are evident. Considering the varied impacts of parameters in tea production, storage and processing on yield, extraction of tea biomolecules under optimized conditions is challenging. Implementation of technological advancements in green chemistry approaches can minimize deviation while retaining maximum qualitative properties in an environmentally friendly way. Existing extraction processes for tea and their optimization parameters are discussed in this paper, including their prospects and limitations. This exhaustive review of extraction parameters, the decaffeination process, and large-scale cost-effective isolation of tea components with the aid of modern technology can assist readers in choosing tea extraction conditions according to necessity.

  20. A malware detection scheme based on mining format information.

    PubMed

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach that mines the format information of PE (portable executable) files. Based on an in-depth analysis of the static format information of PE files, we extracted 197 features from the format information and applied feature selection methods to reduce the dimensionality of the features and achieve acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate.
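
    To make the pipeline concrete, here is a hedged sketch using the open-source `pefile` and scikit-learn packages: a handful of PE header fields stand in for the paper's 197 format features, and a random forest stands in for its top classifier. `benign_paths` and `malware_paths` are hypothetical labeled sample lists.

    ```python
    import numpy as np
    import pefile
    from sklearn.ensemble import RandomForestClassifier

    def pe_features(path):
        """Derive a few numeric features from PE format metadata."""
        pe = pefile.PE(path, fast_load=True)
        return [
            pe.FILE_HEADER.NumberOfSections,
            pe.OPTIONAL_HEADER.SizeOfCode,
            pe.OPTIONAL_HEADER.SizeOfInitializedData,
            pe.OPTIONAL_HEADER.AddressOfEntryPoint,
            pe.OPTIONAL_HEADER.DllCharacteristics,
        ]

    def train(benign_paths, malware_paths):
        X = np.array([pe_features(p) for p in benign_paths + malware_paths])
        y = np.array([0] * len(benign_paths) + [1] * len(malware_paths))
        return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    ```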

  1. A Malware Detection Scheme Based on Mining Format Information

    PubMed Central

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach that mines the format information of PE (portable executable) files. Based on an in-depth analysis of the static format information of PE files, we extracted 197 features from the format information and applied feature selection methods to reduce the dimensionality of the features and achieve acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate. PMID:24991639

  2. Data Fusion for Enhanced Aircraft Engine Prognostics and Health Management

    NASA Technical Reports Server (NTRS)

    Volponi, Al

    2005-01-01

    Aircraft gas-turbine engine data are available from a variety of sources, including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data fusion is the integration of data or information from multiple sources for the achievement of improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This report describes a basic PHM data fusion architecture being developed in alignment with the NASA C-17 PHM Flight Test program. Addressing the challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. To address this challenge, NASA Glenn Research Center, NASA Dryden Flight Research Center, and Pratt & Whitney have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion, as it applies to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This report provides a chronology and summary of the work accomplished under this research contract.

  3. Autism, Context/Noncontext Information Processing, and Atypical Development

    PubMed Central

    Skoyles, John R.

    2011-01-01

    Autism has been attributed to a deficit in contextual information processing. Attempts to understand autism in terms of such a deficit, however, do not include more recent computational work on context. This work has identified that context information processing depends upon the extraction and use of the information hidden in higher-order (or indirect) associations. Higher-order associations underlie the cognition of context rather than that of situations. This paper starts by examining the differences between higher-order and first-order (or direct) associations. Higher-order associations link entities not directly (as with first-order ones) but indirectly through all the connections they have via other entities. Extracting this information requires the processing of past episodes as a totality. As a result, this extraction depends upon specialised extraction processes separate from cognition. This information is then consolidated. Due to this difference, the extraction/consolidation of higher-order information can be impaired whilst cognition remains intact. Although not directly impaired, cognition will be indirectly impaired by knock-on effects, such as cognition compensating for absent higher-order information with information extracted from first-order associations. This paper discusses the implications of this for the inflexible, literal/immediate, and inappropriate information processing of autistic individuals. PMID:22937255

  4. Identifying the Critical Time Period for Information Extraction when Recognizing Sequences of Play

    ERIC Educational Resources Information Center

    North, Jamie S.; Williams, A. Mark

    2008-01-01

    The authors attempted to determine the critical time period for information extraction when recognizing play sequences in soccer. Although efforts have been made to identify the perceptual information underpinning such decisions, no researchers have attempted to determine "when" this information may be extracted from the display. The authors…

  5. Can we replace curation with information extraction software?

    PubMed

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current information extraction programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.

  6. [Study on the extraction of the total alkaloids from Caulophyllum robustum].

    PubMed

    Li, Yi-ping; Yang, Guang-de; He, Lang-chong

    2007-02-01

    To study the technological parameters of the extraction process for the total alkaloids from Caulophyllum robustum. Taspine, which is the main component of the total alkaloids from Caulophyllum robustum, was selected as an evaluating marker and determined by HPLC. An orthogonal test was used to optimize the extracting conditions in the acid-water extraction process, and the conditions for purification using a cation exchange resin were then investigated. The optimized acid-water extraction conditions were 1% hydrochloric acid at seven times the amount of the crude drug, extracted for 24 h, three times. The acid-water extract was then purified on a column of macroporous cation exchange resin LSD001 at a flow rate of 2 ml/min and eluted with 10 BV of 4% aqueous ammonia in ethanol. The extraction ratio of the total alkaloids was 1.35% and the taspine content of the total alkaloids was 6.80%. This technology is simple, cheap, effective and feasible for large-scale manufacture.

  7. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    NASA Astrophysics Data System (ADS)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution by building a wireless SHM technique in conjunction with acoustic emission (AE), with field deployment on the structure of a wind turbine. This solution requires only a sampling rate lower than the Nyquist rate. In addition, features extracted from the aliased AE signals, rather than reconstructions of the original signals on board the wireless nodes, are exploited to monitor AE events such as wind, rain, strong hail, and bird strikes under different environmental conditions, in conjunction with artificial AE sources. A time-domain feature extraction algorithm, together with the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to recognise the testing condition represented by the response signals. This proposed novel technique yields a significant data reduction during the monitoring of wind turbine blades.
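
    A rough sketch of that feature path, on synthetic records: simple time-domain features are computed per AE record and projected with PCA into a low-dimensional pattern space for classification. The feature set is an illustrative assumption, not the chapter's exact algorithm.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def time_features(x):
        """A few simple time-domain features of one AE record."""
        x = np.asarray(x, dtype=float)
        rms = np.sqrt(np.mean(x**2))
        peak = np.max(np.abs(x))
        crest = peak / rms if rms > 0 else 0.0
        zcr = np.mean(np.diff(np.signbit(x).astype(int)) != 0)  # zero-cross rate
        kurt = np.mean((x - x.mean())**4) / np.var(x)**2
        return [rms, peak, crest, zcr, kurt]

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for three event classes at different energies.
    records = [rng.normal(scale=s, size=512) for s in (0.5, 1.0, 2.0) for _ in range(20)]
    X = np.array([time_features(r) for r in records])
    scores = PCA(n_components=2).fit_transform(X)   # low-dimensional pattern space
    print(scores.shape)                             # (60, 2)
    ```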

  8. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
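
    The local surface-fitting step can be sketched as follows: a weighted (moving) least-squares fit of a paraboloid z = ax^2 + bxy + cy^2 + dx + ey + f to a point's neighborhood, with curvatures at the query point read off the fitted coefficients. Neighborhood search and the later projection and polyline-growing stages are omitted; this is a sketch of the general technique, not the authors' exact implementation.

    ```python
    import numpy as np

    def paraboloid_curvatures(neighbors, h=1.0):
        """neighbors: (n, 3) points in a local frame centered on the query
        point, with z roughly along the surface normal; h: MLS kernel width."""
        x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
        A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
        sw = np.sqrt(np.exp(-(x*x + y*y) / (2.0 * h * h)))  # MLS weights
        coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        a, b, c, d, e, _ = coef
        fxx, fxy, fyy, fx, fy = 2*a, b, 2*c, d, e
        g = 1.0 + fx*fx + fy*fy
        K = (fxx*fyy - fxy*fxy) / (g*g)                     # Gaussian curvature
        H = ((1+fy*fy)*fxx - 2*fx*fy*fxy + (1+fx*fx)*fyy) / (2*g**1.5)
        return K, H     # candidate valley/ridge points show extreme curvature
    ```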

  9. Sediment certified reference materials for the determination of polychlorinated biphenyls and organochlorine pesticides from the National Metrology Institute of Japan (NMIJ).

    PubMed

    Numata, Masahiko; Yarita, Takashi; Aoyagi, Yoshie; Tsuda, Yoko; Yamazaki, Misako; Takatsu, Akiko; Ishikawa, Keiichiro; Chiba, Koichi; Okamaoto, Kensaku

    2007-04-01

    Two marine sediment certified reference materials, NMIJ CRM 7304-a and 7305-a, have been issued by the National Metrology Institute of Japan in the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) for the determination of polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). The raw materials of the CRMs were collected from a bay near industrial activity in Japan. Characterization of these CRMs was conducted by NMIJ, where the sediments were analyzed using multiple analytical methods such as pressurized liquid extraction (PLE), microwave-assisted extraction (MAE), saponification, Soxhlet extraction, supercritical fluid extraction (SFE), and ultrasonic extraction; the target compounds were determined by one of the primary methods of measurements, isotope dilution-mass spectrometry (ID-MS). Certified values have been provided for 14 PCB congeners (PCB numbers 3, 15, 28, 31, 70, 101, 105, 138, 153, 170, 180, 194, 206, 209) and 4 OCPs (gamma-HCH, 4,4'-DDT, 4,4'-DDE, 4,4'-DDD) in both CRMs. NMIJ CRM 7304-a has concentrations of the contaminants that are a factor of 2-15 greater than in CRM 7305-a. Both CRMs have information values for PCB homolog concentrations determined by collaborative analysis using a Japanese official method for determination of PCBs. The total PCB concentrations in the CRMs are approximately 920 and 86 microg kg(-1) dry mass respectively.

  10. Extraction of drainage networks from large terrain datasets using high throughput computing

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
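
    For orientation, the serial kernel that such a system parallelizes looks like D8 flow-direction assignment, where each DEM cell drains to its steepest-descent neighbor; the paper's contribution (watershed-based decomposition and HTC merging) sits on top of kernels of this kind. A minimal sketch:

    ```python
    import numpy as np

    def d8_flow_direction(dem):
        """dem: 2D array of elevations. Returns an array of neighbor indices
        0..7 (E, SE, S, SW, W, NW, N, NE), or -1 for pits and flats."""
        offsets = [(0, 1), (1, 1), (1, 0), (1, -1),
                   (0, -1), (-1, -1), (-1, 0), (-1, 1)]
        rows, cols = dem.shape
        direction = -np.ones((rows, cols), dtype=int)
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                # Elevation drop per unit distance toward each neighbor.
                drops = [(dem[r, c] - dem[r+dr, c+dc]) / np.hypot(dr, dc)
                         for dr, dc in offsets]
                best = int(np.argmax(drops))
                if drops[best] > 0:
                    direction[r, c] = best
        return direction
    ```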

  11. Development of Hospital-based Data Sets as a Vehicle for Implementation of a National Electronic Health Record

    PubMed Central

    Keikha, Leila; Farajollah, Seyede Sedigheh Seied; Safdari, Reza; Ghazisaeedi, Marjan; Mohammadzadeh, Niloofar

    2018-01-01

    Background: In developing countries such as Iran, international standards offer good sources to survey and use for appropriate planning in the domain of electronic health records (EHRs). Therefore, in this study, HL7 and ASTM standards were considered as the main sources from which to extract EHR data. Objective: The objective of this study was to propose a hospital data set for a national EHR consisting of data classes and data elements by adjusting data sets extracted from the standards and paper-based records. Method: This comparative study was carried out in 2017 by studying the contents of the paper-based records approved by the health ministry in Iran and the international ASTM and HL7 standards in order to extract a minimum hospital data set for a national EHR. Results: As a result of studying the standards and paper-based records, a total of 526 data elements in 174 classes were extracted. An examination of the data indicated that the highest number of extracted data came from the free text elements, both in the paper-based records and in the standards related to the administrative data. The major sources of data extracted from ASTM and HL7 were the E1384 and HL7 V.x standards, respectively. In the paper-based records, data were extracted from 19 forms sporadically. Discussion: By declaring the confidentiality of information, the ASTM standards acknowledge the issue of confidentiality of information as one of the main challenges of EHR development, and propose new types of admission, such as teleconference, tele-video, and home visit, which are inevitable with the advent of new technology for providing healthcare and treating diseases. Data related to finance and insurance, which were scattered in different categories by three organizations, emerged as the financial category. Documenting the role and responsibility of the provider by adding the authenticator/signature data element was deemed essential. Conclusion: Not only using well-defined and standardized data, but also adapting EHR systems to the local facilities and the existing social and cultural conditions, will facilitate the development of structured data sets. PMID:29618962

  12. Development of Hospital-based Data Sets as a Vehicle for Implementation of a National Electronic Health Record.

    PubMed

    Keikha, Leila; Farajollah, Seyede Sedigheh Seied; Safdari, Reza; Ghazisaeedi, Marjan; Mohammadzadeh, Niloofar

    2018-01-01

    In developing countries such as Iran, international standards offer good sources to survey and use for appropriate planning in the domain of electronic health records (EHRs). Therefore, in this study, HL7 and ASTM standards were considered as the main sources from which to extract EHR data. The objective of this study was to propose a hospital data set for a national EHR consisting of data classes and data elements by adjusting data sets extracted from the standards and paper-based records. This comparative study was carried out in 2017 by studying the contents of the paper-based records approved by the health ministry in Iran and the international ASTM and HL7 standards in order to extract a minimum hospital data set for a national EHR. As a result of studying the standards and paper-based records, a total of 526 data elements in 174 classes were extracted. An examination of the data indicated that the highest number of extracted data came from the free text elements, both in the paper-based records and in the standards related to the administrative data. The major sources of data extracted from ASTM and HL7 were the E1384 and HL7 V.x standards, respectively. In the paper-based records, data were extracted from 19 forms sporadically. By declaring the confidentiality of information, the ASTM standards acknowledge the issue of confidentiality of information as one of the main challenges of EHR development, and propose new types of admission, such as teleconference, tele-video, and home visit, which are inevitable with the advent of new technology for providing healthcare and treating diseases. Data related to finance and insurance, which were scattered in different categories by three organizations, emerged as the financial category. Documenting the role and responsibility of the provider by adding the authenticator/signature data element was deemed essential. Not only using well-defined and standardized data, but also adapting EHR systems to the local facilities and the existing social and cultural conditions, will facilitate the development of structured data sets.

  13. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
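
    A hedged sketch of the same idea with the open-source `pydicom` package (the study used a Matlab-based tool): walk the content tree of a CT Radiation Dose Structured Report and collect named numeric values such as CTDIvol and DLP. Content-item names vary by vendor, so the keys returned here are not guaranteed.

    ```python
    import pydicom

    def dose_values(path):
        """Collect named numeric values from a dose SR's content tree."""
        ds = pydicom.dcmread(path)
        found = {}

        def walk(items):
            for item in items:
                name = None
                if "ConceptNameCodeSequence" in item:
                    name = item.ConceptNameCodeSequence[0].CodeMeaning
                if name and "MeasuredValueSequence" in item:
                    found[name] = float(item.MeasuredValueSequence[0].NumericValue)
                if "ContentSequence" in item:     # recurse into nested items
                    walk(item.ContentSequence)

        walk(getattr(ds, "ContentSequence", []))
        return found

    # e.g. dose_values("rdsr.dcm") might return {"Mean CTDIvol": 12.3, "DLP": 456.7}
    ```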

  14. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
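
    One plausible reading of the mapping step, sketched below: project the 3D points into the registered optical image with a pinhole model so that 2-D facade features can be associated with the 3-D points behind them. The camera intrinsics K and pose (R, t) are assumed known from registration; this is not the authors' exact formulation.

    ```python
    import numpy as np

    def project_points(points, K, R, t):
        """points: (n, 3) world coordinates -> (n, 2) pixel coordinates."""
        cam = points @ R.T + t            # world -> camera frame
        uvw = cam @ K.T                   # camera -> homogeneous pixels
        return uvw[:, :2] / uvw[:, 2:3]   # perspective division

    # Points whose projections fall on extracted 2-D facade features would be
    # kept as the initial 3-D facade features, before structural refinement.
    ```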

  15. Construction of a database for published phase II/III drug intervention clinical trials for the period 2009-2014 comprising 2,326 records, 90 disease categories, and 939 drug entities.

    PubMed

    Jeong, Sohyun; Han, Nayoung; Choi, Boyoon; Sohn, Minji; Song, Yun-Kyoung; Chung, Myeon-Woo; Na, Han-Sung; Ji, Eunhee; Kim, Hyunah; Rhew, Ki Yon; Kim, Therasa; Kim, In-Wha; Oh, Jung Mi

    2016-06-01

    To construct a database of published clinical drug trials suitable for use (1) as a research tool for accessing clinical trial information and (2) in evidence-based decision-making by regulatory professionals, clinical research investigators, and medical practitioners. Comprehensive information was obtained from a search of the design elements and results of clinical trials in peer-reviewed journals using PubMed (http://www.ncbi.nlm.nih.gov/pubmed). The methodology to develop a structured database was devised by a panel composed of experts in the medical, pharmaceutical, and information technology fields and members of the Ministry of Food and Drug Safety (MFDS) using a step-by-step approach. A double-sided system consisting of a user mode and a manager mode served as the framework for the database; elements of interest from each trial were entered via the secure manager mode, enabling the input information to be accessed in a user-friendly manner (user mode). Information regarding the methodology used and the results of drug treatment was extracted as the detail elements of each data set and then entered into the web-based database system. Comprehensive information comprising 2,326 clinical trial records, 90 disease states, and 939 drug entities, concerning study objectives, background, methods used, results, and conclusions, could be extracted from published information on phase II/III drug intervention clinical trials appearing in SCI journals within the last 10 years. The extracted data were successfully assembled into a clinical drug trial database with easy access, suitable for use as a research tool. The clinically most important therapeutic categories, i.e., cancer, cardiovascular, respiratory, neurological, metabolic, urogenital, gastrointestinal, psychological, and infectious diseases, are covered by the database. Names of test and control drugs, details of primary and secondary outcomes, and indexed keywords can also be retrieved from the database. The construction used in the database enables the user to sort and download targeted information as a Microsoft Excel spreadsheet. Because of the comprehensive and standardized nature of the clinical drug trial database and its ease of access, it should serve as a valuable information repository and research tool for accessing clinical trial information and making evidence-based decisions by regulatory professionals, clinical research investigators, and medical practitioners.

  16. A Unique Master's Program in Combined Nuclear Technology and Nuclear Chemistry at Chalmers University of Technology, Sweden

    NASA Astrophysics Data System (ADS)

    Skarnemark, Gunnar; Allard, Stefan; Ekberg, Christian; Nordlund, Anders

    2009-08-01

    The need for engineers and scientists who can ensure the safe and secure use of nuclear energy is large, in Sweden and internationally. Chalmers University of Technology is therefore launching a new 2-year master's program in Nuclear Engineering, starting in the autumn of 2009. The program is open to Swedish and foreign students. It begins with compulsory courses dealing with the basics of nuclear chemistry and physics, radiation protection, nuclear power and reactors, nuclear fuel supply, nuclear waste management, and nuclear safety and security. There are also compulsory courses in nuclear industry applications and sustainable energy futures. The subsequent elective courses can be chosen freely, but there is also the possibility to follow informal tracks concentrating on nuclear chemistry or on reactor technology and physics. The nuclear chemistry track comprises courses in, e.g., the chemistry of lanthanides, actinides and transactinides, solvent extraction, radioecology, radioanalytical chemistry, and radiopharmaceuticals. The program finishes with a one-semester thesis project. This is probably a unique master's program in its combination of in-depth courses in both nuclear technology and nuclear chemistry.

  17. A Pilot Study on Developing a Standardized and Sensitive School Violence Risk Assessment with Manual Annotation.

    PubMed

    Barzman, Drew H; Ni, Yizhao; Griffey, Marcus; Patel, Bianca; Warren, Ashaki; Latessa, Edward; Sorter, Michael

    2017-09-01

    School violence has increased over the past decade, and innovative, sensitive, and standardized approaches to assess school violence risk are needed. In our current feasibility study, we initialized a standardized, sensitive, and rapid school violence risk approach with manual annotation. Manual annotation is the process of analyzing a student's transcribed interview to extract information (e.g., key words) relevant to school violence risk levels that is associated with students' behaviors, attitudes, feelings, use of technology (social media and video games), and other activities. In this feasibility study, we first implemented school violence risk assessments to evaluate risk levels by interviewing the student and parent separately at the school or the hospital to complete our novel school safety scales. We completed 25 risk assessments, resulting in 25 transcribed interviews of 12-18 year olds from 15 schools in Ohio and Kentucky. We then analyzed structured professional judgments, language, and patterns associated with school violence risk levels by using manual annotation and statistical methodology. To analyze the student interviews, we initiated the development of an annotation guideline to extract key information associated with students' behaviors, attitudes, feelings, use of technology, and other activities. Statistical analysis was applied to associate the significant categories with students' risk levels in order to identify key factors, which will help with developing action steps to reduce risk. In a future study, we plan to recruit more subjects in order to fully develop the manual annotation, which will result in a more standardized and sensitive approach to school violence assessment.

  18. [Technologies for Complex Intelligent Clinical Data Analysis].

    PubMed

    Baranov, A A; Namazova-Baranova, L S; Smirnov, I V; Devyatkin, D A; Shelmanov, A O; Vishneva, E A; Antonova, E V; Smirnov, V I

    2016-01-01

    The paper presents a system for the intelligent analysis of clinical information. The authors describe methods implemented in the system for clinical information retrieval, intelligent diagnostics of chronic diseases, assessment of the importance of patient features, and detection of hidden dependencies between features. Results of the experimental evaluation of these methods are also presented. Healthcare facilities generate a large flow of both structured and unstructured data which contain important information about patients. Test results are usually retained as structured data, but some data are retained in the form of natural-language texts (medical history, the results of physical examination, and the results of other examinations, such as ultrasound, ECG or X-ray studies). Many tasks arising in clinical practice can be automated by applying methods for intelligent analysis of the accumulated structured and unstructured data, leading to improvement of healthcare quality. The aim of this work is the creation of a complex system for intelligent data analysis in a multi-disciplinary pediatric center. The authors propose methods for information extraction from clinical texts in Russian, carried out on the basis of deep linguistic analysis. The methods retrieve terms for diseases, symptoms, areas of the body, and drugs, and can recognize additional attributes such as "negation" (indicates that the disease is absent), "no patient" (indicates that the disease refers to a family member of the patient, but not to the patient), "severity of illness", "disease course", and "body region to which the disease refers". The authors use a set of hand-crafted templates and various techniques based on machine learning to retrieve information using a medical thesaurus. The extracted information is used to solve the problem of automatic diagnosis of chronic diseases. A machine learning method for classification of patients with similar nosology and a method for determining the most informative patient features are also proposed. The authors processed anonymized health records from the pediatric center to evaluate the proposed methods. The results show the applicability of the information extracted from the texts for solving practical problems. The records of patients with allergic, glomerular and rheumatic diseases were used for the experimental assessment of the automatic diagnostic method. The authors also determined the most appropriate machine learning methods for classification of patients for each group of diseases, as well as the most informative disease signs. It was found that using additional information extracted from clinical texts together with structured data helps to improve the quality of diagnosis of chronic diseases. The authors also obtained typical combinations of disease signs. The proposed methods have been implemented in the intelligent data processing system of a multidisciplinary pediatric center. The experimental results show the ability of the system to improve the quality of pediatric healthcare.
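
    The "negation" attribute can be illustrated with a crude sentence-level rule in the spirit of NegEx; this toy stand-in only hints at the deep linguistic analysis the authors describe.

    ```python
    import re

    NEGATION_CUES = {"no", "denies", "without", "absent"}

    def tag_mentions(text, disease_terms):
        """Flag disease mentions as negated if their sentence has a cue."""
        mentions = []
        for sentence in re.split(r"[.;]", text.lower()):
            tokens = [t.strip(",:") for t in sentence.split()]
            negated = any(t in NEGATION_CUES for t in tokens)
            for t in tokens:
                if t in disease_terms:
                    mentions.append({"term": t, "negated": negated})
        return mentions

    print(tag_mentions("No cough today. Fever is present.", {"cough", "fever"}))
    # -> [{'term': 'cough', 'negated': True}, {'term': 'fever', 'negated': False}]
    ```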

  19. Subcritical Water Technology for Enhanced Extraction of Biochemical Compounds from Chlorella vulgaris

    PubMed Central

    Awaluddin, S. A.; Thiruvenkadam, Selvakumar; Izhar, Shamsul; Hiroyuki, Yoshida; Danquah, Michael K.; Harun, Razif

    2016-01-01

    Subcritical water extraction (SWE) technology has been used for the extraction of active compounds from different biomass materials with low process cost, mild operating conditions, short process times, and environmental sustainability. With the limited application of the technology to microalgal biomass, this work investigates parametrically the potential of subcritical water for high-yield extraction of biochemicals such as carbohydrates and proteins from microalgal biomass. The SWE process was optimized using central composite design (CCD) under varying process conditions of temperature (180–374°C), extraction time (1–20 min), biomass particulate size (38–250 μm), and microalgal biomass loading (5–40 wt.%). The Chlorella vulgaris used in this study shows high volatile matter (83.5 wt.%) and carbon content (47.11 wt.%), giving it an advantage as a feedstock for biofuel production. The results showed maximum total carbohydrate and protein yields of 14.2 g/100 g and 31.2 g/100 g, respectively, achieved under process conditions of 277°C, 5% microalgal biomass loading, and 5 min extraction time. Statistical analysis revealed that, of all the parameters investigated, temperature is the most critical during SWE of microalgal biomass for protein and carbohydrate production. PMID:27366748

  20. Drop-on-Demand Single Cell Isolation and Total RNA Analysis

    PubMed Central

    Moon, Sangjun; Kim, Yun-Gon; Dong, Lingsheng; Lombardi, Michael; Haeggstrom, Edward; Jensen, Roderick V.; Hsiao, Li-Li; Demirci, Utkan

    2011-01-01

    Technologies that rapidly isolate viable single cells from heterogeneous solutions have significantly contributed to the field of medical genomics. Challenges remain both in enabling efficient extraction, isolation and patterning of single cells from heterogeneous solutions and in keeping them alive during the process, owing to the limited degree of control over single-cell manipulation. Here, we present a microdroplet-based method to isolate and pattern single cells from heterogeneous cell suspensions (10% target cell mixture), preserve the viability of the extracted cells (97.0±0.8%), and obtain genomic information from the isolated cells compared to non-patterned controls. The cell encapsulation process is analyzed both experimentally and theoretically. Using the isolated cells, we identified 11 stem cell markers among 1000 genes and compared them to the controls. This automated platform enabling high-throughput cell manipulation for subsequent genomic analysis employs fewer handling steps than existing methods. PMID:21412416

  1. A signal processing framework for simultaneous detection of multiple environmental contaminants

    NASA Astrophysics Data System (ADS)

    Chakraborty, Subhadeep; Manahan, Michael P.; Mench, Matthew M.

    2013-11-01

    The possibility of large-scale attacks using chemical warfare agents (CWAs) has exposed the critical need for fundamental research enabling the reliable, unambiguous and early detection of trace CWAs and toxic industrial chemicals. This paper presents a unique approach for the identification and classification of multiple, simultaneously present environmental contaminants by perturbing an electrochemical (EC) sensor with an oscillating potential to extract statistically rich information from the current response. The dynamic response, being a function of the degree and mechanism of contamination, is then processed with a symbolic dynamic filter to extract representative patterns, which are classified using a trained neural network. The approach presented in this paper promises to extend the sensing power and sensitivity of these EC sensors by augmenting and complementing the sensor technology with state-of-the-art embedded real-time signal processing capabilities.
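
    The symbolic dynamic filtering step can be sketched as follows: quantize the sensor's current response into a small symbol alphabet and use the empirical symbol-transition histogram (a depth-1 D-Markov machine) as the pattern vector handed to the neural network. The alphabet size and partitioning scheme are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def sdf_features(signal, n_symbols=8):
        """Symbolic dynamic filtering: signal -> transition-probability vector."""
        # Maximum-entropy partition: equal-frequency bins over the signal range.
        edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
        symbols = np.digitize(signal, edges)
        # Depth-1 D-Markov machine: normalized symbol-transition counts.
        counts = np.zeros((n_symbols, n_symbols))
        for s0, s1 in zip(symbols[:-1], symbols[1:]):
            counts[s0, s1] += 1
        return (counts / counts.sum()).ravel()

    # Pattern vectors from known contaminant conditions would train the neural
    # network; an unknown response is then classified by its SDF pattern.
    rng = np.random.default_rng(1)
    print(sdf_features(rng.normal(size=2000)).shape)   # (64,)
    ```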

  2. A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"

    NASA Astrophysics Data System (ADS)

    Das, Rig; Tuithung, Themrichon

    2013-03-01

    This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding", International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image is divided into non-overlapping blocks and a two-dimensional Discrete Cosine Transformation (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman-encoded bit stream and Huffman table
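
    The embedding idea under review can be sketched as follows: block-wise 2-D DCT of the cover image, with one secret bit hidden per block in the least significant bit of a rounded mid-frequency coefficient. The coefficient position (4, 3) and the omission of the Huffman coding stage are simplifications of the scheme.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed(cover, bits, u=4, v=3):
        """cover: 2-D uint8 array with sides divisible by 8; bits: iterable
        of 0/1 values, at most one per 8x8 block (row-major order)."""
        stego = cover.astype(float).copy()
        it = iter(bits)
        for r in range(0, cover.shape[0], 8):
            for c in range(0, cover.shape[1], 8):
                bit = next(it, None)
                if bit is None:                      # ran out of secret bits
                    break
                block = dctn(stego[r:r+8, c:c+8], norm="ortho")
                coef = int(round(block[u, v]))
                block[u, v] = (coef & ~1) | bit      # overwrite the LSB
                stego[r:r+8, c:c+8] = idctn(block, norm="ortho")
        return np.clip(np.round(stego), 0, 255).astype(np.uint8)
    ```

    Note that the inverse DCT followed by rounding back to 8-bit pixels need not preserve the embedded LSBs, which is essentially the recoverability problem the review identifies.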

  3. Quantitative proteomics in biological research.

    PubMed

    Wilm, Matthias

    2009-10-01

    Proteomics has enabled the direct investigation of biological material, at first through the analysis of individual proteins, then of lysates from cell cultures, and finally of extracts from tissues and biopsies from entire organisms. Its latest manifestation - quantitative proteomics - allows deeper insight into biological systems. This article reviews the different methods used to extract quantitative information from mass spectra. It follows the technical developments aimed toward global proteomics, the attempt to characterize every expressed protein in a cell by at least one peptide. When applications of the technology are discussed, the focus is placed on yeast biology. In particular, differential quantitative proteomics, the comparison between an experiment and its control, is very discriminating for proteins involved in the process being studied. When trying to understand biological processes on a molecular level, differential quantitative proteomics tends to give a clearer picture than global transcription analyses. As a result, MS has become an even more indispensable tool for biochemically motivated biological research.

  4. Understanding Unintended Consequences and Health Information Technology:

    PubMed Central

    Randell, R.; Borycki, E. M.

    2016-01-01

    Objective: No framework exists to identify and study unintended consequences (UICs) with a focus on organizational and social issues (OSIs). To address this shortcoming, we conducted a literature review to develop a framework for considering UICs and health information technology (HIT) from the perspective of OSIs. Methods: A literature review was conducted for the period 2000-2015 using the search terms "unintended consequences" and "health information technology". 67 papers were screened, of which 18 met inclusion criteria. Data extraction was focused on the types of technologies studied, types of UICs identified, and methods of data collection and analysis used. A thematic analysis was used to identify themes related to UICs. Results: We identified two overarching themes. The first was the definition and terminology of how people classify and discuss UICs. The second was OSIs and UICs. For the OSI theme, we also identified four sub-themes: process change and evolution, individual-collaborative interchange, context of use, and approaches to model, study, and understand UICs. Conclusions: While there is a wide body of research on UICs, there is a lack of overall consensus on how they should be classified and reported, limiting our ability to understand the implications of UICs and how to manage them. More mixed-methods research and better proactive identification of UICs remain priorities. Our findings and framework of OSI considerations for studying UICs and HIT extend existing work on HIT and UICs by focusing on organizational and social issues. PMID:27830231

  5. Development of an Information Fusion System for Engine Diagnostics and Health Management

    NASA Technical Reports Server (NTRS)

    Volponi, Allan J.; Brotherton, Tom; Luppold, Robert; Simon, Donald L.

    2004-01-01

    Aircraft gas-turbine engine data are available from a variety of sources including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data Fusion is the integration of data or information from multiple sources, to achieve improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This paper describes a basic PHM Data Fusion architecture being developed in alignment with the NASA C17 Propulsion Health Management (PHM) Flight Test program. The challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. To address this challenge, NASA Glenn Research Center (GRC), NASA Dryden Flight Research Center (DFRC) and Pratt & Whitney (P&W) have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion as applied to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This paper will provide a broad overview of this work, discuss some of the methodologies employed and give some illustrative examples.

  6. Development of technical means for directional hydraulic fracturing with shearing loading of borehole walls

    NASA Astrophysics Data System (ADS)

    Rybalkin, LA; Patutin, AV; Patutin, DV

    2018-03-01

    In the mining of mineral deposits, one of the most important conditions for the safe and economically profitable operation of a mining enterprise is obtaining timely information on the stress state of the rock mass being developed. One of the most common methods for remote study of the geomechanical state of a rock mass is hydraulic fracturing of the formation. Directional hydraulic fracturing is a variant of the method employed to form cracks across production wells. This technology has been most widely used in the gas industry to extract gas from shale formations. In mining, it is used to set up filtration screens, to assist degassing, and to soften the hard roof of coal seams. Possible practical applications include extending the technology to intensify the production of viscous oil, to leach non-ferrous metals, to create anti-filtration screens of various purposes in the rock mass, and to measure stresses acting along wells.

  7. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used for making decisions about possible future actions. It is therefore not surprising that, when solving computer vision tasks, we should take into account the particulars of their subsequent use in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies, in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.

  8. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With users' increasing demand for the extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging ability by using an integrated design to achieve a compact structure, light weight and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a highly mobile remote sensing camera. According to the technical requirements of a highly mobile remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on a variety of technologies, such as high-speed and high-density mixed analog-digital PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for the research of highly mobile remote sensing cameras.

  9. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16-channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA including PowerPC440 processors we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
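
    In generic form, the on-board reduction amounts to modeling each pixel's raw sample stream as a linear combination of known modulation basis functions and solving for a few coefficients by least squares, so that only the fitted parameters are downlinked. The sinusoidal basis below is a placeholder, not MSPI's actual photoelastic-modulator signal model.

    ```python
    import numpy as np

    def fit_parameters(samples, t, omega):
        """samples: (n,) raw detector samples; t: (n,) sample times."""
        A = np.column_stack([np.ones_like(t),        # intensity-like term
                             np.cos(omega * t),      # modulated components
                             np.sin(omega * t)])
        coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
        return coeffs        # a few numbers replacing n raw samples

    t = np.linspace(0, 1e-3, 256)
    truth = np.array([2.0, 0.3, -0.1])
    samples = truth @ np.vstack([np.ones_like(t),
                                 np.cos(2e4 * t), np.sin(2e4 * t)])
    print(fit_parameters(samples, t, 2e4))   # ~ [2.0, 0.3, -0.1]
    ```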

  10. Big data analytics as a service infrastructure: challenges, desired properties and solutions

    NASA Astrophysics Data System (ADS)

    Martín-Márquez, Manuel

    2015-12-01

    CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more importantly, the cross data analytics over different domains. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs for CERN's research and engineering community; (3) deliver real-time, batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges: technological, collaborative and educational; and potential solutions.

  11. Research on the transfer learning of the vehicle logo recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Wei

    2017-08-01

    Deep convolutional neural networks have been a huge success in the field of image recognition. An intelligent transportation system can effectively address traffic safety, congestion, vehicle management and other urban traffic problems. Vehicle identification is a vital part of intelligent transportation, and the effective information carried by vehicles is of great significance to it. As traffic systems place ever higher requirements on vehicle identification technology, the vehicle logo, as an important type of vehicle information that should not be removed and is difficult to alter, provides an important basis for vehicle identification. Current vehicle logo recognition (VLR) mostly uses hand-extracted features for classification, which places some constraints on generalization ability in complex classification tasks, while applying deep learning directly requires a large number of training samples. In this paper, a convolutional neural network method based on transfer learning is shown to solve this problem effectively, and it has important practical value for the task of vehicle logo recognition.
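
    A minimal transfer-learning setup of the kind discussed, sketched in PyTorch: reuse a CNN pretrained on a large image corpus, freeze its feature extractor, and retrain only a new classification head on a small vehicle-logo dataset. The backbone choice and the number of logo classes are assumptions.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_LOGO_CLASSES = 30                      # hypothetical number of brands

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():           # freeze pretrained features
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, NUM_LOGO_CLASSES)  # new head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # Training then proceeds on logo crops as usual:
    # for images, labels in loader:
    #     loss = criterion(model(images), labels); ...
    ```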

  12. Enzyme assisted extraction of biomolecules as an approach to novel extraction technology: A review.

    PubMed

    Nadar, Shamraja S; Rao, Priyanka; Rathod, Virendra K

    2018-06-01

    Interest in the development of techniques for extracting biomolecules from various natural sources has increased in recent years due to their potential applications, particularly for food and nutraceutical purposes. The presence of polysaccharides such as hemicelluloses, starch and pectin in the cell wall reduces the efficiency of conventional extraction techniques. Conventional techniques also suffer from low extraction yields, time inefficiency and inferior extract quality due to traces of organic solvents present in the extracts. Hence, there is a need for green and novel extraction methods to recover biomolecules. The present review provides a holistic insight into various aspects of enzyme-aided extraction. Applications of enzymes in the recovery of various biomolecules such as polyphenols, oils, polysaccharides, flavours and colorants are highlighted. Additionally, hyphenated extraction technologies can overcome some of the major drawbacks of enzyme-based extraction, such as long extraction times and immoderate use of solvents. This review also covers hyphenated intensification techniques that couple conventional methods with ultrasound, microwave, high pressure and supercritical carbon dioxide. The last section gives an insight into enzyme immobilization as a strategy for large-scale extraction: immobilization of enzymes on magnetic nanoparticles can enhance the operational performance of the system by allowing multiple uses of expensive enzymes, making the process industrially and economically feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. The impact of clinical leadership on health information technology adoption: systematic review.

    PubMed

    Ingebrigtsen, Tor; Georgiou, Andrew; Clay-Williams, Robyn; Magrabi, Farah; Hordern, Antonia; Prgomet, Mirela; Li, Julie; Westbrook, Johanna; Braithwaite, Jeffrey

    2014-06-01

    To conduct a systematic review to examine evidence of associations between clinical leadership and successful information technology (IT) adoption in healthcare organisations. We searched Medline, Embase, Cinahl, and Business Source Premier for articles published between January 2000 and May 2013 with keywords and subject terms related to: (1) the setting--healthcare provider organisations; (2) the technology--health information technology; (3) the process--adoption; and (4) the intervention--leadership. We identified 3121 unique citations, of which 32 met our criteria and were included in the review. Data extracted from the included studies were assessed in light of two frameworks: Bassellier et al.'s IT competence framework; and Avgar et al.'s health IT adoption framework. The results demonstrate important associations between the attributes of clinical leaders and IT adoption. Clinical leaders who have technical informatics skills and prior experience with IT project management are likely to develop a vision that comprises a long-term commitment to the use of IT. Leaders who possess such a vision believe in the value of IT, are motivated to adopt it, and can maintain confidence and stability through the adversities that IT adoptions often entail. This leads to proactive leadership behaviours and partnerships with IT professionals that are associated with successful organisational and clinical outcomes. This review provides evidence that clinical leaders can positively contribute to successful IT adoption in healthcare organisations. Clinical leaders who aim for improvements in the processes and quality of care should cultivate the necessary IT competencies, establish mutual partnerships with IT professionals, and execute proactive IT behaviours to achieve successful IT adoption. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Recent patents on the extraction of carotenoids.

    PubMed

    Riggi, Ezio

    2010-01-01

    This article reviews the patents that have been presented during the last decade related to the extraction of carotenoids from various forms of organic matter (fruit, vegetables, animals), with an emphasis on the methods and mechanisms exploited by these technologies, and on technical solutions for the practical problems related to these technologies. I present and classify 29 methods related to the extraction processes (physical, mechanical, chemical, and enzymatic). The large number of processes for extraction by means of supercritical fluids and the growing number of large-scale industrial plants suggest a positive trend towards using this technique that is currently slowed by its cost. This trend should be reinforced by growing restrictions imposed on the use of most organic solvents for extraction of food products and by increasingly strict waste management regulations that are indirectly promoting the use of extraction processes that leave the residual (post-extraction) matrix substantially free from solvents and compounds that must subsequently be removed or treated. None of the reviewed approaches is the best answer for every extractable compound and source, so each should be considered as one of several alternatives, including the use of a combination of extraction approaches.

  15. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to distinguish redundant information from relevant new information in clinical reports. These methods may provide a valuable approach to extracting clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information correlated highly with manual reference standard annotations. Methods to identify different types of new information can help build more robust information extraction systems for clinical researchers, and can aid clinicians and researchers in navigating clinical notes more effectively and quickly identifying information pertaining to changes in health states. PMID:25717418
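
    A minimal sketch of the new-versus-redundant distinction: each note is reduced to a set of UMLS concepts (CUI, semantic type), and a concept counts as new if no earlier note in the patient's timeline mentions it. The concept extraction step itself (e.g., MetaMap or cTAKES) is assumed and mocked here; the semantic-type filter mirrors the paper's focus on problems, medications and laboratory information.

        # Semantic-type codes follow UMLS conventions: T047 disease/syndrome,
        # T121 pharmacologic substance, T059 laboratory procedure.
        RELEVANT_TYPES = {"T047", "T121", "T059"}

        def new_information(notes):
            """Yield, per note, the relevant concepts not seen in any prior note."""
            seen = set()
            for note_id, concepts in notes:          # notes in chronological order
                fresh = {c for c in concepts
                         if c not in seen and c[1] in RELEVANT_TYPES}
                seen |= concepts
                yield note_id, fresh

        timeline = [
            ("day1", {("C0011849", "T047")}),                        # diabetes mellitus
            ("day2", {("C0011849", "T047"), ("C0025598", "T121")}),  # + metformin
        ]
        for note_id, fresh in new_information(timeline):
            print(note_id, fresh)   # day2 reports only the metformin concept as new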

  16. Towards the ophthalmology patentome: a comprehensive patent database of ocular drugs and biomarkers.

    PubMed

    Mucke, Hermann A M; Mucke, Eva; Mucke, Peter M

    2013-01-01

    We are currently building a database of all patent documents that contain substantial information related to pharmacology, drug delivery, tissue technology, and molecular diagnostics in ophthalmology. The goal is to establish a 'patentome', a body of cleaned and annotated data where all text-based, chemistry and pharmacology information can be accessed and mined in its context. We provide metrics on Patent Cooperation Treaty (PCT) documents, which demonstrate that ocular-related patenting has shown stronger growth than general PCT patenting during the past 25 years, and, while the majority of applications of this type have always provided substantial biological data, both data support and objections by patent examiners have been increasing since 2006-2007. Separately, we present a case study of chemistry information extraction from patents published during the 1950s and 1970s, which reveals compounds with corneal anesthesia potential that were never published in the peer-reviewed literature.

  17. Probing hydrogen positions in hydrous compounds: information from parametric neutron powder diffraction studies.

    PubMed

    Ting, Valeska P; Henry, Paul F; Schmidtmann, Marc; Wilson, Chick C; Weller, Mark T

    2012-05-21

    We demonstrate the extent to which modern detector technology, coupled with a high flux constant wavelength neutron source, can be used to obtain high quality diffraction data from short data collections, allowing the refinement of the full structures (including hydrogen positions) of hydrous compounds from in situ neutron powder diffraction measurements. The in situ thermodiffractometry and controlled humidity studies reported here reveal that important information on the reorientations of structural water molecules with changing conditions can be easily extracted, providing insight into the effects of hydrogen bonding on bulk physical properties. Using crystalline BaCl2·2H2O as an example system, we analyse the structural changes in the compound and its dehydration intermediates with changing temperature and humidity levels to demonstrate the quality of the dynamic structural information on the hydrogen atoms and associated hydrogen bonding that can be obtained without resorting to sample deuteration.

  18. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly cited as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unfamiliar information systems with which one wishes to communicate, an approach commonly called "model-based integration". This paper presents the results of research regarding the possible synergy between those two integration facilitators. This is demonstrated with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
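
    As a minimal sketch of the model-based integration idea, the example below drives an ETL transformation entirely from a metadata object describing the source, so the same service could load any system that publishes such metadata. Every name is invented, not taken from the paper.

        # Hypothetical metadata describing one source feed and its mapping
        # into the warehouse schema.
        FEED_METADATA = {
            "source": "sales_feed",
            "mappings": [  # source field -> warehouse column, with a type hint
                {"from": "cust_id", "to": "customer_key", "type": "int"},
                {"from": "amt",     "to": "amount_eur",   "type": "float"},
            ],
        }

        CASTS = {"int": int, "float": float, "str": str}

        def metadata_driven_etl(rows, metadata):
            """Transform source rows into warehouse rows using only the metadata."""
            for row in rows:
                yield {m["to"]: CASTS[m["type"]](row[m["from"]])
                       for m in metadata["mappings"]}

        for out in metadata_driven_etl([{"cust_id": "42", "amt": "19.90"}], FEED_METADATA):
            print(out)  # {'customer_key': 42, 'amount_eur': 19.9}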

  19. Cutting Silica Aerogel for Particle Extraction

    NASA Technical Reports Server (NTRS)

    Tsou, P.; Brownlee, D. E.; Glesias, R.; Grigoropoulos, C. P.; Weschler, M.

    2005-01-01

    The detailed laboratory analyses of extraterrestrial particles have revolutionized our knowledge of planetary bodies in the last three decades. This knowledge of the chemical composition, morphology, mineralogy, and isotopics of particles cannot be provided by remote sensing. In order to acquire this detailed information in the laboratory, the samples need to be intact and unmelted. Such intact capture of hypervelocity particles was developed in 1996. Subsequently, silica aerogel was introduced as the preferred medium for intact capture of hypervelocity particles and was later shown to be particularly suitable for the space environment. STARDUST, the 4th NASA Discovery mission, to capture samples from 81P/Wild 2 and contemporary interstellar dust, is the culmination of these new technologies. In early laboratory experiments launching hypervelocity projectiles into aerogel, there was a need to cut the aerogel to isolate or extract captured particles/tracks. This is especially challenging for space captures, since there will be many particles/tracks of widely ranging scales located close together, even collocated. It is critical to isolate and extract one particle without compromising its neighbors, since the full significance of a particle is not known until it is extracted and analyzed. To date, three basic techniques have been explored: mechanical cutting, laser cutting and ion beam milling. We report the current findings.

  20. Application of Ontology Technology in Health Statistic Data Analysis.

    PubMed

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research purpose: to establish a health management ontology for the analysis of health statistic data. Proposed methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and object properties and data properties were defined to establish the construction of these classes. Through ontology instantiation, multi-source heterogeneous data can be integrated, enabling administrators to gain an overall understanding and analysis of health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source, heterogeneous health system management data and the enhancement of management efficiency.
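
    For illustration, the same kind of structure (a class hierarchy, a data property, and an instance) can be expressed programmatically; the sketch below uses Python's rdflib rather than Protégé, and all class and property names are invented, not taken from the paper.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import OWL, RDF, RDFS, XSD

        # Hypothetical namespace for the health management ontology.
        HM = Namespace("http://example.org/health-management#")
        g = Graph()
        g.bind("hm", HM)

        # A top-level class and a subclass, mirroring the yearbook's hierarchy.
        g.add((HM.HealthInstitution, RDF.type, OWL.Class))
        g.add((HM.Hospital, RDF.type, OWL.Class))
        g.add((HM.Hospital, RDFS.subClassOf, HM.HealthInstitution))

        # A data property carrying a statistical indicator.
        g.add((HM.bedCount, RDF.type, OWL.DatatypeProperty))
        g.add((HM.bedCount, RDFS.domain, HM.Hospital))
        g.add((HM.bedCount, RDFS.range, XSD.integer))

        # Ontology instantiation: one hospital record from a statistical source.
        g.add((HM.hospital_001, RDF.type, HM.Hospital))
        g.add((HM.hospital_001, HM.bedCount, Literal(850, datatype=XSD.integer)))

        print(g.serialize(format="turtle"))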
