Sample records for based information technique

  1. A case-based reasoning tool for breast cancer knowledge management with data mining concepts and techniques

    NASA Astrophysics Data System (ADS)

    Demigha, Souâd.

    2016-03-01

    The paper presents a Case-Based Reasoning Tool for Breast Cancer Knowledge Management to improve breast cancer screening. To develop this tool, we combine concepts and techniques from both Case-Based Reasoning (CBR) and Data Mining (DM). Physicians and radiologists ground their diagnoses in their expertise (past experience with clinical cases). Case-Based Reasoning is the process of solving new problems based on the solutions of similar past problems, structured as cases, and is well suited to medical use. On the other hand, existing traditional Hospital Information Systems (HIS), Radiological Information Systems (RIS), and Picture Archiving and Communication Systems (PACS) do not allow efficient management of medical information because of its complexity and heterogeneity. Data Mining is the process of extracting information from a data set and transforming it into an understandable structure for further use. Combining CBR with Data Mining techniques will facilitate diagnosis and decision-making by medical experts.
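
    The core of CBR here is retrieval of the most similar past cases. Below is a minimal retrieval sketch in Python, assuming cases are encoded as numeric feature vectors; the tool's actual case schema and similarity measure are not specified in the record, so the features shown are hypothetical.

      import numpy as np

      def retrieve_similar_cases(case_base, new_case, k=3):
          """Return indices of the k most similar past cases (Euclidean distance)."""
          dists = np.linalg.norm(case_base - new_case, axis=1)
          return np.argsort(dists)[:k]

      # Hypothetical case features: [age, mass size in mm, breast density score]
      case_base = np.array([[52, 14.0, 3], [61, 22.5, 4], [45, 8.0, 2]])
      new_case = np.array([55, 15.0, 3])
      print(retrieve_similar_cases(case_base, new_case, k=2))  # -> [0 1]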

  2. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

    Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Most gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed by a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biological sufficiency criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes that are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection among the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide for selecting statistical techniques to identify informative genes in high-dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
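
    For illustration, here is a minimal greedy maximum-relevance-minimum-redundancy selection sketch in Python, using absolute Pearson correlation as a stand-in for the paper's relevance and redundancy measures; the published Boot-MRMR additionally aggregates such selections over bootstrap samples.

      import numpy as np

      def mrmr_select(X, y, n_select):
          """Greedily pick genes maximizing relevance minus mean redundancy."""
          n_genes = X.shape[1]
          relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_genes)])
          selected = [int(np.argmax(relevance))]
          while len(selected) < n_select:
              candidates = [j for j in range(n_genes) if j not in selected]
              scores = [relevance[j]
                        - np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
                        for j in candidates]
              selected.append(candidates[int(np.argmax(scores))])
          return selected

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 10))                          # 40 subjects, 10 genes
      y = (X[:, 3] + rng.normal(size=40) > 0).astype(float)  # class linked to gene 3
      print(mrmr_select(X, y, 3))                            # gene 3 should rank high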

  3. Change detection from remotely sensed images: From pixel-based to object-based approaches

    NASA Astrophysics Data System (ADS)

    Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David

    2013-06-01

    The appetite for up-to-date information about the Earth's surface is ever increasing, as such information provides a basis for a large number of applications, including local, regional, and global resource monitoring, land-cover and land-use change monitoring, and environmental studies. Data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and have been widely used for change detection studies. A large number of change detection methodologies and techniques utilizing remotely sensed data have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditional pixel-based and (mostly) statistics-oriented change detection techniques, which focus mainly on spectral values and largely ignore spatial context. This is followed by a review of object-based change detection techniques. Finally, there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of the different techniques are compared. The exponential increase in image data volume and in the number of sensors, and the associated challenges for the development of change detection techniques, are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
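
    As a concrete example of the pixel-based, statistics-oriented baseline the review starts from, here is a minimal image-differencing sketch in Python; the threshold rule (mean plus k standard deviations of the difference image) is an illustrative assumption. Note that the per-pixel decision uses no spatial context, which is precisely the limitation object-based methods address.

      import numpy as np

      def difference_change_map(img_t1, img_t2, k=2.0):
          """Flag pixels whose absolute difference exceeds mean + k*std."""
          diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
          return diff > diff.mean() + k * diff.std()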

  4. Traditional versus rule-based programming techniques - Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation; the criterion used to measure execution efficiency is the total number of steps required to isolate the hypotheses that were true, and the criteria used to evaluate implementability are ease of modification, ease of verification, and explanation capability. The traditional program is found to be more efficient than the rule-based program; however, the rule-based programming technique is better suited to improving programmer productivity.
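
    A minimal sketch of the contrast, with hypothetical flight-phase conditions and display names rather than the study's actual rule base: the traditional version hard-codes the decision logic, while the rule-based version keeps it as modifiable data.

      # Traditional: decision logic hard-coded as nested conditionals.
      def traditional(phase, altitude_ft):
          if phase == "approach" and altitude_ft < 1000:
              return ["gear_status", "flap_setting"]
          return ["route_overview"]

      # Rule-based: the same logic expressed as a data-driven rule base,
      # which is easier to modify, verify, and explain rule by rule.
      RULES = [
          (lambda s: s["phase"] == "approach" and s["altitude_ft"] < 1000,
           ["gear_status", "flap_setting"]),
          (lambda s: True, ["route_overview"]),  # default rule
      ]

      def rule_based(state):
          for condition, displays in RULES:
              if condition(state):
                  return displays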

  5. Information hiding based on double random-phase encoding and public-key cryptography.

    PubMed

    Sheng, Yuan; Xin, Zhou; Alam, Mohammed S; Xi, Lu; Xiao-Feng, Li

    2009-03-02

    A novel information hiding method based on double random-phase encoding (DRPE) and the Rivest-Shamir-Adleman (RSA) public-key cryptosystem is proposed. In the proposed technique, the inherent diffusion property of DRPE is utilized to make up for the diffusion insufficiency of RSA public-key cryptography, while the RSA cryptosystem is used for simultaneous transmission of the ciphertext and the two phase masks, which is not possible under the DRPE technique alone. This technique combines the complementary advantages of the DRPE and RSA encryption techniques and brings both security and convenience to efficient information transmission. Extensive numerical simulation results are presented to verify the performance of the proposed technique.
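
    A numerical sketch of the DRPE half of the scheme in Python with NumPy FFTs; the RSA layer, which the paper uses to transmit the two phase masks and the ciphertext, is omitted here.

      import numpy as np

      rng = np.random.default_rng(0)

      def drpe_encrypt(img, m1, m2):
          # Input-plane mask m1, Fourier-plane mask m2.
          return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

      def drpe_decrypt(cipher, m1, m2):
          # Undo the masks with their complex conjugates.
          return np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)

      img = rng.random((64, 64))
      m1 = np.exp(2j * np.pi * rng.random(img.shape))
      m2 = np.exp(2j * np.pi * rng.random(img.shape))
      cipher = drpe_encrypt(img, m1, m2)
      print(np.allclose(drpe_decrypt(cipher, m1, m2).real, img))  # True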

  6. The design and implementation of hydrographical information management system (HIMS)

    NASA Astrophysics Data System (ADS)

    Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming

    2005-10-01

    With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents, and other materials, is now widely used, and the traditional management mode and techniques have become unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. Advanced techniques including GIS, RS, spatial database management, and VR are introduced to solve these problems. Several design principles and key techniques of the HIMS are illustrated in detail, including a mixed mode based on B/S, C/S, and stand-alone computer modes; multi-source and multi-scale data organization and management; multi-source data integration and diverse visualization of digital charts; and efficient security control strategies. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received favorable evaluations.

  7. A QoS Management Technique of Urgent Information Provision in ITS Services Using DSRC for Autonomous Base Stations

    NASA Astrophysics Data System (ADS)

    Shimura, Akitoshi; Aizono, Takeiki; Hiraiwa, Masashi; Sugano, Shigeki

    A QoS management technique based on an autonomous decentralized mobility system, which is an autonomous decentralized system enhanced to provide mobile stations with information about urgent roadway situations, is proposed in this paper. This technique enables urgent messages to be flexibly and quickly transmitted to mobile stations by multiple decentralized base stations using dedicated short-range communication, and it supports the easy addition of new base stations. Each station autonomously creates information-delivery communities based on the urgency of the messages it receives through the roadside network and the distances between senders and receivers. Each station dynamically determines the urgency of messages according to the message content and the speed of the mobile stations. Evaluation of this technique as applied to the Smart Gateway system, which provides driving-assistance services to mobile stations through dedicated short-range communication, demonstrated its effectiveness and suitability for actual systems.

  8. Securing information display by use of visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2003-09-01

    We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
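
    A minimal (2,2) visual-cryptography sketch in Python: each secret pixel expands to a 2x2 block on each of two shares, and physically stacking the shares (pixelwise OR) renders black pixels fully opaque while white pixels stay half-tone. The paper's actual encryption codes, which also control the viewing zone, are more elaborate.

      import numpy as np

      rng = np.random.default_rng(1)
      PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = opaque

      def make_shares(secret):  # secret: 0 = white, 1 = black
          h, w = secret.shape
          s1 = np.zeros((2 * h, 2 * w), dtype=int)
          s2 = np.zeros_like(s1)
          for i in range(h):
              for j in range(w):
                  p = PATTERNS[rng.integers(2)]
                  s1[2*i:2*i+2, 2*j:2*j+2] = p
                  # White: same pattern (stack stays half black); black: complement.
                  s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
          return s1, s2

      s1, s2 = make_shares(np.array([[0, 1], [1, 0]]))
      stacked = s1 | s2  # optical stacking is approximately a pixelwise OR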

  9. Genealogical approaches to ethical implications of informational assimilative integrated discovery systems (AIDS) in business

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pharhizgar, K.D.; Lunce, S.E.

    1994-12-31

    Development of knowledge-based technological acquisition techniques and customers' information profiles are known as assimilative integrated discovery systems (AIDS) in modern organizations. These systems have access, through processing, to both deep and broad domains of information in modern societies. Through these systems, organizations and individuals can predict future trend probabilities and events concerning their customers. AIDS are new techniques that produce new information which informants can use without the help of the original knowledge sources, owing to the existence of highly sophisticated computerized networks. This paper analyzes the dangers and side effects of the misuse of information through illegal, unethical, and immoral access to the database of an integrated and assimilative information system as described above. Cognitivistic mapping, pragmatistic informational design gathering, and holistic classifiable and distributive techniques are potentially abusive systems whose outputs can be easily misused by businesses when researching the firm's customers.

  10. Remote sensing as a source of land cover information utilized in the universal soil loss equation

    NASA Technical Reports Server (NTRS)

    Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.; Scarpace, F. L.

    1979-01-01

    In this study, methods for gathering the land use/land cover information required by the USLE were investigated with medium altitude, multi-date color and color infrared 70-mm positive transparencies using human and computer-based interpretation techniques. Successful results, which compare favorably with traditional field study methods, were obtained within the test site watershed with airphoto data sources and human airphoto interpretation techniques. Computer-based interpretation techniques were not capable of identifying soil conservation practices but were successful to varying degrees in gathering other types of desired land use/land cover information.

  11. Developing a hybrid dictionary-based bio-entity recognition technique.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2015-01-01

    Bio-entity extraction is a pivotal component of information extraction from biomedical literature. Dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities, such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues, to further improve performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance in F-measure among the compared techniques using GENIA, MESH, UMLS, and combinations of these three resources. The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady precision with three different dictionaries, whereas the context-only technique achieves high-end recall with three different dictionaries.
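
    A sketch of tolerant dictionary lookup in Python, using plain Levenshtein edit distance as a stand-in for the paper's shortest path edit distance algorithm; the dictionary entries are hypothetical.

      def levenshtein(a, b):
          """Classic dynamic-programming edit distance."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      def match_entity(mention, dictionary, max_dist=1):
          """Return dictionary entries within max_dist edits of the mention."""
          return [e for e in dictionary if levenshtein(mention.lower(), e.lower()) <= max_dist]

      print(match_entity("IL-2 receptor", ["IL-2 Receptor", "p53", "NF-kB"]))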

  12. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background: Bio-entity extraction is a pivotal component of information extraction from biomedical literature. Dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods: This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities, such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues, to further improve performance. Results: The experimental results show that the proposed technique achieves the best or at least equivalent performance in F-measure among the compared techniques using GENIA, MESH, UMLS, and combinations of these three resources. Conclusions: The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady precision with three different dictionaries, whereas the context-only technique achieves high-end recall with three different dictionaries. PMID:26043907

  13. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  14. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique with two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  15. Current trends in geomorphological mapping

    NASA Astrophysics Data System (ADS)

    Seijmonsbergen, A. C.

    2012-04-01

    Geomorphological mapping is a field currently in motion, driven by technological advances and the availability of new high-resolution data. As a consequence, the classic (paper) geomorphological maps that were the standard for more than 50 years are rapidly being replaced by digital geomorphological information layers. This is witnessed by the following developments: (1) the conversion of classic paper maps into digital information layers, mainly performed in a digital mapping environment such as a Geographical Information System; (2) updating of the location precision and content of the converted maps by adding more geomorphological detail taken from high-resolution elevation and/or image data; (3) (semi-)automated extraction and classification of geomorphological features from digital elevation models, broadly separated into unsupervised and supervised classification techniques; and (4) new digital visualization/cartographic techniques and reading interfaces. New digital geomorphological information layers can be based on manual digitization of polygons using DEMs and/or aerial photographs, or prepared through (semi-)automated extraction and delineation of geomorphological features. DEMs are often used as a basis to derive land surface parameter information, which is used as input for (un)supervised classification techniques. Especially when using high-resolution data, object-based classification is used as an alternative to traditional pixel-based classification to cluster grid cells into homogeneous objects, which can then be classified as geomorphological features. Classic map content can also be used as training material for the supervised classification of geomorphological features. In the classification process, rule-based protocols, including expert-knowledge input, are used to map specific geomorphological features or entire landscapes. Current (semi-)automated classification techniques are increasingly able to extract morphometric and hydrological information, and in the near future also morphogenetic information. As a result, these new opportunities have changed the workflows of geomorphological mapmaking, and the focus has shifted from field-based to computer-based techniques: for example, traditional pre-field air-photo-based maps are now replaced by maps prepared in a digital mapping environment, and designated field visits using mobile GIS/digital mapping devices now focus on gathering location information and attribute inventories and are strongly time-efficient. The resulting 'modern geomorphological maps' are digital collections of geomorphological information layers consisting of georeferenced vector, raster, and tabular data, which are stored in a digital environment such as a GIS geodatabase and are easily visualized as, for example, bird's-eye views or animated 3D displays, on virtual globes, or stored as GeoPDF maps in which georeferenced attribute information can be easily exchanged over the internet. Digital geomorphological information layers are increasingly accessed via web-based services distributed through remote servers; information can be consulted, or even built using remote geoprocessing servers, by the end user. Therefore, it will no longer be only the geomorphologist, but also the professional end user, who dictates the applied use of digital geomorphological information layers.

  16. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring with multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control is normally used to distinguish abnormal fluctuations from normal ones through statistical methods. This paper compares the advantages and disadvantages of the two methods and establishes the necessity and feasibility of their integration and fusion. An approach is then proposed that integrates multi-sensor status monitoring and statistical process control on the basis of artificial intelligence, internet, and database techniques. Based on virtual instrument techniques, the authors developed a machining quality assurance system, MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and AE signal information from the wheel-dressing process, the cause of machining quality fluctuations was identified. The experimental results indicate that the approach is suitable for status monitoring and analysis of the machining process.

  17. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons, such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; these are often a problem for many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and (3) summarize a few of the results.

  18. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons, such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; these are often a problem for many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and (3) summarize a few of the results.

  19. The Use of a Context-Based Information Retrieval Technique

    DTIC Science & Technology

    2009-07-01

    Latent Semantic Analysis (LSA), also known as latent semantic indexing (LSI), is a statistical technique for inferring contextual and structural information from text. In contrast, natural language models apply algorithms that combine statistical information with semantic information.
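
    A minimal LSA sketch in Python: truncated SVD of a small (hypothetical) term-document count matrix projects documents into a low-dimensional latent space, where contextual similarity can be measured even between documents that share no terms.

      import numpy as np

      # Rows = terms, columns = documents (hypothetical counts).
      A = np.array([[2, 0, 1, 0],
                    [1, 1, 0, 0],
                    [0, 2, 0, 1],
                    [0, 0, 1, 2]], dtype=float)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                                      # keep the 2 largest singular values
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T     # documents in latent space

      def cosine(u, v):
          return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

      print(cosine(doc_vecs[0], doc_vecs[2]))    # latent similarity of docs 0 and 2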

  20. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information, based on steady-state information extracted from available nominal engine measurement data, is presented. Results from the application of the model-based approach to processing actual engine test data are shown, including both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
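
    A sketch of the residual-monitoring idea in Python, with a detection threshold calibrated on nominal fault-free data; the plain prediction arrays here stand in for the paper's piecewise linear engine model.

      import numpy as np

      def calibrate(nominal_sensed, nominal_predicted):
          """Characterize residual spread under fault-free operation."""
          r = nominal_sensed - nominal_predicted
          return r.mean(), r.std()

      def detect_anomalies(sensed, predicted, mean, std, k=3.0):
          """Flag samples whose residual departs from the nominal band."""
          return np.abs((sensed - predicted) - mean) > k * std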

  1. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information, based on steady-state information extracted from available nominal engine measurement data, is presented. Results from the application of the model-based approach to processing actual engine test data are shown, including both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  2. Using sentiment analysis to review patient satisfaction data located on the internet.

    PubMed

    Hopper, Anthony M; Uriyo, Maria

    2015-01-01

    The purpose of this paper is to test the usefulness of sentiment analysis and time-to-next-complaint methods in quantifying text-based information located on the internet. As important, the authors demonstrate how managers can use time-to-next-complaint techniques to organize sentiment analysis derived data into useful information, which can be shared with doctors and other staff. The authors used sentiment analysis to review patient feedback for a select group of gynecologists in Virginia. The authors utilized time-to-next-complaint methods along with other techniques to organize this data into meaningful information. The authors demonstrated that sentiment analysis and time-to-next-complaint techniques might be useful tools for healthcare managers who are interested in transforming web-based text into meaningful, quantifiable information. This study has several limitations. For one thing, neither the data set nor the techniques the authors used to analyze it will account for biases that resulted from selection issues related to gender, income, and culture, as well as from other socio-demographic concerns. Additionally, the authors lacked key data concerning patient volumes for the targeted physicians. Finally, it may be difficult to convince doctors to consider web-based comments as truthful, thereby preventing healthcare managers from using data located on the internet. The report illustrates some of the ways in which healthcare administrators can utilize sentiment analysis, along with time-to-next-complaint techniques, to mine web-based, patient comments for meaningful information. The paper is one of the first to illustrate ways in which administrators at clinics and physicians' offices can utilize sentiment analysis and time-to-next-complaint methods to analyze web-based patient comments.

  3. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

    Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information embedded in stereo 3-D displays, because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the visual-scene-to-DVV mapping based on human perception errors, and the second (which is based on head-positioning sensor data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques; a combination of the two effectively eliminates the distortions of depth information embedded in stereo 3-D displays.

  4. Polarization-based material classification technique using passive millimeter-wave polarimetric imagery.

    PubMed

    Hu, Fei; Cheng, Yayun; Gui, Liangqi; Wu, Liang; Zhang, Xinyi; Peng, Xiaohui; Su, Jinlong

    2016-11-01

    The polarization properties of thermal millimeter-wave emission capture inherent information about objects, e.g., material composition, shape, and surface features. In this paper, a polarization-based material-classification technique using passive millimeter-wave polarimetric imagery is presented. The linear polarization ratio (LPR) is introduced as a new feature discriminator that is sensitive to material type and removes the effect of reflected ambient radiation. The LPR characteristics of several common natural and artificial materials are investigated by theoretical and experimental analysis. Based on a priori information about LPR characteristics, the optimal range of incident angles and the classification criterion are discussed. Simulation and measurement results indicate that the presented classification technique is effective for distinguishing between metals and dielectrics. This technique suggests possible applications for outdoor metal target detection in open scenes.
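
    An illustrative sketch, assuming LPR is the ratio of vertically to horizontally polarized brightness temperatures and that a strongly reflective metal surface yields an LPR near unity; the paper's exact LPR definition, incident-angle range, and threshold may differ.

      def linear_polarization_ratio(tb_v, tb_h):
          # tb_v, tb_h: brightness temperatures in the two linear polarizations.
          return tb_v / tb_h

      def classify(tb_v, tb_h, threshold=1.05):
          # Hypothetical decision rule: dielectrics emit with a larger
          # polarization contrast at oblique incidence than metals do.
          return "metal" if linear_polarization_ratio(tb_v, tb_h) < threshold else "dielectric"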

  5. [Key informers. When and How?].

    PubMed

    Martín González, R

    2009-03-01

    When information obtained through duly designed and developed studies is not available, certain problems that affect the population, or certain questions, may be approached by using the information and experience provided by so-called key informers. A key informer is defined as a person who is in contact with the community or with the problem to be studied, who is considered to have good knowledge of the situation, and who is therefore considered an expert. The search for consensus is the basis for obtaining information from key informers. The techniques used have different characteristics depending on whether the experts chosen meet together or not, whether they are guided or not, and whether they interact with each other or not. These techniques include the survey, the Delphi technique, the nominal group technique, brainwriting, brainstorming, the Phillips 66 technique, the 6-3-5 technique, the community forum, and the community impressions technique. Information provided by key informers through the search for consensus is relevant when information is not available or cannot be obtained by other methods. It has permitted analysis of the existing neurological care model, elaboration of recommendations on visit times for outpatient neurological care, and elaboration of guidelines and recommendations for the management of prevalent neurological problems.

  6. Web image retrieval using an effective topic and content-based technique

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Cheng; Prabhakara, Rashmi

    2005-03-01

    There has been exponential growth in the amount of image data available on the World Wide Web since the early development of the Internet. With such a large amount of imagery available, and given its usefulness, an effective image retrieval system is greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improves on existing integrated image retrieval methods. This technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. In the first phase, topic-based image retrieval is performed using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This phase employs a focused crawler that allows the user to enter not only the keyword for the topic-based search but also the scope within which to find the images. In the second phase, we use query-by-example specification to perform a low-level content-based image match in order to retrieve smaller and relatively closer results for the example image. From this, information related to image features is automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.

  7. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  8. The development of selected data base applications for the crustal dynamics data information system

    NASA Technical Reports Server (NTRS)

    Noll, C. E.

    1981-01-01

    The development of a database and its accompanying software for the Crustal Dynamics Project's data information system is described. Background information concerning the project and a description of the techniques used in the implementation of an operational database are presented. Examples of key applications are included and interpreted.

  9. Education for the Information Age.

    ERIC Educational Resources Information Center

    Breivik, Patricia Senn

    1992-01-01

    To be effective in the current rapidly changing environment, individuals need more than a knowledge base. They also need information literacy which includes techniques for exploring new information, synthesizing it, and using it in practical ways. Undergraduate education should focus on such resource-based learning directed at problem solving.…

  10. Digital Mapping Techniques '09-Workshop Proceedings, Morgantown, West Virginia, May 10-13, 2009

    USGS Publications Warehouse

    Soller, David R.

    2011-01-01

    As in the previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  11. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  12. Information system building of the urban electromagnetic environment

    NASA Astrophysics Data System (ADS)

    Wang, Jiechen; Rui, Yikang; Shen, Dingtao; Yu, Qing

    2007-06-01

    The pollution from urban electromagnetic radiation has become more serious; however, there is still a lack of a complete, interactive user system to manage, analyze, and publish the relevant information. In this study, taking the electromagnetic environment of Nanjing as an example, an information system based on WebGIS has been developed with ArcIMS and JSP techniques, in order to provide services and technical support for public information queries and for the decision-making of the relevant departments.

  13. New Information Processing Techniques for Military Systems (les Nouvelles techniques de traitement de l’information pour les systemes militaires)

    DTIC Science & Technology

    2001-04-01

    divided into a number of sessions. SESSION I – INFORMATION PROCESSING SYSTEMS AND TECHNIQUES (I) The first two sessions were... establishing parameters and laying the theoretical/mathematical foundations of future information systems. This is a domain in which the... transfer of data and intelligence makes itself felt more and more. This symposium will cover a broad operational domain that is of great

  14. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and that robots are equipped with different types of sensors, each robot carrying only one type of landmine detection sensor. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the robot's sensory input and performs the calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine to decide which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
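
    A toy aggregation sketch in Python using a logarithmic market scoring rule (LMSR) over the two outcomes "mine" and "clear"; the trade-sizing rule is a crude stand-in, and the paper's specific market design may differ.

      import math

      def lmsr_price(q_mine, q_clear, b=10.0):
          """Market probability of 'mine' implied by outstanding shares."""
          e1, e2 = math.exp(q_mine / b), math.exp(q_clear / b)
          return e1 / (e1 + e2)

      q_mine = q_clear = 0.0
      for belief in [0.8, 0.6, 0.9]:        # each robot agent trades toward its belief
          q_mine += 5.0 * belief             # crude stand-in for a utility-driven trade
          q_clear += 5.0 * (1 - belief)
      print(lmsr_price(q_mine, q_clear))     # aggregate confidence, about 0.69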

  15. The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval

    DTIC Science & Technology

    2006-01-01

    In dictionary-based Cross-Language Information Retrieval (CLIR), the goal is to find documents written in one natural language based on queries expressed in another. In dictionary-based CLIR techniques, the principal source of translation knowledge is a translation lexicon.

  16. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component of QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  17. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component of QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  18. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component of QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of the results of unsuccessful belief-propagation decodings.

  19. 2D DOST based local phase pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preliminary preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform for preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments, such as illumination variation and 3D changes in facial expression. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.

  20. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlap. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase in radiation dose and inaccuracy in material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and by EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy as well as reduced radiation dose compared with the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD was slightly improved as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.

  1. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, discrete wavelet transform (DWT) and filtering techniques were each analyzed and compared, and their performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and residual noise, and did not deliver the most significant noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions showed the best denoising performance for ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.
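
    A minimal DWT image-fusion sketch using the PyWavelets package: average the approximation coefficients of a filtered and an original image and keep the larger-magnitude detail coefficients; the paper's exact fusion rule and the SRAD filter itself are not reproduced here.

      import numpy as np
      import pywt

      def dwt_fuse(img_a, img_b, wavelet="db2"):
          """Fuse two registered images in the single-level 2-D DWT domain."""
          ca_a, details_a = pywt.dwt2(img_a, wavelet)
          ca_b, details_b = pywt.dwt2(img_b, wavelet)
          ca = (ca_a + ca_b) / 2.0                  # average low-frequency content
          details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in zip(details_a, details_b))
          return pywt.idwt2((ca, details), wavelet)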

  2. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2005-01-01

    Ever since the advent of the Internet, there has been immense growth in the amount of image data available on the World Wide Web. With such a magnitude of images available, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improves on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It employs a focused crawler that allows the user to enter not only the keyword for the topic-based search but also the scope within which to find the images. The second phase uses query-by-example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results for the example image. Information related to image features is automatically extracted from the query image by the image processing system. A computationally inexpensive technique based on color features is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  3. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2004-12-01

    Ever since the advent of the Internet, there has been immense growth in the amount of image data available on the World Wide Web. With such a magnitude of images available, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improves on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It employs a focused crawler that allows the user to enter not only the keyword for the topic-based search but also the scope within which to find the images. The second phase uses query-by-example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results for the example image. Information related to image features is automatically extracted from the query image by the image processing system. A computationally inexpensive technique based on color features is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  4. Towards Semantic e-Science for Traditional Chinese Medicine

    PubMed Central

    Chen, Huajun; Mao, Yuxin; Zheng, Xiaoqing; Cui, Meng; Feng, Yi; Deng, Shuiguang; Yin, Aining; Zhou, Chunying; Tang, Jinming; Jiang, Xiaohong; Wu, Zhaohui

    2007-01-01

    Background: Recent advances in Web and information technologies, together with the increasing decentralization of organizational structures, have resulted in massive amounts of information resources and domain-specific services in Traditional Chinese Medicine (TCM). The massive volume and diversity of the information and services available have made it difficult to achieve seamless and interoperable e-Science for knowledge-intensive disciplines like TCM. Therefore, information integration and service coordination are two major challenges in e-Science for TCM, and sophisticated approaches to integrating scientific data and services for TCM e-Science are still lacking. Results: We present a comprehensive approach to building dynamic and extendable e-Science applications for knowledge-intensive disciplines like TCM based on semantic and knowledge-based techniques. The semantic e-Science infrastructure for TCM supports large-scale database integration and service coordination in a virtual organization. We use domain ontologies to integrate TCM database resources and services in a semantic cyberspace and deliver a semantically superior experience, including browsing, searching, querying, and knowledge discovery, to users. We have developed a collection of semantic-based toolkits to facilitate information sharing and collaborative research among TCM scientists and researchers. Conclusion: Semantic and knowledge-based techniques are suitable for knowledge-intensive disciplines like TCM, and it is possible to build an on-demand e-Science system for TCM based on existing semantic and knowledge-based techniques. The presented approach integrates heterogeneous, distributed TCM databases and services, and provides scientists with a semantically superior experience to support collaborative research in the TCM discipline. PMID:17493289

  5. Agents Based e-Commerce and Securing Exchanged Information

    NASA Astrophysics Data System (ADS)

    Al-Jaljouli, Raja; Abawajy, Jemal

    Mobile agents have been implemented in e-Commerce to search and filter information of interest from electronic markets. When the information is very sensitive and critical, it is important to develop a novel security protocol that can efficiently protect the information from malicious tampering as well as unauthorized disclosure, or at least detect any malicious act of intruders. In this chapter, we describe robust security techniques that ensure sound security of the information gathered throughout an agent's itinerary against various security attacks, including truncation attacks. A sound security protocol is described, which implements the various security techniques that would jointly prevent, or at least detect, any malicious act of intruders. We reason about the soundness of the protocol using the Symbolic Trace Analyzer (STA), a formal verification tool based on symbolic techniques. We analyze the protocol in key configurations and show that it is free of flaws. We also show that the protocol fulfils the various security requirements of exchanged information in MAS, including data integrity, data confidentiality, data authenticity, origin confidentiality, and data non-repudiability.

  6. Interferometric and nonlinear-optical spectral-imaging techniques for outer space and live cells

    NASA Astrophysics Data System (ADS)

    Itoh, Kazuyoshi

    2015-12-01

    Multidimensional signals such as the spectral images allow us to have deeper insights into the natures of objects. In this paper the spectral imaging techniques that are based on optical interferometry and nonlinear optics are presented. The interferometric imaging technique is based on the unified theory of Van Cittert-Zernike and Wiener-Khintchine theorems and allows us to retrieve a spectral image of an object in the far zone from the 3D spatial coherence function. The retrieval principle is explained using a very simple object. The promising applications to space interferometers for astronomy that are currently in progress will also be briefly touched on. An interesting extension of interferometric spectral imaging is a 3D and spectral imaging technique that records 4D information of objects where the 3D and spectral information is retrieved from the cross-spectral density function of optical field. The 3D imaging is realized via the numerical inverse propagation of the cross-spectral density. A few techniques suggested recently are introduced. The nonlinear optical technique that utilizes stimulated Raman scattering (SRS) for spectral imaging of biomedical targets is presented lastly. The strong signals of SRS permit us to get vibrational information of molecules in the live cell or tissue in real time. The vibrational information of unstained or unlabeled molecules is crucial especially for medical applications. The 3D information due to the optical nonlinearity is also the attractive feature of SRS spectral microscopy.
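
    The Wiener-Khintchine relation invoked above can be checked numerically in its simplest one-dimensional form: the power spectrum of a signal equals the Fourier transform of its autocorrelation. The sketch below is only a toy temporal analogue of the imaging case, with invented signal parameters; it is not a reconstruction of the interferometric retrieval itself.

      # Toy numerical check of the Wiener-Khintchine relation (1-D analogue).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 4096
      t = np.arange(n)
      # Partially coherent test signal: two spectral lines plus noise.
      signal = (np.cos(0.2 * t) + 0.5 * np.cos(0.35 * t)
                + 0.3 * rng.standard_normal(n))

      # Direct periodogram estimate of the power spectrum.
      spectrum = np.abs(np.fft.fft(signal)) ** 2 / n

      # Spectrum via the autocorrelation (Wiener-Khintchine route).
      autocorr = np.correlate(signal, signal, mode="full")[n - 1:] / n
      spectrum_wk = np.real(np.fft.fft(autocorr, n))

      # The two estimates agree up to finite-window effects:
      # print(np.corrcoef(spectrum[:n // 2], spectrum_wk[:n // 2])[0, 1])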

  7. Upper atmosphere research: Reaction rate and optical measurements

    NASA Technical Reports Server (NTRS)

    Stief, L. J.; Allen, J. E., Jr.; Nava, D. F.; Payne, W. A., Jr.

    1990-01-01

    The objective is to provide photochemical, kinetic, and spectroscopic information necessary for photochemical models of the Earth's upper atmosphere and to examine reactions or reactants not presently in the models to either confirm the correctness of their exclusion or provide evidence to justify future inclusion in the models. New initiatives are being taken in technique development (many of them laser based) and in the application of established techniques to address gaps in the photochemical/kinetic data base, as well as to provide increasingly reliable information.

  8. Mapping and Managing Knowledge and Information in Resource-Based Learning

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf; Graber, Wolfgang; Neumann, Anja

    2006-01-01

    In resource-based learning scenarios, students are often overwhelmed by the complexity of task-relevant knowledge and information. Techniques for the external interactive representation of individual knowledge in graphical format may help them to cope with complex problem situations. Advanced computer-based concept-mapping tools have the potential…

  9. 77 FR 65395 - Notice of Submission of Proposed Information Collection to OMB: Section 8 Contract Renewal Policy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ... Project-Based Section 8 Contracts AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notice... through the use of appropriate automated collection techniques or other forms of information technology, e... Section 8 project-based assistance contracts are renewed. The Section 8 contract renewal process is an...

  10. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
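
    For the criterion-based family, the conversion from an information criterion to relative model weights follows the familiar exponential form w_i = exp(-0.5 delta_i) / sum_j exp(-0.5 delta_j), where delta_i is the criterion difference from the best model. The sketch below illustrates this with invented AIC values; it is not tied to the case studies in the paper.

      # Sketch: converting information-criterion values (AIC, BIC, or KIC)
      # into relative model weights. The numbers are illustrative only.
      import numpy as np

      def ic_weights(ic_values):
          """Akaike-type weights: w_i = exp(-0.5*d_i) / sum_j exp(-0.5*d_j)."""
          ic = np.asarray(ic_values, dtype=float)
          delta = ic - ic.min()            # differences from the best model
          w = np.exp(-0.5 * delta)
          return w / w.sum()

      # Four hypothetical conceptual models with AIC values:
      aic = [210.3, 212.1, 215.8, 211.0]
      weights = ic_weights(aic)
      # Model-averaged prediction: np.dot(weights, model_predictions)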

  11. Connectionist Interaction Information Retrieval.

    ERIC Educational Resources Information Center

    Dominich, Sandor

    2003-01-01

    Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…

  12. Automating the Air Force Retail-Level Equipment Management Process: An Application of Microcomputer-Based Information Systems Techniques

    DTIC Science & Technology

    1988-09-01

    could use the assistance of a microcomputer-based management information system. However, adequate system design and development requires an in-depth understanding of the Equipment Management Section and the environment in which it functions. ... were asked and answered. Then, a management information system was designed, developed, and tested. The management information system is called the Equipment Management Information System (EMIS).

  13. Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis

    NASA Astrophysics Data System (ADS)

    Chou, Hui-Yu; Yang, Jyh-Bin

    2017-10-01

    The problem of schedule delay commonly occurs in construction projects. The quality of delay analysis depends on the availability of schedule-related information and delay evidence; more information used in delay analysis usually produces more accurate and fair analytical results. How to use innovative techniques to improve the quality of schedule delay analysis results has received much attention recently. As Building Information Modeling (BIM) techniques have developed rapidly, approaches using BIM and 4D simulation techniques have been proposed and implemented. Obvious benefits have been achieved, especially in identifying and resolving construction sequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study finds that most of the identified problems can be dealt with by BIM techniques. The research results could serve as a foundation for developing new approaches to resolving schedule delay disputes.

  14. Silicon photonics for neuromorphic information processing

    NASA Astrophysics Data System (ADS)

    Bienstman, Peter; Dambre, Joni; Katumba, Andrew; Freiberger, Matthias; Laporte, Floris; Lugnan, Alessio

    2018-02-01

    We present our latest results on silicon photonics neuromorphic information processing based, among others, on techniques such as reservoir computing. We will discuss aspects like scalability, novel architectures for enhanced power efficiency, as well as all-optical readout. Additionally, we will touch upon new machine learning techniques to operate these integrated readouts. Finally, we will show how these systems can be used for high-speed, low-power information processing for applications like the recognition of biological cells.
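
    As a software analogue of the reservoir computing principle that the photonic hardware implements, the sketch below drives a random recurrent network with an input sequence and trains only a linear readout. The network size, spectral radius and toy memory task are all assumptions for illustration, not a description of the photonic system.

      # Minimal software echo state network (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)
      n_in, n_res = 1, 100
      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.standard_normal((n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

      def run_reservoir(u):
          """Drive the reservoir with input sequence u; collect states."""
          x = np.zeros(n_res)
          states = []
          for u_t in u:
              x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
              states.append(x.copy())
          return np.array(states)

      # Train only the linear readout (ridge regression), as in reservoir computing.
      u = rng.uniform(-1, 1, 500)
      target = np.roll(u, 3)            # toy task: recall the input 3 steps back
      X = run_reservoir(u)
      ridge = 1e-6
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
      prediction = X @ W_out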

  15. Approach to spatial information security based on digital certificate

    NASA Astrophysics Data System (ADS)

    Cong, Shengri; Zhang, Kai; Chen, Baowen

    2005-11-01

    With the development of online applications of geographic information systems (GIS) and spatial information services, spatial information security becomes more important. This work introduces digital certificates and authorization schemes into GIS to protect crucial spatial information, combining the techniques of role-based access control (RBAC), the public key infrastructure (PKI) and the privilege management infrastructure (PMI). We investigated the spatial information granularity suited for sensitivity marking and a digital certificate model that fits the needs of GIS security, based on semantic analysis of spatial information. The result is secure, flexible, fine-grained access to spatial data in GIS based on public-key technologies.

  16. A Cost Benefit Technique for R & D Based Information.

    ERIC Educational Resources Information Center

    Stern, B. T.

    A cost benefit technique consisting of the following five phases is proposed: (a) specific objectives of the service, (b) measurement of work flow, (c) work costing, (d) charge to users of the information service, and (e) equating demand and cost. In this approach, objectives are best stated by someone not routinely concerned with the individual…

  17. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    ERIC Educational Resources Information Center

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
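
    As a minimal illustration of the CPM side of such a methodology, the sketch below runs a forward pass over a small activity-on-node network and recovers one critical path; the activity data are invented for the example.

      # Sketch of the CPM forward pass: earliest finish times and one
      # critical path on a tiny, made-up activity network.
      # activities: name -> (duration, list of predecessors)
      activities = {
          "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
          "D": (1, ["B", "C"]),
      }

      earliest = {}
      for name in activities:                  # dicts preserve insertion order;
          dur, preds = activities[name]        # predecessors are listed first
          start = max((earliest[p] for p in preds), default=0)
          earliest[name] = start + dur

      project_duration = max(earliest.values())

      # Walk back from the finishing activity to recover one critical path.
      path, current = [], max(earliest, key=earliest.get)
      while current:
          path.append(current)
          dur, preds = activities[current]
          current = next((p for p in preds
                          if earliest[p] == earliest[current] - dur), None)
      print(project_duration, list(reversed(path)))  # 8, ['A', 'C', 'D']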

  18. TOF-SIMS imaging technique with information entropy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro

    2005-05-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is capable, in principle, of chemical imaging of proteins on insulated samples. However, selecting the specific peaks related to a particular protein, which are necessary for chemical imaging, out of numerous candidates had been difficult without an appropriate spectrum analysis technique. Therefore, multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information as defined by information theory, have been applied to interpret SIMS spectra of protein samples. In this study mutual information was applied to select specific peaks related to proteins in order to obtain chemical images. Proteins on insulated materials were measured with TOF-SIMS, and the SIMS spectra were then analyzed by means of a comparison using mutual information. Chemical mapping of each protein was obtained using specific peaks selected on the basis of their mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on the properties of protein adsorption, the optimality of immobilization processes and reactions between proteins. Chemical images of proteins by TOF-SIMS thus contribute to understanding interactions between material surfaces and proteins and to developing sophisticated biomaterials.
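
    A hedged sketch of the peak-selection step: score each peak by the mutual information between its (discretized) intensity across spectra and the sample label, then keep the top-scoring peaks for imaging. The array names and bin counts are assumptions, not the authors' implementation.

      # Mutual-information-based peak scoring (illustrative only).
      import numpy as np

      def mutual_information(x, y, bins=8):
          """MI between a continuous intensity x and a discrete label y."""
          x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
          joint = np.zeros((bins, int(y.max()) + 1))
          for xi, yi in zip(x_binned, y):
              joint[xi, yi] += 1
          joint /= joint.sum()
          px = joint.sum(axis=1, keepdims=True)
          py = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

      # intensities: (n_spectra, n_peaks); labels: 0 = substrate, 1 = protein A, ...
      # scores = [mutual_information(intensities[:, j], labels)
      #           for j in range(intensities.shape[1])]
      # selected_peaks = np.argsort(scores)[::-1][:20]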

  19. Time-based self-spacing techniques using cockpit display of traffic information during approach to landing in a terminal area vectoring environment

    NASA Technical Reports Server (NTRS)

    Williams, D. H.

    1983-01-01

    A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.

  20. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  1. Toward More Evidence-Based Practice

    PubMed Central

    Hotelling, Barbara A.

    2005-01-01

    Childbirth educators are responsible for providing expectant parents with evidence-based information. In this column, the author suggests resources where educators can find evidence-based research for best practices. Additionally, the author describes techniques for childbirth educators to use in presenting research-based information in their classes. A sample of Web sites and books that offer evidence-based resources for expectant parents is provided. PMID:17273422

  2. The research and realization of digital management platform for ultra-precision optical elements within life-cycle

    NASA Astrophysics Data System (ADS)

    Wang, Juan; Wang, Jian; Li, Lijuan; Zhou, Kun

    2014-08-01

    In order to address information fusion, process integration, and collaborative design and manufacturing for ultra-precision optical elements across the life-cycle, this paper presents a digital management platform based on product data and business processes that adopts modern manufacturing, information and management techniques. The architecture and system integration of the digital management platform are discussed in this paper. The digital management platform can realize information sharing and interaction for information flow, control flow and value stream across the life-cycle, from user needs to offline operation, and it can also enhance process control, collaborative research and the service capability of ultra-precision optical elements.

  3. Involuntary eye motion correction in retinal optical coherence tomography: Hardware or software solution?

    PubMed

    Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M

    2017-04-01

    In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data for creating an anatomically correct representation of the retina, either in an offline or online manner. In software-based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image, or by considering some prior assumptions about the nature of the eye motion. Careful investigation of the most common methods in the field provides invaluable insight regarding future directions of the research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques, on the other hand, achieve moderate success, and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts contained in them. However, they are still relevant to the field since they are the sole class of techniques that can be applied to legacy data acquired using systems that do not have extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.
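
    For the software-based class, a common building block is estimating the shift between an acquired frame and a reference image; the sketch below does this by phase correlation. It is a generic illustration under simplifying assumptions (pure translation, identical field of view), not a reconstruction of any specific method reviewed.

      # Rigid-shift estimation by phase correlation (illustrative only).
      import numpy as np

      def phase_correlation_shift(reference, frame):
          """Return the (row, col) shift that best aligns `frame` to `reference`."""
          F1 = np.fft.fft2(reference)
          F2 = np.fft.fft2(frame)
          cross_power = F1 * np.conj(F2)
          cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
          corr = np.fft.ifft2(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Map peak indices to signed shifts.
          return tuple(p if p <= s // 2 else p - s
                       for p, s in zip(peak, corr.shape))

      # corrected = np.roll(frame, phase_correlation_shift(reference, frame),
      #                     axis=(0, 1))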

  4. Making Informed Decisions: The Role of Information Literacy in Ethical and Effective Engineering Design

    ERIC Educational Resources Information Center

    Fosmire, Michael

    2017-01-01

    Engineering designers must make evidence-based decisions when applying the practical tools and techniques of their discipline to human problems. Information literacy provides a structure for determining information gaps, locating appropriate and relevant information, applying that information effectively, and documenting and managing the knowledge…

  5. Knowledge-based geographic information systems (KBGIS): New analytic and data management tools

    USGS Publications Warehouse

    Albert, T.M.

    1988-01-01

    In its simplest form, a geographic information system (GIS) may be viewed as a data base management system in which most of the data are spatially indexed, and upon which sets of procedures operate to answer queries about spatial entities represented in the data base. Utilization of artificial intelligence (AI) techniques can enhance greatly the capabilities of a GIS, particularly in handling very large, diverse data bases involved in the earth sciences. A KBGIS has been developed by the U.S. Geological Survey which incorporates AI techniques such as learning, expert systems, new data representation, and more. The system, which will be developed further and applied, is a prototype of the next generation of GIS's, an intelligent GIS, as well as an example of a general-purpose intelligent data handling system. The paper provides a description of KBGIS and its application, as well as the AI techniques involved. © 1988 International Association for Mathematical Geology.

  6. A Review of Recent Developments in X-Ray Diagnostics for Turbulent and Optically Dense Rocket Sprays

    NASA Technical Reports Server (NTRS)

    Radke, Christopher; Halls, Benjamin; Kastengren, Alan; Meyer, Terrence

    2017-01-01

    Highly efficient mixing and atomization of fuel and oxidizers is an important factor in many propulsion and power generation applications. To better quantify breakup and mixing in atomizing sprays, several diagnostic techniques have been developed to collect droplet information and spray statistics. Several optical techniques, such as Ballistic Imaging and SLIPI, have previously demonstrated qualitative measurements in optically dense sprays; however, these techniques have produced limited quantitative information in the near-injector region. To complement these advances, a recent wave of developments utilizing synchrotron-based x-rays has been successfully implemented, facilitating the collection of quantitative measurements in optically dense sprays.

  7. Computer Simulation as an Aid for Management of an Information System.

    ERIC Educational Resources Information Center

    Simmonds, W. H.; And Others

    The aim of this study was to develop methods, based upon computer simulation, of designing information systems and illustrate the use of these methods by application to an information service. The method developed is based upon Monte Carlo and discrete event simulation techniques and is described in an earlier report - Sira report R412 Organizing…

  8. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the target vehicle position that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.

  9. Damage of composite structures: Detection technique, dynamic response and residual strength

    NASA Astrophysics Data System (ADS)

    Lestari, Wahyu

    2001-10-01

    Reliable and accurate health monitoring techniques can prevent catastrophic failures of structures. Conventional damage detection is based on visual or localized experimental methods and very often requires prior information concerning the vicinity of the damage or defect. The structure must also be readily accessible for inspection, and the techniques are labor intensive. In comparison to these methods, health-monitoring techniques that are based on the structural dynamic response offer unique information on the failure of structures. However, systematic relations between the experimental data and the defect are not available, and frequently the number of vibration modes needed for an accurate identification of defects is much higher than the number of modes that can be readily identified in experiments. This motivated us to develop an experimental-data-based detection method with systematic relationships between the experimentally identified information and the analytical or mathematical model representing the defective structure. The developed technique uses changes in vibrational curvature modes and natural frequencies. To avoid misinterpretation of the identified information, we also need to understand the effects of defects on the structural dynamic response prior to developing health-monitoring techniques. In this thesis work we focus on two types of defects in composite structures, namely delaminations and edge-notch-like defects. The effects of nonlinearity due to the presence of a defect and due to axial stretching are studied for beams with delamination. Once defects are detected in a structure, the next concern is determining their effects on the strength of the structure and its residual stiffness under dynamic loading. In this thesis, the energy release rate due to dynamic loading in a delaminated structure is studied, which will be a foundation toward determining the residual strength of the structure.

  10. The Identification of Reasons, Solutions, and Techniques Informing a Theory-Based Intervention Targeting Recreational Sports Participation

    ERIC Educational Resources Information Center

    St Quinton, Tom; Brunton, Julie A.

    2018-01-01

    Purpose: This study is the 3rd piece of formative research utilizing the theory of planned behavior to inform the development of a behavior change intervention. Focus groups were used to identify reasons for and solutions to previously identified key beliefs in addition to potentially effective behavior change techniques. Method: A purposive…

  11. Integrated Knowledge Based Expert System for Disease Diagnosis System

    NASA Astrophysics Data System (ADS)

    Arbaiy, Nureize; Sulaiman, Shafiza Eliza; Hassan, Norlida; Afizah Afip, Zehan

    2017-08-01

    The role and importance of healthcare systems in improving quality of life and social welfare in a society have been well recognized. Attention should be given to raising awareness and implementing appropriate measures to improve health care. Therefore, a computer-based system is developed to serve as an alternative for people to self-diagnose their health status based on given symptoms. This strategy should be emphasized so that people can utilize the information correctly as a reference to enjoy a healthier life. Hence, a Web-based Community Center for Healthcare Diagnosis system is developed based on expert system techniques. An expert system reasoning technique is employed in the system to provide information about treatment and prevention of the diseases based on given symptoms. At present, three diseases are included: arthritis, thalassemia and pneumococcal disease. Sets of rules and facts are managed in the knowledge-based system. Web-based technology is used as a platform to disseminate the information to users so that they can make appropriate use of it. This system will benefit people who wish to increase health awareness and seek expert knowledge on the diseases by performing self-diagnosis for early disease detection.
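
    A toy sketch of the knowledge-base idea: each rule maps a disease to a set of symptoms, and diagnosis ranks diseases by how many of their rule symptoms are present. The symptom lists are simplified placeholders, not the system's actual rules (and certainly not medical advice).

      # Rule-based diagnosis sketch (illustrative placeholders only).
      RULES = {
          "arthritis":    {"joint pain", "stiffness", "swelling"},
          "thalassemia":  {"fatigue", "pale skin", "slow growth"},
          "pneumococcal": {"fever", "cough", "chest pain"},
      }

      def diagnose(symptoms):
          """Rank diseases by the fraction of their rule symptoms observed."""
          observed = set(symptoms)
          matches = [(len(required & observed) / len(required), disease)
                     for disease, required in RULES.items()]
          return [d for score, d in sorted(matches, reverse=True) if score > 0]

      print(diagnose({"fever", "cough", "joint pain"}))
      # ['pneumococcal', 'arthritis']  (partial matches ranked below fuller ones)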

  12. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint

    PubMed Central

    Zou, Jiaheng

    2018-01-01

    With the development of related applications, indoor positioning techniques have been more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database are in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under the loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps and geomagnetic information and provides a relatively accurate coordinate for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, presenting a good accuracy. The average error of the trajectory under loop closure constraint is controlled below 2.15 m. PMID:29494542

  13. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.

    PubMed

    Wang, Yan; Li, Xin; Zou, Jiaheng

    2018-03-01

    With the development of related applications, indoor positioning techniques have been more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database are in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under the loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps and geomagnetic information and provides a relatively accurate coordinate for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, presenting a good accuracy. The average error of the trajectory under loop closure constraint is controlled below 2.15 m.

  14. Note: development of high speed confocal 3D profilometer.

    PubMed

    Ang, Kar Tien; Fang, Zhong Ping; Tay, Arthur

    2014-11-01

    A high-speed confocal 3D profilometer based on chromatic confocal technology and the spinning Nipkow disk technique has been developed and tested. It can measure a whole surface topography by taking only one image, which requires less than 0.3 s. Surface height information is retrieved based on the ratios of red, green, and blue color information. A new vector projection technique has been developed to enhance the vertical resolution of the measurement. The measurement accuracy of the prototype system has been verified via different test samples.

  15. Bayesian Fusion of Color and Texture Segmentations

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto

    2000-01-01

    In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion, and makes it possible to choose the appropriate number of classes for the final grouping. This technique also makes it possible to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).
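
    The per-pixel fusion step can be sketched as combining two "soft" class-posterior maps as independent evidence and renormalizing. The paper's mean-entropy merging of class models is richer than this, so treat the code below as an assumption-laden illustration of the fusion idea only.

      # Fusing two soft segmentations per pixel (illustrative only).
      import numpy as np

      def fuse_soft_segmentations(p_color, p_texture):
          """p_*: arrays of shape (H, W, n_classes); each pixel sums to 1."""
          fused = p_color * p_texture                 # independence assumption
          fused /= fused.sum(axis=-1, keepdims=True)  # renormalize per pixel
          return fused

      def entropy(p):
          """Per-pixel entropy; low values indicate confident assignments."""
          return -(p * np.log2(p + 1e-12)).sum(axis=-1)

      # labels = fuse_soft_segmentations(p_color, p_texture).argmax(axis=-1)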

  16. Moisture determination in composite materials using positron lifetime techniques

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Holt, W. R.; Mock, W., Jr.

    1980-01-01

    A technique was developed which has the potential of providing information on the moisture content as well as its depth in the specimen. This technique was based on the dependence of positron lifetime on the moisture content of the composite specimen. The positron lifetime technique of moisture determination and the results of the initial studies are described.

  17. Information Extraction Using Controlled English to Support Knowledge-Sharing and Decision-Making

    DTIC Science & Technology

    2012-06-01

    or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that enable forces...processor is run to turn the atomic CE into a more “stylistically felicitous” CE, using techniques such as: aggregating all information about an entity

  18. Recommendation in Higher Education Using Data Mining Techniques

    ERIC Educational Resources Information Center

    Vialardi, Cesar; Bravo, Javier; Shafti, Leila; Ortigosa, Alvaro

    2009-01-01

    One of the main problems faced by university students is to take the right decision in relation to their academic itinerary based on available information (for example courses, schedules, sections, classrooms and professors). In this context, this work proposes the use of a recommendation system based on data mining techniques to help students to…

  19. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial Intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  20. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
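
    A sketch of the core loop under stated assumptions: features are tracked with OpenCV's pyramidal Lucas-Kanade optical flow, and the median image-plane displacement is taken as the frame-to-frame translation. The pixels-to-meters scale factor would come from the camera height and intrinsics; the value below is hypothetical.

      # Monocular downward-looking visual odometry sketch (illustrative only).
      import cv2
      import numpy as np

      METERS_PER_PIXEL = 0.002   # assumed from camera height and focal length

      def frame_translation(prev_gray, curr_gray):
          """Median optical-flow displacement (dx, dy) in meters."""
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=8)
          if pts is None:
              return 0.0, 0.0
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good = status.ravel() == 1
          flow = (nxt[good] - pts[good]).reshape(-1, 2)
          if len(flow) == 0:
              return 0.0, 0.0
          dx, dy = np.median(flow, axis=0) * METERS_PER_PIXEL
          return float(dx), float(dy)

      # Accumulate frame_translation() over the video to dead-reckon position,
      # e.g., as a measurement input to an IMU-based sensor fusion filter.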

  1. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF over single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective, and it needs to be aligned with objective machine metrics.

  2. Space based topographic mapping experiment using Seasat synthetic aperture radar and LANDSAT 3 return beam vidicon imagery

    NASA Technical Reports Server (NTRS)

    Mader, G. L.

    1981-01-01

    A technique for producing topographic information is described which is based on same side/same time viewing using a dissimilar combination of radar imagery and photographic images. Common geographic areas viewed from similar space reference locations produce scene elevation displacements in opposite directions, and proper use of this characteristic can yield the perspective information necessary for determination of base-to-height ratios. These base-to-height ratios can in turn be used to produce a topographic map. A test area covering the Harrisburg, Pennsylvania region was observed by synthetic aperture radar on the Seasat satellite and by return beam vidicon on the LANDSAT-3 satellite. The techniques developed for the scaling, re-orientation and common registration of the two images are presented along with the topographic determination data. Topographic determination based exclusively on the image content is compared to the map information, which is used as a performance calibration base.

  3. High-speed engine/component performance assessment using exergy and thrust-based methods

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.

    1996-01-01

    This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.

  4. Hypergraph Based Feature Selection Technique for Medical Diagnosis.

    PubMed

    Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar

    2016-11-01

    The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the original information contained in multidimensional datasets plays a significant role in exploiting the complete benefit they provide. The presence of a large number of features in high-dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection techniques have been commonly used to build robust machine learning models by selecting a subset of relevant features which projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, has been designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT has been validated using the WEKA tool, which shows that RSKHT is computationally attractive and flexible over massive datasets.

  5. A theory for the retrieval of virtual temperature from winds, radiances and the equations of fluid dynamics

    NASA Technical Reports Server (NTRS)

    Tzvi, G. C.

    1986-01-01

    A technique to deduce the virtual temperature from the combined use of the equations of fluid dynamics, observed winds and observed radiances is described. The wind information could come from ground-based, sensitive very high frequency (VHF) Doppler radars and/or from space-borne Doppler lidars. The radiometers are also assumed to be either space-borne and/or ground-based. From traditional radiometric techniques the vertical structure of the temperature can be estimated only crudely. While it has been known for quite some time that the virtual temperature could be deduced from wind information only, such techniques had to assume the infallibility of certain diagnostic relations. The proposed technique is an extension of the Gal-Chen technique. It is assumed that, due to modeling uncertainties, the equations of fluid dynamics are satisfied only in the least-squares sense. The retrieved temperature, however, is constrained to reproduce the observed radiances. It is shown that the combined use of the three sources of information (winds, radiances and the fluid dynamical equations) can result in a unique determination of the vertical temperature structure with spatial and temporal resolution comparable to that of the observed wind.

  6. Knowledge Based System Applications for Guidance and Control (Application des Systemes a Base de Connaissances au Guidage-Pilotage)

    DTIC Science & Technology

    1991-01-01

    techniques and integration concepts. Recent advances in digital computation techniques, including data base management, represent the core enabling...tactical information management and effective pilot interaction are essential. Pilot decision aiding, combat automation, sensor fusion and on-board...tactical battle management concepts offer the opportunity for substantial mission effectiveness improvements. Although real-time tactical military

  7. Traditional versus rule-based programming techniques: Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    To the software design community, the concern over the costs associated with a program's execution time and implementation is great. It is always desirable, and sometimes imperative, that the proper programming technique is chosen which minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.

  8. Application of remote sensing to land and water resource planning: The Pocomoke River Basin, Maryland

    NASA Technical Reports Server (NTRS)

    Wildesen, S. E.; Phillips, E. P.

    1981-01-01

    Because of the size of the Pocomoke River Basin, the inaccessibility of certain areas, and study time constraints, several remote sensing techniques were used to collect base information on the river corridor (a 23.2 km channel) and on a 1.2 km wooded floodplain. This information provided an adequate understanding of the environment and its resources, thus enabling effective management options to be designed. The remote sensing techniques used for assessment included manual analysis of high altitude color-infrared photography, computer-assisted analysis of LANDSAT-2 imagery, and the application of airborne oceanographic lidar for topographic mapping. Results show that each technique was valuable in providing the base data necessary for resource planning.

  9. Fee-Based Services and the Public Library: An Administrative Perspective.

    ERIC Educational Resources Information Center

    Gaines, Ervin J.; Huttner, Marian A.

    1983-01-01

    This article enumerates factors which created demand for fee-based information service (commercial databases, competition for proprietary information in the business world, effectiveness of librarians) and relates experiences at two public libraries. Sources of business, value of advertising, techniques of selling, and hiring and deployment of staff…

  10. Data base management system configuration specification. [computer storage devices

    NASA Technical Reports Server (NTRS)

    Neiers, J. W.

    1979-01-01

    The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.

  11. NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.

    ERIC Educational Resources Information Center

    Zhou, Lina; Zhang, Dongsong

    2003-01-01

    Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

  12. Remote sensing using MIMO systems

    DOEpatents

    Bikhazi, Nicolas; Young, William F; Nguyen, Hung D

    2015-04-28

    A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
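
    The sketch below illustrates the sensing idea in the simplest terms: reduce each channel-state snapshot to a scalar spatial variable (here the dominant singular value of the channel matrix, which is an assumption for illustration rather than the patent's exact construction) and accumulate its time series into a signature.

      # Channel-state-based signature sketch (illustrative only).
      import numpy as np

      def spatial_variable(H):
          """Dominant singular value of the MIMO channel matrix H."""
          return np.linalg.svd(H, compute_uv=False)[0]

      def object_signature(channel_snapshots):
          """Time series of the spatial variable, normalized to zero mean, unit var."""
          series = np.array([spatial_variable(H) for H in channel_snapshots])
          return (series - series.mean()) / (series.std() + 1e-12)

      # Identification: compare a new signature against stored templates, e.g.
      # by correlation: np.dot(sig_new, sig_template) / len(sig_new)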

  13. Integrating multi-criteria techniques with geographical information systems in waste facility location to enhance public participation.

    PubMed

    Higgs, Gary

    2006-04-01

    Despite recent U.K. Government commitments to encourage public participation in environmental decision making, the exercises conducted to date have been largely confined to 'traditional' modes of participation, such as the dissemination of information and encouraging feedback on proposals through, for example, questionnaires or surveys. It is the premise of this paper that participative approaches using IT-based methods, combining geographical information systems (GIS) with multi-criteria evaluation techniques to involve the public in the decision-making process, have the potential to build consensus and reduce disputes and conflicts such as those arising from the siting of different types of waste facilities. The potential of these techniques is documented through a review of the existing literature in order to highlight the opportunities and challenges facing decision makers in increasing the involvement of the public at different stages of the waste facility management process. It is concluded that there are important lessons to be learned by researchers, consultants, managers and decision makers if the barriers hindering the wider use of such techniques are to be overcome.
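
    The computational core of such GIS-based multi-criteria evaluation is typically a weighted linear combination of normalized criterion rasters, with the weights being exactly what public participation could elicit. The sketch below shows this core; the criterion names and weights are invented for illustration.

      # Weighted multi-criteria overlay sketch (illustrative only).
      import numpy as np

      def normalize(raster, benefit=True):
          """Rescale a criterion to [0, 1]; flip if lower raw values are better."""
          r = (raster - raster.min()) / (raster.max() - raster.min() + 1e-12)
          return r if benefit else 1.0 - r

      def suitability(criteria, weights):
          """criteria: list of (H, W) arrays already normalized to [0, 1]."""
          w = np.asarray(weights, dtype=float)
          return sum(wi * c for wi, c in zip(w / w.sum(), criteria))

      # Example: distance to housing (farther is better), road access (closer is
      # better, so its distance raster is normalized with benefit=False), land cost.
      # score = suitability([norm_housing, norm_roads, norm_cost], [0.5, 0.3, 0.2])
      # candidate_sites = score > np.quantile(score, 0.95)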

  14. Quantum technology and cryptology for information security

    NASA Astrophysics Data System (ADS)

    Naqvi, Syed; Riguidel, Michel

    2007-04-01

    Cryptology and information security are set to play a more prominent role in the near future. In this regard, quantum communication and cryptography offer new opportunities to tackle ICT security. Quantum Information Processing and Communication (QIPC) is a scientific field where new conceptual foundations and techniques are being developed. They promise to play an important role in the future of information security. It is therefore essential to have a cross-fertilizing development between quantum technology and cryptology in order to address the security challenges of the emerging quantum era. In this article, we discuss the impact of quantum technology on current as well as future crypto-techniques. We then analyse the assumptions on which quantum computers may operate. We present our vision for the distribution of security attributes using a novel form of trust based on Heisenberg's uncertainty, and for building highly secure quantum networks based on the clear transmission of single photons and/or bundles of photons able to withstand unauthorized reading by means of secure protocols based on the observations of quantum mechanics. We argue that quantum cryptographic systems need to be developed that can take advantage of the laws of physics to provide long-term security based on solid assumptions. This requires a structured integration effort to deploy quantum technologies within the existing security infrastructure. Finally, we conclude that classical cryptographic techniques need to be redesigned and upgraded in view of the growing threat of cryptanalytic attacks posed by quantum information processing devices, leading to the development of post-quantum cryptography.

  15. User needs on Nursing Net (The Kango Net) - analyzing the total consultation page - http://www.kango-net.jp/en/index.html.

    PubMed

    Sakyo, Yumi; Nakayama, Kazuhiro; Komatsu, Hiroko; Setoyama, Yoko

    2009-01-01

    People are required to take in and comprehend a massive amount of health information and in turn make serious decisions based on that information. We, at St. Luke's College of Nursing, provide a rich selection of high-quality health information and have set up Nursing Net (The Kango Net; kango is Japanese for nursing). This website provides information for consumers as well as people interested in the nursing profession. In an attempt to identify the needs of users, this study conducted an analysis of the contents of the total consultation page. Many readers indicated that responses to questions about nursing techniques and symptoms proved instrumental in addressing their queries. Based on the results of this study, we can conclude that this is an easy-to-access, convenient site for obtaining health information about physical symptoms and nursing techniques.

  16. A Sparsity-based Framework for Resolution Enhancement in Optical Fault Analysis of Integrated Circuits

    DTIC Science & Technology

    2015-01-01

    for IC fault detection. This section provides background information on inversion methods. Conventional inversion techniques and their shortcomings are...physical techniques, electron beam imaging/analysis, ion beam techniques, scanning probe techniques. Electrical tests are used to detect faults in an...hand, there is also the second harmonic technique through which duty cycle degradation faults are detected by collecting the magnitude and the phase of

  17. Data-Mining Techniques in Detecting Factors Linked to Academic Achievement

    ERIC Educational Resources Information Center

    Martínez Abad, Fernando; Chaparro Caso López, Alicia A.

    2017-01-01

    In light of the emergence of statistical analysis techniques based on data mining in education sciences, and the potential they offer to detect non-trivial information in large databases, this paper presents a procedure used to detect factors linked to academic achievement in large-scale assessments. The study is based on a non-experimental,…

  18. 75 FR 44974 - Intent To Request Renewal From OMB of One Current Public Collection of Information: Sensitive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-30

    ... TSA PRA Officer, Office of Information Technology (OIT), TSA-11, Transportation Security... technological collection techniques or other forms of information technology. Information Collection Requirement... history records check (CHRC), (2) a name-based check to determine whether the individual poses or is...

  19. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    PubMed

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  20. A Feature-Reinforcement-Based Approach for Supporting Poly-Lingual Category Integration

    NASA Astrophysics Data System (ADS)

    Wei, Chih-Ping; Chen, Chao-Chi; Cheng, Tsang-Hsiang; Yang, Christopher C.

    Document-category integration (or category integration for short) is fundamental to many e-commerce applications, including information integration along supply chains and information aggregation by intermediaries. Because of the trend of globalization, the requirement for category integration has been extended from monolingual to poly-lingual settings. Poly-lingual category integration (PLCI) aims to integrate two document catalogs, each of which consists of documents written in a mix of languages. Several category integration techniques have been proposed in the literature, but these techniques focus only on monolingual category integration rather than PLCI. In this study, we propose a feature-reinforcement-based PLCI (namely, FR-PLCI) technique that takes into account the master documents of all languages when integrating source documents (in the source catalog) written in a specific language into the master catalog. Using the monolingual category integration (MnCI) technique as a performance benchmark, our empirical evaluation results show that our proposed FR-PLCI technique achieves better integration accuracy than MnCI does in both English and Chinese category integration tasks.

  1. Intramuscular injection technique: an evidence-based approach.

    PubMed

    Ogston-Tuck, Sherri

    2014-09-30

    Intramuscular injections require a thorough and meticulous approach to patient assessment and injection technique. This article, the second in a series of two, reviews the evidence base to inform safer practice and to consider the evidence for nursing practice in this area. A framework for safe practice is included, identifying important points for safe technique, patient care and clinical decision making. It also highlights the ongoing debate over the selection of intramuscular injection sites, predominantly the ventrogluteal and dorsogluteal muscles.

  2. Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.

    2015-11-01

    The integration of physiological (PET) and anatomical (CT) images for cancer delineation requires an accurate spatial registration technique. Although the hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motions. This paper proposes a hybrid feature-intensity-based registration technique for the hybrid PET/CT scanner. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the structures (lung, heart and tumor) segmented from both images. The registration was performed based on a modified multi-modality demon registration with a multiresolution scheme. Apart from visually observable improvements, the proposed registration technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets show greater improvement in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
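
    The abstract does not spell out the construction of the 3D hybrid mean-median pre-filter; the sketch below is one plausible reading, assuming the hybrid simply blends a mean and a median filter with a weight alpha (the window size and weight are illustrative choices, not the authors' implementation).

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter

        def hybrid_mean_median(volume, size=3, alpha=0.5):
            # Blend a 3D median filter (edge-preserving) with a 3D mean
            # filter (noise-smoothing); alpha weights the median component.
            vol = volume.astype(np.float64)
            mean_part = uniform_filter(vol, size=size)
            median_part = median_filter(vol, size=size)
            return alpha * median_part + (1.0 - alpha) * mean_part

        # Example: pre-filter a simulated Poisson-noisy PET sinogram volume.
        sinogram = np.random.poisson(lam=5.0, size=(64, 64, 64)).astype(np.float64)
        filtered = hybrid_mean_median(sinogram, size=3, alpha=0.5)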

  3. Integrated Artificial Intelligence Approaches for Disease Diagnostics.

    PubMed

    Vashistha, Rajat; Chhabra, Deepak; Shukla, Pratyoosh

    2018-06-01

    Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing the interpretation of crucial information in medical data and converting it into optimized and organized information for diagnostics. This is possible thanks to substantial advances in artificial intelligence, computer-aided diagnostics, virtual assistants, robotic surgery, augmented reality and genome editing (based on AI) technologies. Such techniques serve as products for diagnosing emerging microbial and non-microbial diseases. This article presents a combinatory approach to using these techniques and provides therapeutic solutions towards utilizing them in disease diagnostics.

  4. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  5. Low-bit rate feedback strategies for iterative IA-precoded MIMO-OFDM-based systems.

    PubMed

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.

  6. Towards NIRS-based hand movement recognition.

    PubMed

    Paleari, Marco; Luciani, Riccardo; Ariano, Paolo

    2017-07-01

    This work reports preliminary results on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data-gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and is at the base of many gaming, social, and medical applications. Although, in recent years, the use of muscle information extracted by sEMG has spread from medical applications into the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to obtain another point of view on the muscle phenomena and proved that, within a specific selection of movements, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose three different multimodal movement recognition approaches and compare their performances.

  7. A simple low cost latent fingerprint sensor based on deflectometry and WFT analysis

    NASA Astrophysics Data System (ADS)

    Dhanotia, Jitendra; Chatterjee, Amit; Bhatia, Vimal; Prakash, Shashi

    2018-02-01

    In criminal investigations, latent fingerprints are one of the most significant forms of evidence and the most commonly used forensic investigation tool worldwide. The existing non-contact latent fingerprint detection systems are bulky, expensive and require a shock- and vibration-resistant environment, thereby limiting their usability outside the laboratory. In this article, a compact, full-field, low-cost technique for profiling of fingerprints using deflectometry is proposed. Using inexpensive mobile phone screen based structured illumination, and a windowed Fourier transform (WFT) based phase retrieval mechanism, the 2D and 3D phase plots reconstruct the profile information of the fingerprint. The phase information is also used to confirm a match between two fingerprints in real time. Since the proposed technique is non-interferometric, the measurements are least affected by environmental perturbations. Using the proposed technique, a portable sensor capable of field deployment has been realized.

  8. Theoretical Bound of CRLB for Energy Efficient Technique of RSS-Based Factor Graph Geolocation

    NASA Astrophysics Data System (ADS)

    Kahar Aziz, Muhammad Reza; Heriansyah; Saputra, EfaMaydhona; Musa, Ardiansyah

    2018-03-01

    To support the growth of wireless geolocation as a key technology of the future, this paper derives a theoretical bound, i.e., the Cramer-Rao lower bound (CRLB), for the energy-efficient received signal strength (RSS)-based factor graph wireless geolocation technique. The theoretical bound derivation is crucially important for evaluating whether the energy-efficient technique of RSS-based factor graph wireless geolocation is effective, as well as for opening the opportunity for further innovation on the technique. The CRLB is derived using the Fisher information matrix (FIM) of the main formula of the RSS-based factor graph geolocation technique, which relies on the Jacobian matrix. The simulation result shows that the derived CRLB has the highest accuracy as a bound, exhibiting the lowest root mean squared error (RMSE) curve compared to the RMSE curve of the RSS-based factor graph geolocation technique. Hence, the derived CRLB serves as the lower bound for the energy-efficient technique of RSS-based factor graph wireless geolocation.
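
    As a concrete illustration, assume the standard log-distance path-loss model (the paper's exact signal model may differ in detail). The RSS observed at anchor $i$ from a target at $(x, y)$ is

        $P_i = P_0 - 10\,n\,\log_{10}(d_i/d_0) + w_i$, with $w_i \sim \mathcal{N}(0, \sigma^2)$ and $d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$.

    The Jacobian has rows $\partial P_i/\partial x = -\frac{10\,n}{\ln 10}\,\frac{x - x_i}{d_i^2}$ and $\partial P_i/\partial y = -\frac{10\,n}{\ln 10}\,\frac{y - y_i}{d_i^2}$, giving the Fisher information matrix $\mathbf{F} = \frac{1}{\sigma^2}\,\mathbf{J}^{\mathsf{T}}\mathbf{J}$. Any unbiased position estimator then obeys $\mathrm{RMSE} \geq \sqrt{\operatorname{tr}(\mathbf{F}^{-1})}$, which is the CRLB curve compared against the factor graph technique's RMSE.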

  9. An Electronic Engineering Curriculum Design Based on Concept-Mapping Techniques

    ERIC Educational Resources Information Center

    Toral, S. L.; Martinez-Torres, M. R.; Barrero, F.; Gallardo, S.; Duran, M. J.

    2007-01-01

    Curriculum design is a concern in European Universities as they face the forthcoming European Higher Education Area (EHEA). This process can be eased by the use of scientific tools such as Concept-Mapping Techniques (CMT) that extract and organize the most relevant information from experts' experience using statistical techniques, and helps a…

  10. Music to knowledge: A visual programming environment for the development and evaluation of music information retrieval techniques

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Downie, J. Stephen

    2005-09-01

    The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform JAVA-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as to foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-JAVA-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]

  11. Three-Dimensional Maps for Disaster Management

    NASA Astrophysics Data System (ADS)

    Bandrova, T.; Zlatanova, S.; Konecny, M.

    2012-07-01

    Geo-information techniques have proven their usefulness for the purposes of early warning and emergency response. These techniques enable us to generate extensive geo-information to make informed decisions in response to natural disasters that lead to better protection of citizens, reduce damage to property, improve the monitoring of these disasters, and facilitate estimates of the damages and losses resulting from them. The maintenance and accessibility of spatial information has improved enormously with the development of spatial data infrastructures (SDIs), especially with second-generation SDIs, in which the original product-based SDI was improved to a process-based SDI. Through the use of SDIs, geo-information is made available to local, national and international organisations in regions affected by natural disasters as well as to volunteers serving in these areas. Volunteer-based systems for information collection (e.g., Ushahidi) have been created worldwide. However, the use of 3D maps is still limited. This paper discusses the applicability of 3D geo-information to disaster management. We discuss some important aspects of maps for disaster management, such as user-centred maps, the necessary components for 3D maps, symbols, and colour schemas. In addition, digital representations are evaluated with respect to their visual controls, i.e., their usefulness for the navigation and exploration of the information. Our recommendations are based on responses from a variety of users of these technologies, including children, geospecialists and disaster managers from different countries.

  12. Proceedings of the National Conference on Energy Resource Management. Volume 1: Techniques, Procedures and Data Bases

    NASA Technical Reports Server (NTRS)

    Brumfield, J. O. (Editor); Schiffman, Y. M. (Editor)

    1982-01-01

    Topics dealing with the integration of remotely sensed data with geographic information system for application in energy resources management are discussed. Associated remote sensing and image analysis techniques are also addressed.

  13. Real-time scalable visual analysis on mobile devices

    NASA Astrophysics Data System (ADS)

    Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William

    2008-02-01

    Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.

  14. a New Approach for Progressive Dense Reconstruction from Consecutive Images Based on Prior Low-Density 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2017-09-01

    In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, timely provision of geospatial information is crucial for the effective development of response and management plans. Due to the inaccessibility of the affected areas and the limited budget of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite rapid collection of the required data using these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of the affected scenes which can deliver and improve the needed geospatial information incrementally. This approach is implemented based on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature descriptor, feature matching and point verification techniques to optimize and speed up the 3D dense scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and a prior low-density airborne laser scan over the same area is conducted, and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to prove the computational efficiency and competency of this technique for delivering geospatial information with pre-specified accuracy.

  15. Intelligent evaluation of color sensory quality of black tea by visible-near infrared spectroscopy technology: A comparison of spectra and color data information

    NASA Astrophysics Data System (ADS)

    Ouyang, Qin; Liu, Yan; Chen, Quansheng; Zhang, Zhengzhu; Zhao, Jiewen; Guo, Zhiming; Gu, Hang

    2017-06-01

    Instrumental testing of black tea samples, as a replacement for human panel tests, has recently attracted massive attention. This study investigated the feasibility of estimating the color sensory quality of black tea samples using the VIS-NIR spectroscopy technique, comparing the performances of models based on spectral and color information. In model calibration, the variables were first selected by a genetic algorithm (GA); then nonlinear back propagation-artificial neural network (BPANN) models were established based on the optimal variables. In comparison with the other models, GA-BPANN models built from spectral information showed the best performance, with a correlation coefficient of 0.8935 and a root mean square error of 0.392 in the prediction set. In addition, models based on spectral information provided better performance than those based on the color parameters. Therefore, the VIS-NIR spectroscopy technique is a promising tool for rapid and accurate evaluation of the sensory quality of black tea samples.

  16. Intelligent evaluation of color sensory quality of black tea by visible-near infrared spectroscopy technology: A comparison of spectra and color data information.

    PubMed

    Ouyang, Qin; Liu, Yan; Chen, Quansheng; Zhang, Zhengzhu; Zhao, Jiewen; Guo, Zhiming; Gu, Hang

    2017-06-05

    Instrumental testing of black tea samples, as a replacement for human panel tests, has recently attracted massive attention. This study investigated the feasibility of estimating the color sensory quality of black tea samples using the VIS-NIR spectroscopy technique, comparing the performances of models based on spectral and color information. In model calibration, the variables were first selected by a genetic algorithm (GA); then nonlinear back propagation-artificial neural network (BPANN) models were established based on the optimal variables. In comparison with the other models, GA-BPANN models built from spectral information showed the best performance, with a correlation coefficient of 0.8935 and a root mean square error of 0.392 in the prediction set. In addition, models based on spectral information provided better performance than those based on the color parameters. Therefore, the VIS-NIR spectroscopy technique is a promising tool for rapid and accurate evaluation of the sensory quality of black tea samples. Copyright © 2017 Elsevier B.V. All rights reserved.
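
    The calibration pipeline described in the two records above (GA variable selection followed by a back-propagation network) can be sketched as follows, assuming a simple truncation-selection GA with uniform crossover and a small scikit-learn MLP standing in for the BPANN; the population sizes, operators and toy spectra are all illustrative, not the authors' settings.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def fitness(mask, X, y):
            # Score a wavelength subset by cross-validated R^2 of a small
            # back-propagation network (a stand-in for the paper's BPANN).
            if mask.sum() == 0:
                return -np.inf
            net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
            return cross_val_score(net, X[:, mask], y, cv=3).mean()

        def ga_select(X, y, pop_size=20, generations=10, p_mut=0.05):
            n_vars = X.shape[1]
            pop = rng.random((pop_size, n_vars)) < 0.3          # random initial subsets
            for _ in range(generations):
                scores = np.array([fitness(ind, X, y) for ind in pop])
                parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    child = np.where(rng.random(n_vars) < 0.5, a, b)  # uniform crossover
                    child ^= rng.random(n_vars) < p_mut               # bit-flip mutation
                    children.append(child)
                pop = np.vstack([parents, np.array(children)])
            scores = np.array([fitness(ind, X, y) for ind in pop])
            return pop[int(np.argmax(scores))]

        # Toy "spectra": 60 samples x 50 wavelengths, a sensory score target.
        X = rng.normal(size=(60, 50))
        y = X[:, 5] - 0.5 * X[:, 20] + 0.1 * rng.normal(size=60)
        best_mask = ga_select(X, y)     # boolean mask of selected wavelengths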

  17. Advanced Feedback Methods in Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1985-01-01

    In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…

  18. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions suffer from decreased visibility and hidden information. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over-/under-enhancement issues, while fuzzy-based enhancement techniques suffer from over-/under-saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique is proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others regarding visible edge ratio, color gradients and number of saturated pixels.

  19. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is that it provides earth's surface information such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms, such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be acquired using the object-based image classification technique. In addition, object-based classification uses image segmentation based on parameters such as scale, form, colour, smoothness and compactness. This research aims to compare the results of land cover classification and its change detection between the parallelepiped pixel-based and the object-based classification methods. The location of this research is Bogor, with a 20-year observation range from 1996 until 2016. This region is known for urban areas that change continuously due to rapid development, making time-series land cover information of this region particularly interesting.
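
    Of the pixel-based algorithms listed above, the parallelepiped classifier is the simplest to illustrate: each class is a per-band min/max box learned from training pixels, and a pixel is labelled only if it falls inside a box. A minimal NumPy sketch (band values and classes are invented for illustration):

        import numpy as np

        def parallelepiped_train(samples_by_class):
            # For each class, store per-band min/max bounds from training pixels.
            return {c: (s.min(axis=0), s.max(axis=0)) for c, s in samples_by_class.items()}

        def parallelepiped_classify(pixels, bounds, unclassified=-1):
            # A pixel is assigned to the first class whose box contains it;
            # pixels falling inside no box remain unclassified.
            labels = np.full(len(pixels), unclassified)
            for c, (lo, hi) in bounds.items():
                inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
                labels[inside & (labels == unclassified)] = c
            return labels

        # Toy example with two spectral bands and two land cover classes.
        train = {0: np.array([[10, 50], [14, 55], [12, 52]]),   # e.g. vegetation
                 1: np.array([[80, 20], [85, 25], [82, 22]])}   # e.g. built-up
        bounds = parallelepiped_train(train)
        print(parallelepiped_classify(np.array([[12, 53], [83, 21], [40, 40]]), bounds))
        # -> [ 0  1 -1]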

  20. Wind Gust Measurement Techniques-From Traditional Anemometry to New Possibilities.

    PubMed

    Suomi, Irene; Vihma, Timo

    2018-04-23

    Information on wind gusts is needed for assessment of wind-induced damage and risks to safety. The measurement of wind gust speed requires a high temporal resolution of the anemometer system, because the gust is defined as a short-duration (seconds) maximum of the fluctuating wind speed. Until the digitalization of wind measurements in the 1990s, the wind gust measurements suffered from limited recording and data processing resources. Therefore, the majority of continuous wind gust records date back at most only by 30 years. Although the response characteristics of anemometer systems are good enough today, the traditional measurement techniques at weather stations based on cup and sonic anemometers are limited to heights and regions where the supporting structures can reach. Therefore, existing measurements are mainly concentrated over densely-populated land areas, whereas from remote locations, such as the marine Arctic, wind gust information is available only from sparse coastal locations. Recent developments of wind gust measurement techniques based on turbulence measurements from research aircraft and from Doppler lidar can potentially provide new information from heights and locations unreachable by traditional measurement techniques. Moreover, fast-developing measurement methods based on Unmanned Aircraft Systems (UASs) may add to better coverage of wind gust measurements in the future. In this paper, we provide an overview of the history and the current status of anemometry from the perspective of wind gusts. Furthermore, a discussion on the potential future directions of wind gust measurement techniques is provided.
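
    Given the definition above, a gust as a short-duration maximum of the fluctuating wind speed, the computation from a high-rate anemometer record is straightforward; the 20 Hz sampling rate and the 3 s running-mean window in this sketch are illustrative assumptions:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)

        # Simulated 20 Hz anemometer record, 10 minutes long.
        fs = 20
        idx = pd.date_range("2018-01-01", periods=10 * 60 * fs,
                            freq=pd.Timedelta(seconds=1 / fs))
        wind = pd.Series(8.0 + 0.02 * rng.normal(size=len(idx)).cumsum()
                         + rng.normal(size=len(idx)), index=idx).clip(lower=0.0)

        # Gust as the maximum of a 3 s running mean within the period.
        gust = wind.rolling("3s").mean().max()
        mean_speed = wind.mean()
        print(f"mean {mean_speed:.1f} m/s, gust {gust:.1f} m/s, "
              f"gust factor {gust / mean_speed:.2f}")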

  1. Distillation of Greenberger-Horne-Zeilinger states by selective information manipulation.

    PubMed

    Cohen, O; Brun, T A

    2000-06-19

    Methods for distilling Greenberger-Horne-Zeilinger (GHZ) states from arbitrary entangled tripartite pure states are described. These techniques work for virtually any input state. Each technique has two stages which we call primary and secondary distillations. Primary distillation produces a GHZ state with some probability, so that when applied to an ensemble of systems a certain percentage is discarded. Secondary distillation produces further GHZs from the discarded systems. These protocols are developed with the help of an approach to quantum information theory based on absolutely selective information, which has other potential applications.

  2. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data for improving existing lip boundary detection techniques. We show that depth information, as provided by our approach, can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and the subject's ethnicity. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, along with color information, can provide more accurate lip boundary detection results as compared to color-only based techniques.

  3. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. But still none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.

  4. 76 FR 26319 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... Based Standard for Fire Protection for Light Water Reactor Electric Generating Plants, 2001 Edition... techniques or other forms of information technology? The public may examine and have copied, for a fee...

  5. A UML-based ontology for describing hospital information system architectures.

    PubMed

    Winter, A; Brigl, B; Wendt, T

    2001-01-01

    To control the heterogeneity inherent in hospital information systems, information management needs appropriate methods or techniques for modeling hospital information systems. This paper shows that, for several reasons, available modeling approaches are not able to answer relevant questions of information management. To overcome this major deficiency we offer a UML-based ontology for describing hospital information system architectures. This ontology comprises three layers: the domain layer, the logical tool layer, and the physical tool layer, and defines the relevant components. The relations between these components, especially between components of different layers, make it possible to answer our information management questions.

  6. Confidence Intervals from Realizations of Simulated Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, W.; Ratkiewicz, A.; Ressler, J. J.

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.
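
    A minimal sketch of techniques 1) and 2) above: random realizations of correlated input data followed by an empirical confidence interval on the model prediction. The linear model, nominal values and covariance below are placeholders, not the paper's k_eff calculation:

        import numpy as np

        rng = np.random.default_rng(42)

        def model(sigma_u235, sigma_pu239):
            # Placeholder model: any function mapping input data to a prediction
            # (the paper's quantity of interest is k_eff from (n, f) cross sections).
            return 0.7 * sigma_u235 + 0.3 * sigma_pu239

        # Input data with known uncertainties and correlation (arbitrary units).
        mean = np.array([1.2, 1.8])
        cov = np.array([[0.010, 0.004],
                        [0.004, 0.020]])

        # 1) Random realizations of the correlated inputs (Monte-Carlo sampling).
        samples = rng.multivariate_normal(mean, cov, size=10_000)
        predictions = model(samples[:, 0], samples[:, 1])

        # 2) Confidence interval from the empirical prediction distribution.
        lo, hi = np.percentile(predictions, [2.5, 97.5])
        print(f"95% CI: [{lo:.4f}, {hi:.4f}]")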

  7. Information theoretic partitioning and confidence based weight assignment for multi-classifier decision level fusion in hyperspectral target recognition applications

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Bruce, L. M.

    2007-04-01

    There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
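
    The confidence-based weight assignment described above can be sketched directly: each subspace classifier's training accuracy becomes its weight when fusing posteriors for test pixels. The posterior arrays and accuracy values here are invented for illustration:

        import numpy as np

        def confidence_weighted_fusion(class_probs, train_accuracies):
            # class_probs: list of (n_samples, n_classes) posterior arrays,
            # one per classifier (one per spectral subspace in this setup).
            # train_accuracies: each classifier's training accuracy, used as
            # its confidence weight, as the abstract describes.
            w = np.asarray(train_accuracies, dtype=float)
            w = w / w.sum()                              # normalize weights
            fused = sum(wi * p for wi, p in zip(w, class_probs))
            return fused.argmax(axis=1)

        # Three classifiers, four test pixels, two classes.
        p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
        p2 = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7], [0.8, 0.2]])
        p3 = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.6, 0.4]])
        labels = confidence_weighted_fusion([p1, p2, p3],
                                            train_accuracies=[0.95, 0.80, 0.60])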

  8. Flag-based detection of weak gas signatures in long-wave infrared hyperspectral image sequences

    NASA Astrophysics Data System (ADS)

    Marrinan, Timothy; Beveridge, J. Ross; Draper, Bruce; Kirby, Michael; Peterson, Chris

    2016-05-01

    We present a flag manifold based method for detecting chemical plumes in long-wave infrared hyperspectral movies. The method encodes temporal and spatial information related to a hyperspectral pixel into a flag, or nested sequence of linear subspaces. The technique used to create the flags pushes information about the background clutter, ambient conditions, and potential chemical agents into the leading elements of the flags. Exploiting this temporal information allows for a detection algorithm that is sensitive to the presence of weak signals. This method is compared to existing techniques qualitatively on real data and quantitatively on synthetic data to show that the flag-based algorithm consistently performs better on data when the SINRdB is low, and beats the ACE and MF algorithms in probability of detection for low probabilities of false alarm even when the SINRdB is high.

  9. Study on the key technology of optical encryption based on compressive ghost imaging with double random-phase encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei

    2015-12-01

    An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE; the DRPE-coded image is then encrypted by the computational ghost imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information stays the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI at the same sampling times. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  10. Knowledge of skull base anatomy and surgical implications of human sacrifice among pre-Columbian Mesoamerican cultures.

    PubMed

    Lopez-Serna, Raul; Gomez-Amador, Juan Luis; Barges-Coll, Juan; Arriada-Mendicoa, Nicasio; Romero-Vargas, Samuel; Ramos-Peek, Miguel; Celis-Lopez, Miguel Angel; Revuelta-Gutierrez, Rogelio; Portocarrero-Ortiz, Lesly

    2012-08-01

    Human sacrifice became a common cultural trait during the advanced phases of Mesoamerican civilizations. This phenomenon, influenced by complex religious beliefs, included several practices such as decapitation, cranial deformation, and the use of human cranial bones for skull mask manufacturing. Archaeological evidence suggests that all of these practices required specialized knowledge of skull base and upper cervical anatomy. The authors conducted a systematic search for information on skull base anatomical and surgical knowledge among Mesoamerican civilizations. A detailed exposition of these results is presented, along with some interesting information extracted from historical documents and pictorial codices to provide a better understanding of skull base surgical practices among these cultures. Paleoforensic evidence from the Great Temple of Tenochtitlan indicates that Aztec priests used a specialized decapitation technique, based on a deep anatomical knowledge. Trophy skulls were submitted through a stepwise technique for skull mask fabrication, based on skull base anatomical landmarks. Understanding pre-Columbian Mesoamerican religions can only be realized by considering them in their own time and according to their own perspective. Several contributions to medical practice might have arisen from anatomical knowledge emerging from human sacrifice and decapitation techniques.

  11. 77 FR 47867 - Agency Information Collection Activities: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-10

    ... phenology information to Nature's Notebook through a browser-based web application or via mobile applications for iPhone and Android operating systems, meeting GPEA requirements. The web application interface... techniques or other forms of information technology. Please note that the comments submitted in response to...

  12. Research on BIM-based building information value chain reengineering

    NASA Astrophysics Data System (ADS)

    Hui, Zhao; Weishuang, Xie

    2017-04-01

    Value is achieved and added to building engineering information through a chain flow, that is, the building information value chain. Based on a deconstruction of the information chain for construction information in the traditional information mode, this paper clarifies the value characteristics and requirements of each stage of a construction project. In order to achieve value-added building information, the paper deconstructs the traditional building information value chain, reengineers the information value chain model on the basis of BIM theory and techniques, builds a value-added management model and analyses the value of the model.

  13. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract, from these files, the exposure-relevant information contained in the structured elements of the DICOM metadata. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
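
    A sketch of the general approach, using the pydicom library to walk the structured content of a dose report and collect numeric items; the file name is hypothetical, and the original study's program and field selection are not specified in the abstract:

        import pydicom

        def collect_numeric_items(item, results):
            # Recursively visit content items of a dose structured report and
            # keep every numeric measurement, keyed by its concept name.
            if ("ConceptNameCodeSequence" in item
                    and item.get("ValueType") == "NUM"
                    and "MeasuredValueSequence" in item):
                name = item.ConceptNameCodeSequence[0].CodeMeaning
                results[name] = float(item.MeasuredValueSequence[0].NumericValue)
            for child in item.get("ContentSequence", []):
                collect_numeric_items(child, results)

        ds = pydicom.dcmread("dose_report.dcm")   # hypothetical dose report path
        results = {}
        for item in ds.get("ContentSequence", []):
            collect_numeric_items(item, results)
        print(results)   # concept names (e.g. "DLP") mapped to numeric values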

  14. Colour flow and motion imaging.

    PubMed

    Evans, D H

    2010-01-01

    Colour flow imaging (CFI) is an ultrasound imaging technique whereby colour-coded maps of tissue velocity are superimposed on grey-scale pulse-echo images of tissue anatomy. The most widespread use of the method is to image the movement of blood through arteries and veins, but it may also be used to image the motion of solid tissue. The production of velocity information is technically more demanding than the production of the anatomical information, partly because the target of interest is often blood, which backscatters significantly less power than solid tissues, and partly because several transmit-receive cycles are necessary for each velocity estimate. This review first describes the various components of basic CFI systems necessary to generate the velocity information and to combine it with anatomical information. It then describes a number of variations on the basic autocorrelation technique, including cross-correlation-based techniques, power Doppler, Doppler tissue imaging, and three-dimensional (3D) Doppler imaging. Finally, a number of limitations of current techniques and some potential solutions are reviewed.
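
    As a concrete illustration of the basic autocorrelation technique mentioned above, a lag-one (Kasai-style) velocity estimator over a pulse ensemble; the probe frequency, PRF and toy echo data are invented, and sign conventions vary between systems:

        import numpy as np

        def kasai_velocity(iq, prf, f0, c=1540.0):
            # iq: complex baseband ensemble, shape (n_pulses, n_depths).
            # Average the lag-1 autocorrelation over the ensemble; its phase
            # is the mean Doppler phase shift per pulse interval.
            r1 = np.mean(iq[1:] * np.conj(iq[:-1]), axis=0)
            phase = np.angle(r1)
            return c * prf * phase / (4.0 * np.pi * f0)   # axial velocity, m/s

        # Toy ensemble: a scatterer moving at 0.3 m/s, 5 MHz probe, 4 kHz PRF.
        f0, prf, v_true = 5e6, 4e3, 0.3
        n_pulses = 16
        fd = 2 * v_true * f0 / 1540.0                 # Doppler shift, Hz
        t = np.arange(n_pulses) / prf
        iq = np.exp(2j * np.pi * fd * t)[:, None] * np.ones((1, 8))
        print(kasai_velocity(iq, prf, f0))            # ~0.3 at every depth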

  15. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features.

    PubMed

    Nikfarjam, Azadeh; Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Gonzalez, Graciela

    2015-05-01

    Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
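
    A sketch of the word-embedding cluster features at the core of the approach described above: word vectors are clustered and the cluster id is added to each token's feature dictionary for a CRF sequence labeler. The vocabulary, random vectors and feature set are stand-ins for the pretrained embeddings and richer features the system uses:

        import numpy as np
        from sklearn.cluster import KMeans

        # Stand-in embeddings: in the described system these are word2vec-style
        # vectors pretrained on unlabeled user posts; random vectors here.
        vocab = ["headache", "nausea", "dizzy", "pill", "dose", "took", "feel"]
        rng = np.random.default_rng(1)
        embeddings = {w: rng.normal(size=50) for w in vocab}

        # Cluster the embedding space; the cluster id becomes a token feature.
        kmeans = KMeans(n_clusters=3, n_init=10, random_state=1)
        kmeans.fit(np.array([embeddings[w] for w in vocab]))
        cluster_of = {w: int(c) for w, c in zip(vocab, kmeans.labels_)}

        def token_features(tokens, i):
            # Feature dict for a CRF sequence labeler (e.g. CRFsuite bindings).
            w = tokens[i]
            return {
                "word.lower": w.lower(),
                "embedding.cluster": cluster_of.get(w.lower(), -1),
                "prev.word": tokens[i - 1].lower() if i > 0 else "<s>",
            }

        print(token_features(["took", "pill", "feel", "dizzy"], 3))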

  16. Automatic classification of minimally invasive instruments based on endoscopic image sequences

    NASA Astrophysics Data System (ADS)

    Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2009-02-01

    Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.

  17. RFID-based information visibility for hospital operations: exploring its positive effects using discrete event simulation.

    PubMed

    Asamoah, Daniel A; Sharda, Ramesh; Rude, Howard N; Doran, Derek

    2016-10-12

    Long queues and wait times often occur at hospitals and affect the smooth delivery of health services. To improve hospital operations, prior studies have developed scheduling techniques to minimize patient wait times. However, these studies fall short of demonstrating how such techniques respond to the real-time information needs of hospitals and efficiently manage wait times. This article presents a multi-method study on the positive impact of providing real-time scheduling information to patients using RFID technology. Using a simulation methodology, we present a generic scenario, which can be mapped to real-life situations, in which patients can select the order of laboratory services. The study shows that the information visibility offered by RFID technology results in decreased wait times and improves resource utilization. We also discuss the applicability of the results based on field interviews granted by hospital clinicians and administrators on the perceived barriers and benefits of an RFID system.

  18. Managing World Wide Web Information in a Frames Environment: A Guide to Constructing Web Pages Using Frames.

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    1998-01-01

    Explains how to build World Wide Web home pages using frames-based HTML so that librarians can manage Web-based information and improve their home pages. Provides descriptions and 15 examples for writing frames-HTML code, including advanced concepts and additional techniques for home-page design. (Author/LRW)

  19. A Rapid Information Dissemination System--A Follow-Up Report.

    ERIC Educational Resources Information Center

    Miner, Lynn E.; Niederjohn, Russel J.

    1980-01-01

    A rapid information dissemination system at Marquette University which uses an audio-based technique for quickly transmitting time-dependent information to research faculty is described. The system uses a tape recorder, a special purpose speech processing system, and a telephone auto-answer recorder. Present uses and proposed future modifications…

  20. 77 FR 65391 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ...) ways to enhance the quality, utility, and clarity of the information to be collected; and (4) the use of automated collection techniques or other forms of information technology to minimize the..., there is a focused interest in providing quality and value-based healthcare for Medicare beneficiaries...

  1. Reshaping the Enterprise through an Information Architecture and Process Reengineering.

    ERIC Educational Resources Information Center

    Laudato, Nicholas C.; DeSantis, Dennis J.

    1995-01-01

    The approach used by the University of Pittsburgh (Pennsylvania) in designing a campus-wide information architecture and a framework for reengineering the business process included building consensus on a general philosophy for information systems, using pattern-based abstraction techniques, applying data modeling and application prototyping, and…

  2. B-tree search reinforcement learning for model based intelligent agent

    NASA Astrophysics Data System (ADS)

    Bhuvaneswari, S.; Vignashwaran, R.

    2013-03-01

    Agents trained by learning techniques provide a powerful approximation of active solutions for naive approaches. In this study, a B-tree search combined with reinforcement learning is used to moderate the data search for information retrieval, achieving accuracy with minimum search time. The impact of the variables and tactics applied in training is determined using reinforcement learning. Agents based on these techniques perform at a satisfactory baseline and act as finite agents based on the predetermined model against competitors from the course.

  3. Cluster analysis based on dimensional information with applications to feature selection and classification

    NASA Technical Reports Server (NTRS)

    Eigen, D. J.; Fromm, F. R.; Northouse, R. A.

    1974-01-01

    A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.

  4. The iLappSurgery taTME app: a modern adjunct to the teaching of surgical techniques.

    PubMed

    Atallah, S; Brady, R R W

    2016-09-01

    Application-based technology has emerged as a method of modern information communication, and it has been applied to surgical training and education. It allows surgeons to obtain portable and instant access to information that is otherwise difficult to deliver. The iLappSurgery Foundation has recently launched the transanal total mesorectal excision educational application (taTME app), which provides a useful adjunct, especially for surgeons interested in mastery of the taTME technique and its principles. The article provides a detailed review of the application, which has achieved a large user base since its debut in June 2016.

  5. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    PubMed Central

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measures the dissimilarity of the features along a path using Eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics (such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient) are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized Eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  6. Combined X-ray CT and mass spectrometry for biomedical imaging applications

    NASA Astrophysics Data System (ADS)

    Schioppa, E., Jr.; Ellis, S.; Bruinen, A. L.; Visser, J.; Heeren, R. M. A.; Uher, J.; Koffeman, E.

    2014-04-01

    Imaging technologies play a key role in many branches of science, especially in biology and medicine. They provide an invaluable insight into both the internal structure and the processes within a broad range of samples. There are many techniques that allow one to obtain images of an object. Different techniques are based on the analysis of a particular sample property by means of a dedicated imaging system, and as such, each imaging modality provides the researcher with different information. The use of multimodal imaging (imaging with several different techniques) can provide additional and complementary information that is not accessible when employing a single imaging technique alone. In this study, we present for the first time a multi-modal imaging technique in which X-ray computerized tomography (CT) is combined with mass spectrometry imaging (MSI). While X-ray CT provides 3-dimensional information regarding the internal structure of the sample based on X-ray absorption coefficients, MSI of thin sections acquired from the same sample allows the spatial distribution of many elements/molecules, each distinguished by its unique mass-to-charge ratio (m/z), to be determined within a single measurement and with a spatial resolution as low as 1 μm or even less. The aim of the work is to demonstrate how molecular information from MSI can be spatially correlated with 3D structural information acquired from X-ray CT. In these experiments, frozen samples are imaged in an X-ray CT setup using Medipix-based detectors equipped with a CO2-cooled sample holder. Single projections are pre-processed before tomographic reconstruction using a signal-to-thickness calibration. In the second step, the object is sliced into thin sections (circa 20 μm) that are then imaged using both matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) and secondary ion mass spectrometry (SIMS), where the spatial distribution of specific molecules within the sample is determined. The combination of two vastly different imaging approaches provides complementary information (i.e., anatomical and molecular distributions) that allows the correlation of distinct structural features with specific molecular distributions, leading to unique insights into disease development.

  7. Localized analysis of paint-coat drying using dynamic speckle interferometry

    NASA Astrophysics Data System (ADS)

    Sierra-Sosa, Daniel; Tebaldi, Myrian; Grumel, Eduardo; Rabal, Hector; Elmaghraby, Adel

    2018-07-01

    Paint-coating is part of several industrial processes, including the automotive industry, architectural coatings, and machinery and appliances. These paint coatings must comply with high quality standards; for this reason, evaluation techniques for paint coatings are in constant development. One important factor in the paint-coating process is drying, as it influences the quality of the final result. In this work we present an assessment technique based on optical dynamic speckle interferometry; this technique allows for the temporal evaluation of activity during the paint-coating drying process, providing localized information on drying. This localized information is relevant for assessing drying homogeneity, optimal drying, and quality control. The technique relies on the definition of a new temporal history of the speckle patterns to obtain the local activity; this information is then clustered to provide a convenient indication of the different drying process stages. The experimental results presented were validated using gravimetric drying curves.
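
    A rough sketch of a localized activity computation of the kind described above; the paper defines its own temporal history of the speckle patterns, so the mean absolute frame-to-frame difference used here is only a stand-in activity measure, and the toy stack is invented:

        import numpy as np

        def activity_map(stack):
            # stack: (n_frames, H, W) speckle intensity images of the coat.
            # Mean absolute frame-to-frame difference per pixel: high values
            # indicate fast-changing speckle, i.e. still-wet regions.
            diffs = np.abs(np.diff(stack.astype(np.float64), axis=0))
            return diffs.mean(axis=0)

        # Toy stack: activity decays as the coat dries, slower on the left.
        rng = np.random.default_rng(0)
        n, H, W = 200, 32, 32
        drying = np.linspace(1.0, 0.1, n)[:, None, None]    # global slow-down
        gradient = np.linspace(1.5, 0.5, W)[None, None, :]  # left dries slower
        stack = 100 + rng.normal(size=(n, H, W)) * drying * gradient
        act = activity_map(stack)
        # Clustering the activity values (e.g. k-means) would then group
        # pixels into drying-stage regions, as the abstract describes.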

  8. Pruning a minimum spanning tree

    NASA Astrophysics Data System (ADS)

    Sandoval, Leonidas

    2012-04-01

    This work employs various techniques to filter random noise from the information provided by minimum spanning trees obtained from the correlation matrices of international stock market indices prior to and during times of crisis. The first technique establishes a threshold above which connections are considered affected by noise, based on the study of random networks with the same probability density distribution as the original data. The second technique judges the strength of a connection by its survival rate, which is the amount of time a connection between two stock market indices endures. The idea is that true connections survive for longer periods of time, whereas random connections do not. That information is then combined with the information obtained from the first technique to create a smaller network in which most of the connections are either strong or enduring in time.
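
    A minimal sketch of the starting point of this analysis: the minimum spanning tree built from a correlation matrix using the usual distance d_ij = sqrt(2(1 - rho_ij)), with edges above a noise threshold then discarded. The matrix and threshold below are invented, and the survival-rate filter would be applied across time windows on top of this:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def mst_edges(corr):
            # Map correlations to distances and extract the spanning tree.
            dist = np.sqrt(2.0 * (1.0 - corr))
            np.fill_diagonal(dist, 0.0)
            tree = minimum_spanning_tree(dist).tocoo()
            return list(zip(tree.row, tree.col, tree.data))

        # Toy correlation matrix for four market indices.
        corr = np.array([[1.0, 0.8, 0.3, 0.2],
                         [0.8, 1.0, 0.4, 0.1],
                         [0.3, 0.4, 1.0, 0.6],
                         [0.2, 0.1, 0.6, 1.0]])
        edges = mst_edges(corr)

        # Keep only connections below a noise threshold on distance
        # (equivalently, above a correlation threshold).
        threshold = 1.2
        strong = [(i, j, d) for i, j, d in edges if d < threshold]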

  9. Context-Based Intent Understanding for Autonomous Systems in Naval and Collaborative Robot Applications

    DTIC Science & Technology

    2013-10-29

    ... based on contextual information, 3) develop vision-based techniques for learning of contextual information, and detection and identification of ... that takes into account many possible contexts. The probability distributions of these contexts will be learned from existing databases on common sense.

  10. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    NASA Astrophysics Data System (ADS)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. Conventional techniques for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive fieldwork, high cost, and sensitivity to weather conditions and topographic cover, which can be overcome by means of integrated airborne-based LiDAR and very-high-resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne-based LiDAR and multispectral digital image datasets over Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow-index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud. The performance of the developed algorithms shows promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also demonstrates that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
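
    The spectral step can be sketched in a few lines; the NDVI formula is standard, while the brightness-based shadow index and both thresholds are illustrative stand-ins for the indices used in the study.

```python
import numpy as np

def vegetation_mask(red, nir, brightness, ndvi_thresh=0.3, shadow_thresh=0.15):
    """Shadow-free vegetation mask from multispectral bands.

    `red` and `nir` are reflectance bands; `brightness` is an overall
    intensity image used as a crude shadow index. Pixels are kept only
    when they look vegetated (high NDVI) and are not in shadow.
    """
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    shadow = brightness < shadow_thresh
    return (ndvi > ndvi_thresh) & ~shadow

# Toy usage on random reflectance bands of a 256 x 256 scene.
rng = np.random.default_rng(0)
red, nir, bright = (rng.random((256, 256)) for _ in range(3))
mask = vegetation_mask(red, nir, bright)
```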

  11. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
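
    The following sketch illustrates the general idea of context-sensitive compression, assuming a diagnostic-content mask is already available from a prior segmentation step; the tile size and the two JPEG quality levels are illustrative, and the real system combines visually lossless and lossy coders rather than two JPEG qualities.

```python
import io
import numpy as np
from PIL import Image

def content_based_compress(img, roi_mask, tile=64, q_roi=95, q_bg=30):
    """Compress diagnostically important tiles gently, the rest harshly.

    `img` is an RGB PIL image; `roi_mask` is a boolean (H, W) array
    marking diagnostic content. Each tile is JPEG-encoded with a quality
    that depends on whether it intersects the ROI, then reassembled.
    """
    w, h = img.size
    out = Image.new("RGB", (w, h))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = (x, y, min(x + tile, w), min(y + tile, h))
            patch = img.crop(box)
            # High quality only where diagnostic content is present.
            q = q_roi if roi_mask[y:box[3], x:box[2]].any() else q_bg
            buf = io.BytesIO()
            patch.save(buf, format="JPEG", quality=q)
            out.paste(Image.open(buf), box[:2])
    return out

# Toy usage: random slide image with a square "diagnostic" region.
arr = (np.random.default_rng(0).random((256, 256, 3)) * 255).astype("uint8")
mask = np.zeros((256, 256), dtype=bool)
mask[64:128, 64:128] = True
compressed = content_based_compress(Image.fromarray(arr), mask)
```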

  12. Computational Methods to Predict Protein Interaction Partners

    NASA Astrophysics Data System (ADS)

    Valencia, Alfonso; Pazos, Florencio

    In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related to protein interactions and are based on "first principles", in the sense that they do not involve training with examples. There are also other computational techniques that use additional sources of information (e.g. structural information or even experimental data) or are based on training with examples.

  13. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, constitute a secret key that is shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is as high as 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
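
    The GI stage can be sketched with random intensity patterns standing in for the phase-screen key (a simplification), using the conventional correlation reconstruction G = <B·I> - <B><I>; the compressive-sensing recovery that reduces the number of measurements is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary object standing in for the QR-coded image.
obj = (rng.random((32, 32)) > 0.5).astype(float)

# N random illumination patterns play the role of the shared key.
N = 4000
patterns = rng.random((N, 32, 32))

# The bucket detector records one number per pattern: total transmitted light.
bucket = (patterns * obj).sum(axis=(1, 2))

# Conventional GI reconstruction: correlate bucket values with patterns.
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
```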

  14. Novel Variants of a Histogram Shift-Based Reversible Watermarking Technique for Medical Images to Improve Hiding Capacity

    PubMed Central

    Tuckley, Kushal

    2017-01-01

    In telemedicine systems, critical medical data is shared on a public communication channel. This increases the risk of unauthorised access to patients' information and underlines the importance of secrecy and authentication for medical data. This paper presents two innovative variations of classical histogram shift methods to increase the hiding capacity. The first technique divides the image into nonoverlapping blocks and embeds the watermark individually in each block using the histogram method. The second method separates the region of interest and embeds the watermark only in the region of noninterest, preserving the medical information intact; this approach is suited to critical medical cases. The high PSNR (above 45 dB) obtained for both techniques indicates the imperceptibility of the approaches. Experimental results illustrate the superiority of the proposed approaches when compared with other methods based on histogram shifting techniques. These techniques improve embedding capacity by 5–15% depending on the image type, without affecting the quality of the watermarked image. Both techniques also enable lossless reconstruction of the watermark and the host medical image. A higher embedding capacity makes the proposed approaches attractive for medical image watermarking applications without compromising image quality. PMID:29104744
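
    Both variants build on the classical peak/zero-bin histogram shift, which the sketch below implements for a whole greyscale image; applying it per block, or only to the region of noninterest, yields the two proposed variants. The embedding order and capacity handling here are illustrative.

```python
import numpy as np

def hs_embed(img, bits):
    """Embed a bit sequence into a greyscale image by histogram shifting.

    Finds the histogram peak bin P and an empty (zero) bin Z > P, shifts
    grey levels in (P, Z) up by one to free bin P+1, then encodes a 1 by
    moving a peak pixel to P+1 and a 0 by leaving it at P. The process
    is reversible given (P, Z) and the bit count.
    """
    img = img.astype(np.int32).copy()
    hist = np.bincount(img.ravel(), minlength=256)
    P = int(hist.argmax())
    zeros = np.where(hist[P + 1:] == 0)[0]
    if len(zeros) == 0:
        raise ValueError("no zero bin available to the right of the peak")
    Z = P + 1 + int(zeros[0])

    img[(img > P) & (img < Z)] += 1            # open up bin P+1
    peak_pixels = np.argwhere(img == P)        # capacity = number of peak pixels
    for bit, (r, c) in zip(bits, peak_pixels):
        if bit:
            img[r, c] = P + 1
    return img.astype(np.uint8), (P, Z)

# Toy usage on a random 8-bit image.
rng = np.random.default_rng(0)
host = rng.integers(0, 200, size=(128, 128), dtype=np.uint8)
marked, key = hs_embed(host, [1, 0, 1, 1, 0, 0, 1])
```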

  15. Potential for Imaging Engineered Tissues with X-Ray Phase Contrast

    PubMed Central

    Appel, Alyssa; Anastasio, Mark A.

    2011-01-01

    As the field of tissue engineering advances, it is crucial to develop imaging methods capable of providing detailed three-dimensional information on tissue structure. X-ray imaging techniques based on phase-contrast (PC) have great potential for a number of biomedical applications due to their ability to provide information about soft tissue structure without exogenous contrast agents. X-ray PC techniques retain the excellent spatial resolution, tissue penetration, and calcified tissue contrast of conventional X-ray techniques while providing drastically improved imaging of soft tissue and biomaterials. This suggests that X-ray PC techniques are very promising for evaluation of engineered tissues. In this review, four different implementations of X-ray PC imaging are described and applications to tissues of relevance to tissue engineering reviewed. In addition, recent applications of X-ray PC to the evaluation of biomaterial scaffolds and engineered tissues are presented and areas for further development and application of these techniques are discussed. Imaging techniques based on X-ray PC have significant potential for improving our ability to image and characterize engineered tissues, and their continued development and optimization could have significant impact on the field of tissue engineering. PMID:21682604

  16. Multiple Query Evaluation Based on an Enhanced Genetic Algorithm.

    ERIC Educational Resources Information Center

    Tamine, Lynda; Chrisment, Claude; Boughanem, Mohand

    2003-01-01

    Explains the use of genetic algorithms to combine results from multiple query evaluations to improve relevance in information retrieval. Discusses niching techniques, relevance feedback techniques, and evolution heuristics, and compares retrieval results obtained by both genetic multiple query evaluation and classical single query evaluation…

  17. Pattern recognition of satellite cloud imagery for improved weather prediction

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.

    1986-01-01

    The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.

  18. Tuberculosis diagnosis support analysis for precarious health information systems.

    PubMed

    Orjuela-Cañón, Alvaro David; Camargo Mendoza, Jorge Eliécer; Awad García, Carlos Enrique; Vergara Vela, Erika Paola

    2018-04-01

    Pulmonary tuberculosis is a world emergency for the World Health Organization. Techniques and new diagnostic tools are important in the battle against this bacterial infection. There have been many advances in all those fields, but in developing countries such as Colombia, where resources and infrastructure are limited, new, fast and less expensive strategies are increasingly needed. Artificial neural networks are computational intelligence techniques that can be used for this kind of problem and offer additional support in the tuberculosis diagnosis process, providing a tool for medical staff to make decisions about the management of subjects under suspicion of tuberculosis. A database extracted from 105 subjects, with the precarious information typically available for people under suspicion of pulmonary tuberculosis, was used in this study. Data on sex, age, diabetes, homelessness, AIDS status, and a variable encoding the clinical judgement of the medical personnel were used. Models based on artificial neural networks were applied, exploring supervised learning to detect the disease; unsupervised learning was used to create three risk groups based on the available information. The results obtained are comparable with traditional techniques for the detection of tuberculosis, with advantages such as speed and low implementation cost: a sensitivity of 97% and a specificity of 71% were achieved. The techniques used yielded valuable information that can be useful to physicians treating the disease in decision-making processes, especially under limited infrastructure and data. Copyright © 2018 Elsevier B.V. All rights reserved.
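
    A minimal sketch of the supervised branch is shown below, using a small multilayer perceptron on the six tabular inputs; the data are synthetic stand-ins, and the network size and preprocessing are assumptions rather than the architecture reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the 105-subject dataset: six tabular inputs
# (sex, age, diabetes, homelessness, AIDS status, clinical-judgement score).
rng = np.random.default_rng(0)
X = rng.random((105, 6))
y = rng.integers(0, 2, size=105)   # 1 = tuberculosis confirmed

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)   # supervised detection accuracy
print(scores.mean())
```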

  19. An automatic method for retrieving and indexing catalogues of biomedical courses.

    PubMed

    Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose

    2008-11-06

    Although extensive information about Biomedical Informatics education and courses is available on various websites, it is usually not exhaustive and is difficult to keep up to date. We propose a new methodology based on information retrieval techniques for automatically extracting, indexing and retrieving information about educational offerings. A web application has been developed to make such information available in an inventory of courses and educational offerings.

  20. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e., poor contrast for features of similar density. Speckle-based imaging yields a wealth of information with a simple setup that is tolerant to polychromatic and divergent beams, and with simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help to better understand and optimise the technique itself.

  1. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
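
    The sketch below shows the shape of such a GA-based inversion with a placeholder forward model (a real implementation would compute angular scattering from Mie theory); the population size, operators and parameter bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, angles):
    """Placeholder scattering model mapping distribution parameters
    (centre, spread) to angular intensities; Mie theory would go here."""
    mu, sigma = params
    return np.exp(-((angles - mu) ** 2) / (2 * sigma ** 2))

angles = np.linspace(0.1, 3.0, 60)
measured = forward((1.2, 0.4), angles) + rng.normal(0, 0.01, angles.size)

def fitness(pop):
    """Sum-of-squares misfit between modelled and measured intensities."""
    return np.array([np.sum((forward(p, angles) - measured) ** 2) for p in pop])

lo, hi = np.array([0.1, 0.05]), np.array([3.0, 1.0])
pop = rng.uniform(lo, hi, size=(50, 2))
for _ in range(100):
    f = fitness(pop)
    # Binary tournament selection: keep the fitter of two random rivals.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover between consecutive parents, then Gaussian mutation.
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0, 0.02, children.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmin(fitness(pop))]   # recovered distribution parameters
```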

  2. Discovery of Information Diffusion Process in Social Networks

    NASA Astrophysics Data System (ADS)

    Kim, Kwanho; Jung, Jae-Yoon; Park, Jonghun

    Information diffusion analysis in social networks is significant because it enables us to understand dynamic social interactions among users in depth. In this paper, we introduce approaches to discovering the information diffusion process in social networks based on process mining. Process mining techniques are applied from three perspectives: social network analysis, process discovery and community recognition. We then present experimental results using real-life social network data. The proposed techniques are expected to be employed as new analytical tools for online social networks such as blogs and wikis, serving company marketers, politicians, news reporters and online writers.

  3. The application analysis of the multi-angle polarization technique for ocean color remote sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Yongchao; Zhu, Jun; Yin, Huan; Zhang, Keli

    2017-02-01

    The multi-angle polarization technique, which uses the intensity of polarized radiation as the observed quantity, is a new remote sensing means for earth observation. With this method, not only can multi-angle light intensity data be provided, but multi-angle information on polarized radiation can also be obtained. The technique may therefore solve problems that cannot be solved with traditional remote sensing methods, and it has become one of the hot topics in international quantitative remote sensing research. In this paper, we first introduce the principles of the multi-angle polarization technique; the state of basic research and engineering applications is then summarized and analysed for 1) the removal of sun glitter based on polarization, 2) ocean color remote sensing based on polarization, 3) oil spill detection using the polarization technique, and 4) ocean aerosol monitoring based on polarization. Finally, based on the previous work, we briefly present the problems and prospects of the multi-angle polarization technique in China's ocean color remote sensing.

  4. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data, with implications for several domains including power systems.

  5. Evaluating the Informative Quality of Documents in SGML Format from Judgements by Means of Fuzzy Linguistic Techniques Based on Computing with Words.

    ERIC Educational Resources Information Center

    Herrera-Viedma, Enrique; Peis, Eduardo

    2003-01-01

    Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)

  6. Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program

    DTIC Science & Technology

    1991-10-01

    The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a ... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989. ... Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp. ...

  7. Cloud-based adaptive exon prediction for DNA analysis.

    PubMed

    Putluri, Srinivasareddy; Zia Ur Rahman, Md; Fathima, Shaik Yasmeen

    2018-02-01

    Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by the use of a cloud service. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques. Adaptive signal processing techniques have been found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using variable normalised least mean square and its maximum normalised variants to reduce computational complexity. Finally, performance evaluation of the various AEPs is carried out based on measures such as sensitivity, specificity and precision, using standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database.
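
    One simple formulation of an adaptive exon predictor is sketched below: an NLMS filter predicts each base from the preceding three, so the prediction error drops where the period-3 structure of exons is strong. The numeric DNA mapping and the plain NLMS update are assumptions; the paper's variable and maximum normalised variants modify the step-size rule.

```python
import numpy as np

def nlms_period3_error(dna, mu=0.5, eps=1e-6, order=3):
    """Track three-base periodicity of a DNA string with an NLMS predictor.

    The sequence is mapped to a numeric signal and the filter predicts
    x[n] from the previous `order` samples; in exon-like regions the
    strong period-3 structure makes the squared prediction error drop.
    """
    code = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}
    x = np.array([code[b] for b in dna])
    w = np.zeros(order)
    err = np.empty(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n]
        e = x[n] - w @ u
        w += mu * e * u / (eps + u @ u)   # NLMS weight update
        err[n] = e * e
    err[:order] = err[order]
    return err   # low values suggest exon-like periodic regions

# Toy usage on a short synthetic sequence.
profile = nlms_period3_error("ATGGCGGCGGCGGCGTTACCAGTA" * 10)
```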

  8. Medical undergraduates’ use of behaviour change talk: the example of facilitating weight management

    PubMed Central

    2013-01-01

    Background Obesity, an increasing problem worldwide, is a leading cause of morbidity and mortality. Management principally requires lifestyle (i.e. behavioural) changes. An evidence base of behaviour change techniques for weight loss exists; however, in routine practice doctors are often unsure about effective treatments and commonly use theoretically unfounded communication strategies (e.g. information-giving). It is not known whether communication skills teaching during undergraduate training adequately prepares future doctors to engage in effective behaviour change talk with patients. The aim of the study was to examine which behaviour change techniques medical undergraduates use to facilitate lifestyle adjustments in obese patients. Methods Forty-eight medical trainees in their clinical years of a UK medical school conducted two simulated consultations each. Both consultations involved an obese patient scenario where weight loss was indicated. The use of simulated patients (SPs) ensured standardisation of key variables (e.g. barriers to behaviour change), and the order of scenario presentation was counterbalanced. Following each consultation, students assessed the techniques they perceived themselves to have used. SPs rated the extent to which they intended to make behavioural changes and why. Anonymised transcripts of the audiotaped consultations were coded by independent assessors, blind to student and SP ratings, using a validated behaviour change taxonomy. Results Students reported using a wide range of evidence-based techniques. In contrast, codings of observed communication behaviours were limited. SPs' behavioural intentions varied, and a range of helpful elements of the students' communication was revealed. Conclusions Current skills-based communication programmes do not adequately prepare future doctors for the growing task of facilitating weight management. Students are able to generalise some communication skills to these encounters, but are overconfident and have limited ability to use evidence-based, theoretically informed techniques. They recognise this as a learning need. Educators will need to tackle the challenges of integrating theoretically informed and evidence-based behaviour change talk within medical training. PMID:23347344

  9. 75 FR 71434 - Notice of Public Information Collection(s) Being Reviewed by the Federal Communications...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-23

    ... reliable U.S. space-based services--including communications and remote sensing satellite services for the... information on the respondents, including the use of automated collection techniques or other forms of... information. Needs and Uses: This collection will be submitted as an extension after the 60-day comment period...

  10. Perceiving Patterns of Reference Service: A Survey

    ERIC Educational Resources Information Center

    Blakely, Florence

    1971-01-01

    Reference librarians must, if they hope to survive, retool in preparation for becoming the interface between the patron and computer-based information systems. This involves sharpening the interview technique and understanding where to plug into the information flow process. (4 references) (Author)

  11. Synchrophasor-Assisted Prediction of Stability/Instability of a Power System

    NASA Astrophysics Data System (ADS)

    Saha Roy, Biman Kumar; Sinha, Avinash Kumar; Pradhan, Ashok Kumar

    2013-05-01

    This paper presents a technique for real-time prediction of the stability/instability of a power system based on synchrophasor measurements obtained from phasor measurement units (PMUs) at generator buses. For stability assessment, the technique makes use of system severity indices developed from the bus voltage magnitudes obtained from PMUs and from generator electrical power, which is computed using system information together with the voltage and current phasors obtained from the PMUs. System instability is predicted when the indices exceed a threshold value. A case study is carried out on the New England 10-generator, 39-bus system to validate the performance of the technique.

  12. Dual-energy contrast-enhanced mammography.

    PubMed

    Travieso Aja, M M; Rodríguez Rodríguez, M; Alayón Hernández, S; Vega Benítez, V; Luzardo, O P

    2014-01-01

    The degree of vascularization in breast lesions is related to their malignancy. For this reason, functional diagnostic imaging techniques have become important in recent years. Dual-energy contrast-enhanced mammography is a new, apparently promising technique in breast cancer that provides information about the degree of vascularization of the lesion in addition to the morphological information provided by conventional mammography. This article describes the state of the art for dual-energy contrast-enhanced mammography. Based on 15 months' clinical experience, we illustrate this review with clinical cases that allow us to discuss the advantages and limitations of this technique. Copyright © 2014 SERAM. Published by Elsevier Espana. All rights reserved.

  13. A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing

    NASA Astrophysics Data System (ADS)

    Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz

    2018-06-01

    Today, in the world of information communication, voice information has particular importance. One way to protect voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated based on various measures including signal-to-noise ratio, peak signal-to-noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method for the secure and fast encryption of voice files.

  14. Creating stimuli for the study of biological-motion perception.

    PubMed

    Dekeyser, Mathias; Verfaillie, Karl; Vanrie, Jan

    2002-08-01

    In the perception of biological motion, the stimulus information is confined to a small number of lights attached to the major joints of a moving person. Despite this drastic degradation of the stimulus information, the human visual apparatus organizes the swarm of moving dots into a vivid percept of a moving biological creature. Several techniques have been proposed to create point-light stimuli: placing dots at strategic locations on photographs or films, video recording a person with markers attached to the body, computer animation based on artificial synthesis, and computer animation based on motion-capture data. A description is given of the technique we are currently using in our laboratory to produce animated point-light figures. The technique is based on a combination of motion capture and three-dimensional animation software (Character Studio, Autodesk, Inc., 1998). Some of the advantages of our approach are that the same actions can be shown from any viewpoint, that point-light versions, as well as versions with a full-fleshed character, can be created of the same actions, and that point lights can indicate the center of a joint (thereby eliminating several disadvantages associated with other techniques).

  15. Customer Churn Prediction for Broadband Internet Services

    NASA Astrophysics Data System (ADS)

    Huang, B. Q.; Kechadi, M.-T.; Buckley, B.

    Although churn prediction has been an area of research in the voice branch of telecommunications services, focused studies on the rapidly growing area of broadband Internet services are limited. This paper therefore presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, bill and payment information, and account information. Four prediction techniques (Logistic Regression, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn prediction using the new features. Finally, an evaluation of the new features and a comparative analysis of the predictors are carried out for broadband customer churn prediction. The experimental results show that the new features with these four modelling techniques are effective for customer churn prediction in the broadband service field.
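
    A compact way to reproduce the comparative setup with scikit-learn is sketched below on stand-in features; the hyperparameters are illustrative and the feature matrix is synthetic, not the Henley-segment data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in feature matrix: usage, spend, bill/payment and account fields.
rng = np.random.default_rng(0)
X = rng.random((500, 12))
y = rng.integers(0, 2, size=500)   # 1 = churned

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    "svm": SVC(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```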

  16. Information support for decision making on dispatching control of water distribution in irrigation

    NASA Astrophysics Data System (ADS)

    Yurchenko, I. F.

    2018-05-01

    Research has been carried out on developing a technique for supporting decision making in the on-line control and operational management of water allocation for interfarm irrigation projects, based on the analytical patterns of dispatcher control. This technique increases labour productivity and management quality through an improved level of automation and through optimization of decision making that takes into account diagnostics of the issues, classification of solutions, and the information required by the decision makers.

  17. Nationwide Hybrid Change Detection of Buildings

    NASA Astrophysics Data System (ADS)

    Hron, V.; Halounova, L.

    2016-06-01

    The Fundamental Base of Geographic Data of the Czech Republic (hereinafter FBGD) is a national 2D geodatabase at a 1:10,000 scale with more than 100 geographic objects. This paper describes the design of the permanent updating mechanism of buildings in FBGD. The proposed procedure belongs to the category of hybrid change detection (HCD) techniques which combine pixel-based and object-based evaluation. The main sources of information for HCD are cadastral information and bi-temporal vertical digital aerial photographs. These photographs have great information potential because they contain multispectral, position and also elevation information. Elevation information represents a digital surface model (DSM) which can be obtained using the image matching technique. Pixel-based evaluation of bi-temporal DSMs enables fast localization of places with potential building changes. These coarse results are subsequently classified through the object-based image analysis (OBIA) using spectral, textural and contextual features and GIS tools. The advantage of the two-stage evaluation is the pre-selection of locations where image segmentation (a computationally demanding part of OBIA) is performed. It is not necessary to apply image segmentation to the entire scene, but only to the surroundings of detected changes, which contributes to significantly faster processing and lower hardware requirements. The created technology is based on open-source software solutions that allow easy portability on multiple computers and parallelization of processing. This leads to significant savings of financial resources which can be expended on the further development of FBGD.
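
    The pixel-based stage can be sketched as a DSM difference followed by a size filter on connected regions, as below; the height and area thresholds are illustrative, and the subsequent OBIA classification is not shown.

```python
import numpy as np
from scipy import ndimage

def candidate_building_changes(dsm_old, dsm_new, dz=2.5, min_pixels=50):
    """Pixel-based stage of hybrid change detection.

    Differences two digital surface models and keeps connected regions
    whose height changed by more than `dz` metres over at least
    `min_pixels` pixels. Only the surviving blobs need the (expensive)
    object-based OBIA stage, which is the point of the two-step design.
    """
    diff = dsm_new - dsm_old
    mask = np.abs(diff) > dz
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_ids = np.nonzero(sizes >= min_pixels)[0] + 1
    return np.isin(labels, keep_ids), diff

# Toy usage: two random DSMs with an artificial new "building".
rng = np.random.default_rng(0)
old = rng.normal(0, 0.2, (512, 512))
new = old.copy()
new[100:130, 200:240] += 8.0   # simulated construction
changed, height_diff = candidate_building_changes(old, new)
```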

  18. The Identification of Reasons, Solutions, and Techniques Informing a Theory-Based Intervention Targeting Recreational Sports Participation.

    PubMed

    St Quinton, Tom; Brunton, Julie A

    2018-06-01

    This study is the 3rd piece of formative research utilizing the theory of planned behavior to inform the development of a behavior change intervention. Focus groups were used to identify reasons for and solutions to previously identified key beliefs in addition to potentially effective behavior change techniques. A purposive sample of 22 first-year undergraduate students (n = 8 men; M age  = 19.8 years, SD = 1.3 years) attending a university in the North of England was used. Focus groups were audio-recorded; recordings were transcribed verbatim, analyzed thematically, and coded for recurrent themes. The data revealed 14 reasons regarding enjoyment, 11 reasons for friends' approval, 11 reasons for friends' own participation, 14 reasons for the approval of family members, and 10 solutions to time constraints. Twelve distinct techniques were suggested to attend to these reasons and solutions. This qualitative research will be used to inform the development of a theory-based intervention to increase students' participation in university recreational sports.

  19. Real-time estimation of wildfire perimeters from curated crowdsourcing

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin

    2016-04-01

    Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available “curated” crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires.
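
    The abstract does not spell out the exact pipeline, but a plausible sketch with established data mining tools is to cluster call locations and wrap each cluster in a hull; the DBSCAN parameters below are the kind of intuitive knobs the paper mentions, with illustrative values.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

def perimeter_estimates(points, eps=0.02, min_samples=5):
    """Crude wildfire perimeters from emergency-call coordinates.

    `points` is an (n, 2) array of lon/lat positions of fire-related
    calls. DBSCAN separates distinct fire fronts and rejects isolated
    (likely noisy) reports; the convex hull of each cluster serves as
    a rough perimeter estimate.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    hulls = []
    for k in set(labels) - {-1}:          # label -1 marks noise points
        cluster = points[labels == k]
        if len(cluster) >= 3:
            hulls.append(cluster[ConvexHull(cluster).vertices])
    return hulls                           # list of perimeter polygons

# Toy usage: two synthetic call clusters plus scattered noise.
rng = np.random.default_rng(0)
calls = np.vstack([
    rng.normal([144.9, -37.6], 0.01, (40, 2)),
    rng.normal([145.2, -37.4], 0.01, (30, 2)),
    rng.uniform([144.5, -38.0], [145.5, -37.0], (10, 2)),
])
perimeters = perimeter_estimates(calls)
```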

  20. Real-time estimation of wildfire perimeters from curated crowdsourcing

    PubMed Central

    Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin

    2016-01-01

    Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available “curated” crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires. PMID:27063569

  1. Real-time estimation of wildfire perimeters from curated crowdsourcing.

    PubMed

    Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin

    2016-04-11

    Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available "curated" crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires.

  2. Multimode nonlinear optical imaging of the dermis in ex vivo human skin based on the combination of multichannel mode and Lambda mode.

    PubMed

    Zhuo, Shuangmu; Chen, Jianxin; Luo, Tianshu; Zou, Dingsong

    2006-08-21

    A multimode nonlinear optical imaging technique based on the combination of multichannel mode and Lambda mode is developed to investigate human dermis. Our findings show that this technique not only improves the image contrast of the structural proteins of the extracellular matrix (ECM) but also provides an image-guided spectral analysis method to identify both cellular and ECM intrinsic components, including collagen, elastin, NAD(P)H and flavin. Through the combined use of multichannel mode and Lambda mode in tandem, the obtained in-depth two-photon excited fluorescence (TPEF) and second-harmonic generation (SHG) imaging, together with the depth-dependent decay of the TPEF/SHG signals, can offer a sensitive tool for obtaining quantitative tissue structural and biochemical information. These results suggest that the technique has the potential to provide more accurate information for determining tissue physiological and pathological states.

  3. Magnetic resonance imaging based functional imaging in paediatric oncology.

    PubMed

    Manias, Karen A; Gill, Simrandip K; MacPherson, Lesley; Foster, Katharine; Oates, Adam; Peet, Andrew C

    2017-02-01

    Imaging is central to management of solid tumours in children. Conventional magnetic resonance imaging (MRI) is the standard imaging modality for tumours of the central nervous system (CNS) and limbs and is increasingly used in the abdomen. It provides excellent structural detail, but imparts limited information about tumour type, aggressiveness, metastatic potential or early treatment response. MRI based functional imaging techniques, such as magnetic resonance spectroscopy, diffusion and perfusion weighted imaging, probe tissue properties to provide clinically important information about metabolites, structure and blood flow. This review describes the role of and evidence behind these functional imaging techniques in paediatric oncology and implications for integrating them into routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Using Word Clouds to Develop Proactive Learners

    ERIC Educational Resources Information Center

    Miley, Frances; Read, Andrew

    2011-01-01

    This article examines student responses to a technique for summarizing electronically available information based on word frequency. Students used this technique to create word clouds, using those word clouds to enhance personal and small group study. This is a qualitative study. Small focus groups were used to obtain student feedback. Feedback…

  5. Ultrasonic non invasive techniques for microbiological instrumentation

    NASA Astrophysics Data System (ADS)

    Elvira, L.; Sierra, C.; Galán, B.; Resa, P.

    2010-01-01

    Non-invasive techniques based on ultrasound have advantageous features for studying, characterizing and monitoring microbiological and enzymatic reactions. These processes may change the sound speed, viscosity or particle size distribution of the medium where they take place, which makes their analysis possible using ultrasonic techniques. In this work, two different ultrasound-based systems for the analysis of microbiological liquid media are presented. First, an industrial application based on an ultrasonic monitoring technique for detecting microbiological growth in milk is shown. Such a system may improve quality control strategies in food production factories, decreasing the time required to detect possible contamination in packed products. Second, a study of the growth of Escherichia coli DH5α under different conditions is presented. It is shown that the use of non-invasive ultrasonic characterization techniques in combination with other conventional measurements, such as optical density, provides complementary information about the metabolism of these bacteria.

  6. Human age estimation combining third molar and skeletal development.

    PubMed

    Thevissen, P W; Kaur, J; Willems, G

    2012-03-01

    The wide prediction intervals obtained with age estimation methods based on third molar development could be reduced by combining these dental observations with age-related skeletal information. Therefore, on cephalometric radiographs, the most accurate age-estimating skeletal variable and related registration method were sought and added to a regression model, with age as response and third molar stages as explanatory variable. In a pilot study on a dataset of 496 (283 M; 213 F) cephalometric radiographs, the techniques of Baccetti et al. (2005) (BA), Seedat et al. (2005) (SE), Caldas et al. (2007) and Rai et al. (2008) (RA) were verified. In the main study, data were collected from 460 (208 F, 224 M) individuals in an age range between 3 and 26 years for whom an orthopantomogram and a cephalogram had been taken on the same day. On the orthopantomograms, left third molar development was registered using the scoring system described by Gleiser and Hunt (1955) and modified by Köhler (1994) (GH). On the cephalograms, cervical vertebrae development was registered according to the BA and SE techniques. A regression model, with age as response and the GH scores as explanatory variable, was fitted to the data. Next, information from BA, SE and BA + SE was, respectively, added to this model. For all obtained models, the determination coefficients and the root mean squared errors (RMSE) were calculated. Inclusion of information from cephalograms based on the BA, as well as the SE, technique improved the amount of explained variance in age acquired from panoramic radiographs using the GH technique by 48%. Inclusion of cephalometric BA + SE information marginally improved the previous result (+1%). The RMSE decreased by 1.93, 1.85 and 2.03 years when adding, respectively, BA, SE and BA + SE information to the GH model. Clinically, the SE technique allows the fastest and easiest registration of the degree of development of the cervical vertebrae. Therefore, the preferred technique for classifying cervical vertebrae development in addition to third molar development is the SE technique.

  7. Wind Gust Measurement Techniques—From Traditional Anemometry to New Possibilities

    PubMed Central

    2018-01-01

    Information on wind gusts is needed for assessment of wind-induced damage and risks to safety. The measurement of wind gust speed requires a high temporal resolution of the anemometer system, because the gust is defined as a short-duration (seconds) maximum of the fluctuating wind speed. Until the digitalization of wind measurements in the 1990s, the wind gust measurements suffered from limited recording and data processing resources. Therefore, the majority of continuous wind gust records date back at most only by 30 years. Although the response characteristics of anemometer systems are good enough today, the traditional measurement techniques at weather stations based on cup and sonic anemometers are limited to heights and regions where the supporting structures can reach. Therefore, existing measurements are mainly concentrated over densely-populated land areas, whereas from remote locations, such as the marine Arctic, wind gust information is available only from sparse coastal locations. Recent developments of wind gust measurement techniques based on turbulence measurements from research aircraft and from Doppler lidar can potentially provide new information from heights and locations unreachable by traditional measurement techniques. Moreover, fast-developing measurement methods based on Unmanned Aircraft Systems (UASs) may add to better coverage of wind gust measurements in the future. In this paper, we provide an overview of the history and the current status of anemometry from the perspective of wind gusts. Furthermore, a discussion on the potential future directions of wind gust measurement techniques is provided. PMID:29690647

  8. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  9. Layout SLAM with Model-Based Loop Closure for 3D Indoor Corridor Reconstruction

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Jung, J.; Shahbazi, M.; Kang, J.

    2018-05-01

    In this paper, we extend a recently proposed visual Simultaneous Localization and Mapping (SLAM) technique, known as Layout SLAM, to make it robust against error accumulation, abrupt changes of camera orientation and mis-association of newly visited parts of the scene with previously visited landmarks. To do so, we present a novel technique for loop closing based on layout model matching; i.e., both model information (topology and geometry of reconstructed models) and image information (photometric features) are used to address loop-closure detection. The advantages of using the layout-related information in the proposed loop-closing technique are twofold. First, it imposes a metric constraint on global map consistency and thus adjusts the mapping scale drift. Second, it can reduce matching ambiguity in the context of indoor corridors, where the scene is homogeneously textured and extracting a sufficient number of distinguishable point features is a challenging task. To test the impact of the proposed technique on the performance of Layout SLAM, we performed experiments on wide-angle videos captured by a handheld camera. This dataset was collected in the indoor corridors of a building at York University. The obtained results demonstrate that the proposed method successfully detects instances of loops while producing very limited trajectory errors.

  10. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    PubMed

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants: a single camera, freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used increasingly often because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features; therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  11. Education and Training for On-Line Use of Data Bases

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1977-01-01

    This paper discusses vehicles for education and training, tools and techniques for promotion, and details the information requirements of the processors, service managers, searchers, and end users of on-line data bases. (Author/KP)

  12. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
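
    A stand-in for the histogram-based significance measure is sketched below: each feature's unidimensional histogram is scored by the prominence of its hills, and features are ranked accordingly; the prominence-sum score is an illustrative proxy for the AHIMSA measure, not its exact definition.

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_modality_ranking(X, bins=32):
    """Rank features by the hill/valley structure of their histograms.

    Features whose unidimensional histogram shows several well-separated
    hills are more likely to separate clusters in unsupervised data, so
    they score higher. The score is the summed prominence of the peaks.
    """
    scores = []
    for j in range(X.shape[1]):
        hist, _ = np.histogram(X[:, j], bins=bins)
        peaks, props = find_peaks(hist, prominence=1)
        scores.append(props["prominences"].sum() if len(peaks) else 0.0)
    return np.argsort(scores)[::-1]   # most to least useful feature indices

# Toy usage: feature 0 is bimodal, the rest are unimodal noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X[:500, 0] += 6.0   # create two hills in feature 0
ranking = histogram_modality_ranking(X)   # feature 0 should rank first
```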

  13. Trends in analytical techniques applied to particulate matter characterization: A critical review of fundaments and applications.

    PubMed

    Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael

    2018-05-01

    Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works use different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic-spectrometry-based and X-ray-based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Base-Catalyzed Linkage Isomerization: An Undergraduate Inorganic Kinetics Experiment.

    ERIC Educational Resources Information Center

    Jackson, W. G.; And Others

    1981-01-01

    Describes kinetics experiments completed in a single two-hour laboratory period at 25 degrees Centigrade of nitrito to nitro rearrangement, based on the recently discovered base-catalysis path. Includes information on synthesis and characterization of linkage isomers, spectrophotometric techniques, and experimental procedures. (SK)

  15. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

    Performance and workload assessment techniques have been based on models of human information processing; Broadbent (1958) described a limited capacity filter model of human information processing. [Remainder of excerpt is table-of-contents residue covering auditory information processing (auditory attention, auditory memory), models of information processing (capacity theories), and operator tasks such as decision making and problem solving.]

  16. Developing an Information Infrastructure To Support Information Retrieval: Towards a Theory of Clustering Based in Classification.

    ERIC Educational Resources Information Center

    Micco, Mary; Popp, Rich

    Techniques for building a world-wide information infrastructure by reverse engineering existing databases to link them in a hierarchical system of subject clusters to create an integrated database are explored. The controlled vocabulary of the Library of Congress Subject Headings is used to ensure consistency and group similar items. Each database…

  17. An Information-Systems Program for the Language Sciences. Final Report on Survey-and-Analysis Stage, 1967-1968.

    ERIC Educational Resources Information Center

    Freeman, Robert R.; And Others

    The main results of the survey-and-analysis stage include a substantial collection of preliminary data on the language-sciences information user community, its professional specialties and information channels, its indexing tools, and its terminologies. The prospects and techniques for the development of a modern, discipline-based information…

  18. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
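
    A minimal sketch of the event-driven selective-trace idea, assuming a generic per-node update function rather than the actual SOR-Newton node solve of ITA: only nodes whose value changes beyond a tolerance schedule re-evaluation of their fanout, so temporally latent parts of the network are never touched.

    ```python
    import heapq

    def event_driven_relax(nodes, fanout, update, v0, t_end, dt, tol=1e-6):
        """Toy event-driven selective-trace loop: re-solve only scheduled
        nodes, and schedule the fanout of any node that actually moved."""
        v = dict(v0)
        events = [(0.0, n) for n in nodes]      # (time, node) priority queue
        heapq.heapify(events)
        while events:
            t, n = heapq.heappop(events)
            if t > t_end:
                break
            new = update(n, v, t)               # one node relaxation step
            if abs(new - v[n]) > tol:           # node is active: trace fanout
                v[n] = new
                for m in fanout[n]:
                    heapq.heappush(events, (t + dt, m))
        return v

    # Example: two coupled nodes relaxing toward a common value.
    nodes = ["a", "b"]
    fanout = {"a": ["b"], "b": ["a"]}
    update = lambda n, v, t: 0.5 * (v[n] + v["b" if n == "a" else "a"])
    print(event_driven_relax(nodes, fanout, update, {"a": 1.0, "b": 0.0}, 1.0, 0.01))
    ```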

  19. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification.

    PubMed

    Gunavathi, Chellamuthu; Premalatha, Kandasamy

    2014-01-01

    Feature selection in cancer classification is a central area of research in the field of bioinformatics and used to select the informative genes from thousands of genes of the microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. The swarm intelligence (SI) technique finds the informative genes from the top-m ranked genes. These selected genes are used for classification. In this paper the shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of shuffled frog leaping (SFL) algorithm. The SI techniques such as particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection which identifies informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied on 10 different benchmark datasets and examined with SI techniques. The experimental results show that the results obtained from k-NN classifier through SFLLF feature selection method outperform PSO, CS, and SFL.
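
    A compact sketch of the filter-then-classify pipeline described above, with the swarm search over the top-m ranked genes replaced by simply keeping the top-m SNR-ranked genes (the SFLLF search itself is beyond a short example); the data are synthetic stand-ins:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def snr_rank(X, y):
        """Rank genes by signal-to-noise ratio between two classes."""
        a, b = X[y == 0], X[y == 1]
        snr = np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)
        return np.argsort(snr)[::-1]

    # Synthetic microarray-like data: 60 samples x 2000 genes, 20 informative.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 2000))
    y = np.repeat([0, 1], 30)
    X[y == 1, :20] += 1.5                       # shift the informative genes

    top_m = snr_rank(X, y)[:50]                 # filter stage: top-m genes
    acc = cross_val_score(KNeighborsClassifier(5), X[:, top_m], y, cv=5).mean()
    print(f"5-fold k-NN accuracy on selected genes: {acc:.2f}")
    ```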

  20. Digital Mapping Techniques '07 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2008-01-01

    The Digital Mapping Techniques '07 (DMT'07) workshop was attended by 85 technical experts from 49 agencies, universities, and private companies, including representatives from 27 state geological surveys. This year's meeting, the tenth in the annual series, was hosted by the South Carolina Geological Survey, from May 20-23, 2007, on the University of South Carolina campus in Columbia, South Carolina. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  1. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work addresses how image quality can be improved by introducing an image fusion technique at the sensor level. The fused images produced by the decision rule based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.
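
    The abstract does not specify the decision rule itself, so the sketch below illustrates only the two evaluation metrics it names, entropy and root mean square error, computed for a hypothetical fused image:

    ```python
    import numpy as np

    def entropy(img, levels=256):
        """Shannon entropy of an 8-bit image, in bits per pixel."""
        hist, _ = np.histogram(img, bins=levels, range=(0, levels))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def rmse(fused, reference):
        """Root mean square error between fused and reference images."""
        d = fused.astype(float) - reference.astype(float)
        return float(np.sqrt((d ** 2).mean()))

    # Example with stand-in 8-bit "images"
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64))
    fused = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
    print(entropy(fused), rmse(fused, ref))
    ```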

  2. PLAN2L: a web tool for integrated text mining and literature-based bioentity relation extraction.

    PubMed

    Krallinger, Martin; Rodriguez-Penagos, Carlos; Tendulkar, Ashish; Valencia, Alfonso

    2009-07-01

    There is an increasing interest in using literature mining techniques to complement information extracted from annotation databases or generated by bioinformatics applications. Here we present PLAN2L, a web-based online search system that integrates text mining and information extraction techniques to systematically access information useful for analyzing genetic, cellular and molecular aspects of the plant model organism Arabidopsis thaliana. Our system facilitates a more efficient retrieval of information relevant to heterogeneous biological topics, from implications in biological relationships at the level of protein interactions and gene regulation, to sub-cellular locations of gene products and associations to cellular and developmental processes, i.e. cell cycle, flowering, root, leaf and seed development. Beyond single entities, predefined pairs of entities can also be provided as queries, for which literature-derived relations together with textual evidence are returned. PLAN2L does not require registration and is freely accessible at http://zope.bioinfo.cnio.es/plan2l.

  3. An industrial information integration approach to in-orbit spacecraft

    NASA Astrophysics Data System (ADS)

    Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng

    2017-01-01

    To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data, and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web, agents, scenarios and ontologies. In modelling, the subjects of astronautics fields are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper. We discuss system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the system development of other large-scale and complex information systems.

  4. Network meta-analysis: a technique to gather evidence from direct and indirect comparisons

    PubMed Central

    2017-01-01

    Systematic reviews and pairwise meta-analyses of randomized controlled trials, at the intersection of clinical medicine, epidemiology and statistics, are positioned at the top of the evidence-based practice hierarchy. These are important tools for drug approvals, the formulation of clinical protocols and guidelines, and decision-making. However, this traditional technique only partially yields the information that clinicians, patients and policy-makers need to make informed decisions, since it usually compares only two interventions at a time. In the market, regardless of the clinical condition under evaluation, many interventions are usually available, and few of them have been studied in head-to-head studies. This scenario precludes conclusions from being drawn from comparisons of the full profile of all interventions (e.g. efficacy and safety). The recent development and introduction of a new technique – usually referred to as network meta-analysis, indirect meta-analysis, multiple or mixed treatment comparisons – has allowed the estimation of metrics for all possible comparisons in the same model, simultaneously gathering direct and indirect evidence. Over the last years this statistical tool has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both Frequentist and Bayesian frameworks, with different software packages. However, the conduction, reporting and interpretation of network meta-analysis still pose multiple challenges that should be carefully considered, especially because this technique inherits all assumptions from pairwise meta-analysis but with increased complexity. Thus, we aim to provide a basic explanation of how to conduct a network meta-analysis, highlighting its risks and benefits for evidence-based practice, including information on the evolution of statistical methods, assumptions and steps for performing the analysis. PMID:28503228
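
    The arithmetic at the core of gathering indirect evidence can be shown in a few lines. The sketch below implements the classical Bucher adjusted indirect comparison, a simple special case of network meta-analysis rather than the full model-based machinery the paper surveys:

    ```python
    import math

    def indirect_comparison(d_ab, se_ab, d_ac, se_ac):
        """Bucher adjusted indirect comparison: estimate B vs C from
        A-vs-B and A-vs-C effects sharing the common comparator A."""
        d_bc = d_ac - d_ab                         # indirect effect
        se_bc = math.sqrt(se_ab**2 + se_ac**2)     # variances add
        ci = (d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc)
        return d_bc, se_bc, ci

    # Example: log odds ratios versus placebo (A) for drugs B and C.
    print(indirect_comparison(d_ab=-0.30, se_ab=0.12, d_ac=-0.55, se_ac=0.15))
    ```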

  5. Analysis of health behaviour change interventions for preventing dental caries delivered in primary schools.

    PubMed

    Adair, P M; Burnside, G; Pine, C M

    2013-01-01

    To improve oral health in children, the key behaviours (tooth brushing and sugar control) responsible for development of dental caries need to be better understood, as well as how to promote these behaviours effectively so they become habitual; and, the specific, optimal techniques to use in interventions. The aim of this paper is to describe and analyse the behaviour change techniques that have been used in primary school-based interventions to prevent dental caries (utilizing a Cochrane systematic review that we have undertaken) and to identify opportunities for improving future interventions by incorporating a comprehensive range of behaviour change techniques. Papers of five interventions were reviewed and data were independently extracted. Results indicate that behaviour change techniques were limited to information-behaviour links, information on consequences, instruction and demonstration of behaviours. None of the interventions were based on behaviour change theory. We conclude that behaviour change techniques used in school interventions to reduce dental caries were limited and focused around providing information about how behaviour impacts on health and the consequences of not developing the correct health behaviours as well as providing oral hygiene instruction. Establishing which techniques are effective is difficult due to poor reporting of interventions in studies. Future design of oral health promotion interventions using behaviour change theory for development and evaluation (and reporting results in academic journals) could strengthen the potential for efficacy and provide a framework to use a much wider range of behaviour change techniques. Future studies should include development and publication of intervention manuals which is becoming standard practice in other health promoting programmes. © 2013 S. Karger AG, Basel.

  6. A Study on User Interface Design of Knowledge Management Groupware in Selected Leading Organizations of Pakistan

    DTIC Science & Technology

    2004-06-01

    Several techniques exist for evaluating a groupware interface, but inspection-based techniques could not be carried out in parts of Pakistan where the IT industry has mushroomed in the past few years. There are no set standards for using any particular technique; evaluating a groupware interface is an evolving process and requires further investigation. [Excerpt is fragmentary; an affiliation line (Information Systems, Faculty of ICT, International Islamic University, Malaysia) is interleaved with the abstract.]

  7. Spatial-spectral preprocessing for endmember extraction on GPU's

    NASA Astrophysics Data System (ADS)

    Jimenez, Luis I.; Plaza, Javier; Plaza, Antonio; Li, Jun

    2016-10-01

    Spectral unmixing is focused on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. Although mainly focused on the spectral information contained in hyperspectral images, endmember extraction techniques have recently included spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including spectral-spatial endmember extraction (SSEE) where, within a preprocessing step in the technique, both sources of information are extracted from the hyperspectral image and equally used for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvector calculation in each sub-region into which the original hyperspectral image is divided; 2) computation of the maxima and minima projections of all eigenvectors over the entire hyperspectral image in order to obtain a set of candidate pixels; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good chance of improving the computational performance in this context. In this paper, we develop two different implementations of the SSEE algorithm using GPUs. Both are based on the eigenvector computation within each sub-region of the first step, one using singular value decomposition (SVD) and the other using principal component analysis (PCA). Based on our experiments with hyperspectral data sets, high computational performance is observed in both cases.
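
    A simplified CPU sketch of the first, second and fourth SSEE steps (per-block eigenvectors via SVD, global extreme projections, SAD-based ranking; the expansion/averaging step is omitted), with a random cube standing in for real hyperspectral data:

    ```python
    import numpy as np

    def sad(a, b):
        """Spectral angle distance between two spectra."""
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def ssee_candidates(img, block=16, n_eig=3):
        """Simplified SSEE-style candidate search on an
        (rows, cols, bands) hyperspectral cube."""
        r, c, b = img.shape
        pixels = img.reshape(-1, b)
        cand = set()
        for i in range(0, r, block):              # step 1: local eigenvectors
            for j in range(0, c, block):
                sub = img[i:i + block, j:j + block].reshape(-1, b)
                sub0 = sub - sub.mean(0)
                _, _, vt = np.linalg.svd(sub0, full_matrices=False)
                for v in vt[:n_eig]:              # step 2: global projections
                    proj = pixels @ v
                    cand.update((int(proj.argmax()), int(proj.argmin())))
        cand = sorted(cand)
        spectra = pixels[cand]
        # step 4: rank candidates by mean spectral angle to the others
        scores = [np.mean([sad(s, t) for t in spectra]) for s in spectra]
        return [cand[k] for k in np.argsort(scores)[::-1]]

    rng = np.random.default_rng(0)
    cube = rng.random((32, 32, 8))                # toy 32x32x8 cube
    print(ssee_candidates(cube)[:5])              # top-ranked candidate pixels
    ```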

  8. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
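
    The heart of FIML is that every case contributes the likelihood of just its observed variables, so no rows are discarded. A minimal sketch of that casewise log-likelihood for a multivariate normal model (in practice an optimizer maximizes this over the mean and covariance):

    ```python
    import numpy as np

    def fiml_loglik(data, mu, sigma):
        """Full-information ML log-likelihood: each row contributes the
        marginal normal density of its observed variables only."""
        total = 0.0
        for row in data:
            obs = ~np.isnan(row)                  # observed-variable mask
            k = int(obs.sum())
            if k == 0:
                continue                          # row with no data at all
            x, m = row[obs], mu[obs]
            s = sigma[np.ix_(obs, obs)]           # marginal covariance block
            diff = x - m
            total += -0.5 * (k * np.log(2 * np.pi)
                             + np.linalg.slogdet(s)[1]
                             + diff @ np.linalg.solve(s, diff))
        return total

    # Evaluate a mean/covariance guess on data with 20% missing at random.
    rng = np.random.default_rng(0)
    full = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
    full[rng.random(full.shape) < 0.2] = np.nan
    print(fiml_loglik(full, np.zeros(3), np.eye(3)))
    ```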

  9. Going Beyond, Going Further: The Preparation of Acid-Base Titration Curves.

    ERIC Educational Resources Information Center

    McClendon, Michael

    1984-01-01

    Background information, list of materials needed, and procedures used are provided for a simple technique for generating mechanically plotted acid-base titration curves. The method is suitable for second-year high school chemistry students. (JN)
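
    For the strong acid-strong base case, such a curve follows directly from the charge balance; the short sketch below is generic textbook chemistry, not the article's specific plotting procedure:

    ```python
    import numpy as np

    def ph_strong_strong(Ca, Va, Cb, vb, Kw=1e-14):
        """pH of a strong acid (Ca mol/L, Va L) titrated with vb L of a
        strong base (Cb mol/L), from the exact charge balance
        [H+] - Kw/[H+] = (Ca*Va - Cb*vb) / (Va + vb)."""
        D = (Ca * Va - Cb * vb) / (Va + vb)
        h = (D + np.sqrt(D**2 + 4 * Kw)) / 2      # positive root for [H+]
        return -np.log10(h)

    vb = np.linspace(0, 0.05, 11)                 # 0 to 50 mL of titrant
    for v, ph in zip(vb, ph_strong_strong(0.1, 0.025, 0.1, vb)):
        print(f"{1000 * v:5.1f} mL  pH = {ph:5.2f}")   # pH 7.00 at 25 mL
    ```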

  10. Method, apparatus and system to compensate for drift by physically unclonable function circuitry

    DOEpatents

    Hamlet, Jason

    2016-11-22

    Techniques and mechanisms to detect and compensate for drift by a physically unclonable function (PUF) circuit. In an embodiment, first state information is registered as reference information to be made available for subsequent evaluation of whether drift by PUF circuitry has occurred. The first state information is associated with a first error correction strength. The first state information is generated based on a first PUF value output by the PUF circuitry. In another embodiment, second state information is determined based on a second PUF value that is output by the PUF circuitry. An evaluation of whether drift has occurred is performed based on the first state information and the second state information, the evaluation including determining whether a threshold error correction strength is exceeded concurrent with a magnitude of error being less than the first error correction strength.
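
    A toy illustration of the comparison logic described above, not the patented mechanism itself: a reference response registered at enrollment is compared against a fresh PUF response, and drift is flagged when the error exceeds a chosen threshold while remaining within the error-correction strength:

    ```python
    def hamming(a, b):
        """Number of differing bits between two equal-length responses."""
        return bin(a ^ b).count("1")

    def check_drift(reference, response, ecc_strength, drift_threshold):
        """Flag drift when the error is beyond the drift threshold but
        still within what the error-correcting code can repair."""
        errors = hamming(reference, response)
        correctable = errors <= ecc_strength
        drifting = correctable and errors > drift_threshold
        return correctable, drifting

    # 64-bit PUF response registered at enrollment (hypothetical values)
    ref = 0xDEADBEEFCAFEF00D
    noisy = ref ^ 0b10110000001                    # 4 bits flipped since then
    print(check_drift(ref, noisy, ecc_strength=8, drift_threshold=3))
    # -> (True, True): still correctable, but compensation should run
    ```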

  11. An Object-Based Requirements Modeling Method.

    ERIC Educational Resources Information Center

    Cordes, David W.; Carver, Doris L.

    1992-01-01

    Discusses system modeling and specification as it relates to object-based information systems development and software development. An automated system model based on the objects in the initial requirements document is described, the requirements document translator is explained, and a sample application of the technique is provided. (12…

  12. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.

  13. Demographic management in a federated healthcare environment.

    PubMed

    Román, I; Roa, L M; Reina-Tosina, J; Madinabeitia, G

    2006-09-01

    The purpose of this paper is to provide a further step toward the decentralization of identification and demographic information about persons by solving issues related to the integration of demographic agents in a federated healthcare environment. The aim is to identify a particular person in every system of a federation and to obtain a unified view of his/her demographic information stored in different locations. This work is based on semantic models and techniques, and pursues the reconciliation of several current standardization works including ITU-T's Open Distributed Processing, CEN's prEN 12967, OpenEHR's dual and reference models, CEN's General Purpose Information Components and CORBAmed's PID service. We propose a new paradigm for the management of person identification and demographic data, based on the development of an open architecture of specialized distributed components together with the incorporation of techniques for the efficient management of domain ontologies, in order to provide a federated demographic service. This new service enhances previous correlation solutions, sharing ideas with different standards and domains, like semantic techniques and database systems. The federation philosophy forces us to devise solutions to the semantic, functional and instance incompatibilities in our approach. Although this work is based on several models and standards, we have improved them by combining their contributions and developing a federated architecture that does not require the centralization of demographic information. The solution is thus a good approach to integration problems, and the applied methodology can be easily extended to other tasks involved in the healthcare organization.

  14. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang

    2018-02-01

    Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are used in many biomedical applications and research studies, integration of these techniques will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging due to the differences in optical and hardware requirements between imaging systems. Here, we developed a multimodal optical imaging system capable of providing comprehensive structural, functional and molecular information of living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral domain OCT provides structural information based on the scattering properties of a biological sample with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT with the capability of measuring and visualizing blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In-vivo as well as ex-vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.

  15. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated by improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point localization is introduced, avoiding point-by-point processing with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.

  16. Technique for information retrieval using enhanced latent semantic analysis generating rank approximation matrix by factorizing the weighted morpheme-by-document matrix

    DOEpatents

    Chew, Peter A; Bader, Brett W

    2012-10-16

    A technique for information retrieval includes parsing a corpus to identify a number of wordform instances within each document of the corpus. A weighted morpheme-by-document matrix is generated based at least in part on the number of wordform instances within each document of the corpus and based at least in part on a weighting function. The weighted morpheme-by-document matrix separately enumerates instances of stems and affixes. Additionally or alternatively, a term-by-term alignment matrix may be generated based at least in part on the number of wordform instances within each document of the corpus. At least one lower rank approximation matrix is generated by factorizing the weighted morpheme-by-document matrix and/or the term-by-term alignment matrix.
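
    A minimal sketch of the weighting-plus-factorization idea, using a common log-entropy LSA weighting and a truncated SVD; whitespace tokens stand in here for the patent's morpheme-level (stem and affix) enumeration:

    ```python
    import numpy as np

    docs = ["walk walked walking", "talk talked", "walk talk", "run running"]
    vocab = sorted({w for d in docs for w in d.split()})
    A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

    # Log-entropy weighting (a common LSA weighting function)
    p = A / np.maximum(A.sum(1, keepdims=True), 1e-12)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = 1 + np.nansum(p * np.log(p), axis=1) / np.log(len(docs))
    W = np.log1p(A) * ent[:, None]

    # Lower-rank approximation by truncated SVD
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = 2
    W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]       # rank-k approximation
    print(np.round(W_k, 2))
    ```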

  17. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    NASA Astrophysics Data System (ADS)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities of adverse events being realized and predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the experts' assessments.
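
    The risk arithmetic described here reduces to an expected loss per node: the probability of the adverse event times the predicted damage. A trivial sketch with hypothetical assessments:

    ```python
    # Hypothetical per-node assessments: (event, probability, damage in $)
    assessments = [
        ("unauthorized access", 0.10, 50_000),
        ("data exfiltration",   0.03, 400_000),
        ("service disruption",  0.20, 15_000),
    ]

    for event, prob, damage in assessments:
        print(f"{event:22s} expected loss = ${prob * damage:>10,.0f}")

    total = sum(prob * damage for _, prob, damage in assessments)
    print(f"{'total':22s} expected loss = ${total:>10,.0f}")
    ```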

  18. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    NASA Astrophysics Data System (ADS)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  19. Investigation of the Impact of Extracting and Exchanging Health Information by Using Internet and Social Networks.

    PubMed

    Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna

    2016-06-01

    Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays for both searching and exchanging information on different issues by using Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). In this paper, we present the results of a study of the frequency and the type of usage of the Internet and social networks by the general public and health professionals. The objectives of the research were focused on the investigation of the frequency of seeking and meticulously searching for health information in the social media by both individuals and health practitioners. The exchange of information is a procedure that involves issues of reliability and quality of information. In this research, advanced statistical techniques are used to investigate the participants' profiles in using social networks for searching and exchanging information on health issues. Based on the answers, 93% of the people use the Internet to find information on health subjects. Considering principal component analysis, the most important health subjects were nutrition (0.719), respiratory issues (0.79), cardiological issues (0.777), psychological issues (0.667) and total (73.8%). The results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use the social networks for searching for medical information. Based on the principal component analysis, the most important sources that the participants mentioned were the use of the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals.

  20. Kristi Potter | NREL

    Science.gov Websites

    Dr. Potter's research focuses on methods to improve visualization techniques by adding qualitative information, including sketch-based methods for conveying levels of reliability in architectural renderings. [Excerpt from a fragmentary web-page biography.]

  1. Spatial downscaling of soil prediction models based on weighted generalized additive models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D

    2017-09-11

    Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. An appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps into fine spatial resolution downscaled soil maps at the farm scale.
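
    A minimal sketch of the weighted-GAM downscaling idea, assuming the pyGAM package; the variables (two spectral indices, a coarse-resolution soil property, per-cell weights) are illustrative stand-ins, not the study's data:

    ```python
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(0)
    n = 200                                        # coarse-resolution cells
    ndvi = rng.random(n)                           # spectral indices
    evi = rng.random(n)                            # aggregated to coarse grid
    k_coarse = 40 + 30 * ndvi - 10 * evi + rng.normal(0, 2, n)
    w = rng.uniform(0.5, 1.0, n)                   # per-cell weights

    # Weighted GAM: smooth terms of the two spectral indices
    gam = LinearGAM(s(0) + s(1)).fit(np.column_stack([ndvi, evi]),
                                     k_coarse, weights=w)

    # Predict at fine resolution from fine-scale spectral indices
    fine = np.column_stack([rng.random(1000), rng.random(1000)])
    k_fine = gam.predict(fine)                     # downscaled soil property
    print(k_fine[:5])
    ```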

  2. Needed Actions within Defense Acquisitions Based on a Forecast of Future Mobile Information and Communications Technologies Deployed in Austere Environments

    DTIC Science & Technology

    2013-03-01

    Mishra, Deshmukh, and Vrat (2002) performed an analysis to match forecasting techniques with specific technologies. [Remainder of excerpt is reference-list residue: Technological Forecasting and Social Change, 79, 744-765; Mishra, S., Deshmukh, S., & Vrat, P. (2002). Matching of technological forecasting technique to…]

  3. Integrating Traditional Learning and Games on Large Displays: An Experimental Study

    ERIC Educational Resources Information Center

    Ardito, Carmelo; Lanzilotti, Rosa; Costabile, Maria F.; Desolda, Giuseppe

    2013-01-01

    Current information and communication technology (ICT) has the potential to bring further changes to education. New learning techniques must be identified to take advantage of recent technological tools, such as smartphones, multimodal interfaces, multi-touch displays, etc. Game-based techniques that capitalize on ICT have proven to be very…

  4. A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio

    2013-01-01

    The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which allows obtaining similar information. The…

  5. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  6. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off in terms of imperceptibility and payload compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
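
    A generic illustration of the embedding idea, not the EBBD algorithm itself: bits are hidden in the least significant bits of pseudo-randomly selected nonzero AC coefficients of an 8 × 8 quantized-DCT block, with a keyed XOR standing in for S-DES:

    ```python
    import numpy as np

    def embed_block(qtc, bits, key):
        """Hide bits in pseudo-randomly chosen nonzero AC coefficients
        of one 8x8 block of quantized coefficients (LSB substitution)."""
        rng = np.random.default_rng(key)              # pseudo-random key
        block = qtc.copy().ravel()
        ac = [i for i in range(1, 64) if block[i] != 0]   # skip DC and zeros
        pos = rng.permutation(ac)[:len(bits)]
        for p, bit in zip(pos, bits):
            block[p] = (block[p] & ~1) | bit              # set the LSB
        return block.reshape(8, 8), pos

    def extract_block(qtc, pos):
        return [int(qtc.ravel()[p]) & 1 for p in pos]

    secret = [1, 0, 1, 1]
    key_bits = [0, 1, 1, 0]
    cipher = [s ^ k for s, k in zip(secret, key_bits)]    # stand-in for S-DES

    rng = np.random.default_rng(42)
    block = rng.integers(-20, 20, (8, 8))                 # fake AC-QTC block
    stego, pos = embed_block(block, cipher, key=7)
    recovered = [c ^ k for c, k in zip(extract_block(stego, pos), key_bits)]
    print(recovered == secret)                            # True
    ```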

  8. Incorporation of genomic information into genetic evaluation: U. S. beef industry as a model

    USDA-ARS?s Scientific Manuscript database

    In his presentation, Dr. Kuehn described approaches for using information garnered through developments in genomics to improve the accuracy of genetic evaluation. He considered the history of these molecular-based techniques, including their strengths and potential weaknesses, and his experiences wi...

  9. Personalized Recommender System for Digital Libraries

    ERIC Educational Resources Information Center

    Omisore, M. O.; Samuel, O. W.

    2014-01-01

    The huge amount of information available online has given rise to personalization and filtering systems. Recommender systems (RS) constitute a specific type of information filtering technique that present items according to user's interests. In this research, a web-based personalized recommender system capable of providing learners with books that…

  10. Informal Reading Inventories: Creating Teacher-Designed Literature-Based Assessments

    ERIC Educational Resources Information Center

    Provost, Mary C.; Lambert, Monica A.; Babkie, Andrea M.

    2010-01-01

    Mandates emphasizing student achievement have increased the importance of appropriate assessment techniques for students in general and special education classrooms. Informal reading inventories (IRIs), designed by classroom teachers, have been proven to be an efficient and effective way to determine students' strengths, weaknesses, and strategies…

  11. Addressing Information Literacy through Student-Centered Learning

    ERIC Educational Resources Information Center

    Bond, Paul

    2016-01-01

    This case study describes several courses that resulted from a teaching partnership between an instructional technologist/professor and a librarian that evolved over several semesters, and the information literacy implications of the course formats. In order to increase student engagement, active learning and inquiry-based learning techniques were…

  12. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct) data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined and found to be far superior to the other techniques known in achieving the objectives listed above. It employs a new data coding and reduction technique in which the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  13. Characterization of shape and deformation of MEMS by quantitative optoelectronic metrology techniques

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Recent technological trends based on miniaturization of mechanical, electro-mechanical, and photonic devices to the microscopic scale have led to the development of microelectromechanical systems (MEMS). Effective development of MEMS components requires the synergism of advanced design, analysis, and fabrication methodologies, and also of quantitative metrology techniques for characterizing their performance, reliability, and integrity during the electronic packaging cycle. In this paper, we describe opto-electronic techniques for measuring, with sub-micrometer accuracy, shape and changes in states of deformation of MEMS structures. With the described opto-electronic techniques, it is possible to characterize MEMS components using the display and data modes. In the display mode, interferometric information related to shape and deformation is displayed at video frame rates, providing the capability for adjusting and setting experimental conditions. In the data mode, interferometric information related to shape and deformation is recorded as high-spatial and high-digital resolution images, which are further processed to provide quantitative 3D information. Furthermore, the quantitative 3D data are exported to computer-aided design (CAD) environments and utilized for analysis and optimization of MEMS devices. Capabilities of opto-electronic techniques are illustrated with representative applications demonstrating their applicability to provide indispensable quantitative information for the effective development and optimization of MEMS devices.

  14. Automatic topic identification of health-related messages in online health community using text classification.

    PubMed

    Lu, Yingjie

    2013-01-01

    To facilitate patient involvement in online health communities and help patients obtain the informative support and emotional support they need, a topic identification approach is proposed in this paper for automatically identifying the topics of health-related messages in an online health community, thus assisting patients in reaching the most relevant messages for their queries efficiently. A feature-based classification framework is presented for automatic topic identification in our study. We first collected messages related to some predefined topics in an online health community. We then combined three different types of features, n-gram-based features, domain-specific features and sentiment features, to build four feature sets for health-related text representation. Finally, three different text classification techniques, C4.5, Naïve Bayes and SVM, were adopted to evaluate our topic classification model. By comparing different feature sets and different classification techniques, we found that n-gram-based features, domain-specific features and sentiment features were all effective in distinguishing different types of health-related topics. In addition, a feature reduction technique based on information gain was also effective in improving the topic classification performance. In terms of classification techniques, SVM outperformed C4.5 and Naïve Bayes significantly. The experimental results demonstrate that the proposed approach can identify the topics of online health-related messages efficiently.
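
    A modern equivalent of this pipeline takes a few lines with scikit-learn; here chi-squared scoring stands in for the paper's information-gain feature reduction, and the corpus is a hypothetical stand-in:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny stand-in corpus of health-community messages
    messages = [
        "what dose of insulin should I take before meals",
        "feeling anxious and depressed since the diagnosis",
        "my blood sugar readings are high every morning",
        "thank you all for the support, it means a lot",
    ]
    topics = ["treatment", "emotional", "treatment", "emotional"]

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # n-gram-based features
        SelectKBest(chi2, k=20),               # feature reduction step
        LinearSVC(),                           # SVM classifier
    )
    clf.fit(messages, topics)
    print(clf.predict(["my sugar levels spike after breakfast"]))
    ```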

  15. Absorption Filter Based Optical Diagnostics in High Speed Flows

    NASA Technical Reports Server (NTRS)

    Samimy, Mo; Elliott, Gregory; Arnette, Stephen

    1996-01-01

    Two major regimes in which laser light scattered by molecules or particles in a flow contains significant information about the flow are Mie scattering and Rayleigh scattering. Mie scattering is used to obtain only velocity information, while Rayleigh scattering can be used to measure both the velocity and the thermodynamic properties of the flow. Recently introduced (1990, 1991) absorption filter based diagnostic techniques have started a new era in flow visualization, simultaneous velocity and thermodynamic measurements, and planar velocity measurements. Using a filtered planar velocimetry (FPV) technique, we have modified the optically thick iodine filter profile of Miles et al. and used it in the pressure-broadened regime, which accommodates measurements in a wide range of velocity applications. Measuring velocity and thermodynamic properties simultaneously, using absorption filter based Rayleigh scattering, involves not only the measurement of the Doppler shift, but also the spectral profile of the Rayleigh scattering signal. Using multiple observation angles, one component of velocity and the thermodynamic properties in a supersonic jet were measured simultaneously. Presently, the technique is being extended to simultaneous measurements of all three components of velocity and the thermodynamic properties.

  16. A conversation-based process tracing method for use with naturalistic decisions: an evaluation study.

    PubMed

    Williamson, J; Ranyard, R; Cuthbert, L

    2000-05-01

    This study is an evaluation of a process tracing method developed for naturalistic decisions, in this case a consumer choice task. The method is based on Huber et al.'s (1997) Active Information Search (AIS) technique, but develops it by providing spoken rather than written answers to respondents' questions, and by including think aloud instructions. The technique is used within a conversation-based situation, rather than the respondent thinking aloud 'into an empty space', as is conventionally the case in think aloud techniques. The method results in a concurrent verbal protocol as respondents make their decisions, and a retrospective report in the form of a post-decision summary. The method was found to be virtually non-reactive in relation to think aloud, although the variable of Preliminary Attribute Elicitation showed some evidence of reactivity. This was a methodological evaluation, and as such the data reported are essentially descriptive. Nevertheless, the data obtained indicate that the method is capable of producing information about decision processes which could have theoretical importance in terms of evaluating models of decision-making.

  17. An intelligent content discovery technique for health portal content management.

    PubMed

    De Silva, Daswin; Burstein, Frada

    2014-04-23

    Continuous content management of health information portals is a feature vital for their sustainability and widespread acceptance. The knowledge and experience of a domain expert is essential for content management in the health domain. The rate of generation of online health resources is exponential, and thereby manual examination for relevance to a specific topic and audience is a formidable challenge for domain experts. Intelligent content discovery for effective content management is a less researched topic. An existing expert-endorsed content repository can provide the necessary leverage to automatically identify relevant resources and evaluate qualitative metrics. This paper reports on design research towards an intelligent technique for automated content discovery and ranking for health information portals. The proposed technique aims to improve the efficiency of the current, mostly manual, process of portal content management by utilising an existing expert-endorsed content repository as a supporting base and a benchmark to evaluate the suitability of new content. A model for content management was established based on a field study of potential users. The proposed technique is integral to this content management model and executes in several phases (i.e., query construction, content search, text analytics and fuzzy multi-criteria ranking). The construction of multi-dimensional search queries with input from WordNet, the use of multi-word and single-word terms as representative semantics for text analytics, and the use of fuzzy multi-criteria ranking for subjective evaluation of quality metrics are original contributions reported in this paper. The feasibility of the proposed technique was examined with experiments conducted on an actual health information portal, the BCKOnline portal. Both intermediary and final results generated by the technique are presented in the paper; these help to establish the benefits of the technique and its contribution towards effective content management. The prevalence of large numbers of online health resources is a key obstacle for domain experts involved in content management of health information portals and websites. The proposed technique has proven successful at the search and identification of resources and the measurement of their relevance. It can be used to support the domain expert in content management and thereby ensure the health portal is up-to-date and current.

  18. A bird strike handbook for base-level managers

    NASA Astrophysics Data System (ADS)

    Payson, R. P.; Vance, J. D.

    1984-09-01

    To help develop more awareness about bird strikes and bird strike reduction techniques, this thesis compiled all relevant information through an extensive literature search, review of base-level documents, and personal interviews. The final product--A Bird Strike Handbook for Base-Level Managers--provides information on bird strike statistics, methods to reduce the strike hazards, and means to obtain additional assistance. The handbook is organized for use by six major base agencies: Maintenance, Civil Engineering, Operations, Air Field Management, Safety, and Air Traffic Control. An appendix follows at the end.

  19. A Course in Information Techniques for Dental Students

    PubMed Central

    Dannenberg, Dena

    1972-01-01

    A course plan is presented for introducing literature searching and critical skills to dental students. Topics include the “life cycle of information,” reference sources available, search procedure, abstracting and indexing, and personal information systems. Teaching is structured around planned seminars and student projects. The course design is compatible with traditional dental curricula and is based on students' interest in dentistry rather than in information/library science. PMID:5024320

  20. Information visualization of the minority game

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Herbert, R. D.; Webber, R.

    2008-02-01

    Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people analyze the system. In this paper, we propose the use of two space-filling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the original information visualization tools. The model we use is an evolutionary version of the Minority Game, which simulates a financial market.
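
    For reference, the data-generating model itself is compact. Below is a minimal Minority Game simulation (the standard textbook formulation, which may differ in detail from the authors' evolutionary version) producing the attendance series such visualizations consume:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, S, T = 101, 3, 2, 500        # agents, memory, strategies, rounds

    # Each strategy maps each of the 2^M histories to an action (0 or 1)
    strategies = rng.integers(0, 2, (N, S, 2 ** M))
    scores = np.zeros((N, S))
    history = 0                        # M-bit history encoded as an integer
    attendance = []

    for _ in range(T):
        best = scores.argmax(1)                      # each agent's best strategy
        actions = strategies[np.arange(N), best, history]
        winner = int(actions.sum() < N / 2)          # the minority side wins
        # Reward every strategy that would have chosen the minority side
        scores += (strategies[:, :, history] == winner)
        attendance.append(int(actions.sum()))
        history = ((history << 1) | winner) & (2 ** M - 1)

    print("mean attendance:", np.mean(attendance))
    print("volatility sigma^2/N:", np.var(attendance) / N)
    ```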

  1. Setting analyst: A practical harvest planning technique

    Treesearch

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  2. A Symbolic Approach Using Feature Construction Capable of Acquiring Information/Knowledge for Building Expert Systems.

    ERIC Educational Resources Information Center

    Major, Raymond L.

    1998-01-01

    Presents a technique for developing a knowledge base of information for use in an expert system. The proposed approach employs a popular machine-learning algorithm along with a method for forming a finite number of features, or conjuncts of at most n primitive attributes. Illustrates this procedure by examining qualitative information represented in a…

  3. Fault diagnosis model for power transformers based on information fusion

    NASA Astrophysics Data System (ADS)

    Dong, Ming; Yan, Zhang; Yang, Li; Judd, Martin D.

    2005-07-01

    Methods used to assess the insulation status of power transformers before they deteriorate to a critical state include dissolved gas analysis (DGA), partial discharge (PD) detection and transfer function techniques, etc. All of these approaches require experience in order to correctly interpret the observations. Artificial intelligence (AI) is increasingly used to improve interpretation of the individual datasets. However, a satisfactory diagnosis may not be obtained if only one technique is used. For example, the exact location of PD cannot be predicted if only DGA is performed. Moreover, using diverse methods may result in different diagnosis solutions, a problem that is addressed in this paper through the introduction of a fuzzy information fusion model. An inference scheme is proposed that yields consistent conclusions and manages the inherent uncertainty in the various methods. With the aid of information fusion, a framework is established that allows different diagnostic tools to be combined in a systematic way. The application of the information fusion technique to insulation diagnostics of transformers is shown to be promising by means of examples.

  4. Cloud-based adaptive exon prediction for DNA analysis

    PubMed Central

    Putluri, Srinivasareddy; Fathima, Shaik Yasmeen

    2018-01-01

    Cloud computing offers significant research and economic benefits to healthcare organisations, and cloud services provide a safe place for storing and managing large amounts of sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by the use of cloud services. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, in which genomic sequence information is stored and accessed for processing. Accurate identification of exon regions in a DNA sequence is a key task in bioinformatics, aiding disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques, and adaptive signal processing techniques were found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using the variable normalised least mean square algorithm and its maximum normalised variants to reduce computational complexity. Finally, the various AEPs are evaluated in terms of sensitivity, specificity and precision on standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database. PMID:29515813
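
    As a sketch of the adaptive filtering idea behind such predictors, the following implements a generic normalised LMS (NLMS) update; the numeric mapping of bases, the filter length, and the step size are assumptions, and the authors' variable/maximum normalised variants differ in the details of the update.

    ```python
    # Generic NLMS adaptive filter, the building block of adaptive exon predictors.
    import numpy as np

    def nlms(x, d, taps=10, mu=0.5, eps=1e-8):
        """Adapt a FIR filter w so that its output tracks the desired signal d."""
        w = np.zeros(taps)
        y = np.zeros(len(x))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]              # most recent samples first
            y[n] = w @ u
            e = d[n] - y[n]                      # prediction error
            w += mu * e * u / (u @ u + eps)      # normalised update
        return y, w

    # Toy usage: binary indicator sequence for one base of a DNA string.
    seq = "ATGCGGATGCATGCAAATGC" * 20
    x = np.array([1.0 if b == "G" else 0.0 for b in seq])
    y, w = nlms(x, d=x)                          # one-step-ahead self-prediction
    print("final filter taps:", np.round(w, 2))
    ```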

  5. Transmission ultrasonography. [time delay spectrometry for soft tissue transmission imaging

    NASA Technical Reports Server (NTRS)

    Heyser, R. C.; Le Croissette, D. H.

    1973-01-01

    Review of the results of the application of an advanced signal-processing technique, called time delay spectrometry, in obtaining soft tissue transmission images by transmission ultrasonography, both in vivo and in vitro. The presented results include amplitude ultrasound pictures and phase ultrasound pictures obtained by this technique. While amplitude ultrasonographs of tissue are closely analogous to X-ray pictures in that differential absorption is imaged, phase ultrasonographs represent an entirely new source of information based on differential time of propagation. Thus, a new source of information is made available for detailed analysis.

  6. Retrieval of the atmospheric compounds using a spectral optical thickness information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ioltukhovski, A.A.

    A spectral inversion technique for retrieval of atmospheric gas and aerosol contents is proposed. This technique is based upon the preliminary measurement or retrieval of the spectral optical thickness. The existence of a priori information about the spectral cross sections of some atmospheric components makes it possible to retrieve the relative contents of these components in the atmosphere. A method of smooth filtration makes it possible to estimate the contents of atmospheric aerosols with known cross sections and to filter out other aerosols; this is done independently of their relative contribution to the optical thickness.
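
    Under the stated assumptions (known spectral cross sections and linear mixing), the core retrieval step can be illustrated as non-negative least squares on tau(lambda) = sum_i c_i * sigma_i(lambda); all spectra and component shapes below are synthetic stand-ins.

    ```python
    # Retrieve relative component contents from a measured optical thickness spectrum.
    import numpy as np
    from scipy.optimize import nnls

    wl = np.linspace(0.4, 1.0, 50)                    # wavelengths, micrometres
    sigma = np.column_stack([
        np.exp(-((wl - 0.45) / 0.05) ** 2),           # hypothetical gas absorption band
        wl ** -4.0,                                   # Rayleigh-like component
        wl ** -1.3,                                   # hypothetical aerosol extinction
    ])
    c_true = np.array([0.2, 0.05, 0.5])               # "true" relative contents
    tau = sigma @ c_true + 1e-3 * np.random.default_rng(1).normal(size=wl.size)

    c_est, _ = nnls(sigma, tau)                       # enforce non-negative contents
    print("retrieved contents:", np.round(c_est, 3))
    ```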

  7. Dual-wavelength phase-shifting digital holography selectively extracting wavelength information from wavelength-multiplexed holograms.

    PubMed

    Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro

    2015-06-15

    Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.

  8. Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses

    PubMed Central

    Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.

    2017-01-01

    Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude effects, respectively, in resting state connectivity analyses. We illustrate these concepts using realistic simulated resting state FMRI data and in vivo data acquired in healthy subjects and patients with bipolar disorder and schizophrenia. PMID:28348512
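
    A minimal numpy sketch of the two regression stages follows; the shapes and data are synthetic, and a real pipeline adds demeaning, nuisance handling, and group-level statistics. The commented normalisation line is where the shape-versus-amplitude choice discussed above is made.

    ```python
    # Dual regression, stage by stage, for one subject.
    import numpy as np

    rng = np.random.default_rng(0)
    T, V, C = 200, 5000, 10                    # timepoints, voxels, components
    data = rng.normal(size=(T, V))             # subject FMRI data (T x V), synthetic
    group_maps = rng.normal(size=(C, V))       # group ICA spatial maps (C x V)

    # Stage 1: spatial regression gives per-subject timecourses (T x C).
    tcs, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)
    tcs = tcs.T
    # Keeping raw timecourses retains network amplitude; normalising them
    # (tcs /= tcs.std(axis=0)) restricts stage 2 to spatial "shape" only.

    # Stage 2: temporal regression gives subject-specific maps (C x V).
    subj_maps, *_ = np.linalg.lstsq(tcs, data, rcond=None)
    print(tcs.shape, subj_maps.shape)          # (200, 10) (10, 5000)
    ```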

  9. Nursing Information Systems Requirements: A Milestone for Patient Outcome and Patient Safety Improvement.

    PubMed

    Farzandipour, Mehrdad; Meidani, Zahra; Riazi, Hossein; Sadeqi Jabali, Monireh

    2016-12-01

    Considering the integral role of understanding users' requirements in information system success, this research aimed to determine functional requirements of nursing information systems through a national survey. The Delphi technique was applied to conduct this study through three phases: a focus group, a modified Delphi technique, and a classic Delphi technique. A cross-sectional study was conducted to evaluate the proposed requirements within 15 general hospitals in Iran. Forty-three of 76 approved requirements were clinical, and 33 were administrative. Nurses' mean agreements for clinical requirements were higher than those for administrative requirements; minimum and maximum means of clinical requirements were 3.3 and 3.88, respectively. Minimum and maximum means of administrative requirements were 3.1 and 3.47, respectively. Research findings indicated that those information system requirements that support nurses in doing tasks including direct care, medicine prescription, patient treatment management, and patient safety have been the target of special attention. As nurses' requirements deal directly with patient outcome and patient safety, nursing information systems requirements should address not only automation but also nurses' tasks and work processes based on work analysis.

  10. Usability-driven pruning of large ontologies: the case of SNOMED CT.

    PubMed

    López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-06-01

    To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
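
    To make the two-stage idea concrete, the hedged sketch below pairs a toy is-a graph traversal with corpus-frequency filtering; the hierarchy, the counts, and the threshold are invented stand-ins for SNOMED CT and MEDLINE.

    ```python
    # Upward-closure subset extraction followed by frequency-based pruning.
    from collections import deque

    parents = {                                   # child -> is-a parents (toy data)
        "myocardial_infarction": ["heart_disease"],
        "heart_disease": ["disorder"],
        "disorder": [],
    }
    corpus_freq = {"myocardial_infarction": 120, "heart_disease": 800, "disorder": 2}

    def upward_closure(signature):
        """All concepts reachable from the signature via is-a edges."""
        seen, queue = set(signature), deque(signature)
        while queue:
            for p in parents.get(queue.popleft(), []):
                if p not in seen:
                    seen.add(p)
                    queue.append(p)
        return seen

    subset = upward_closure({"myocardial_infarction"})
    pruned = {c for c in subset if corpus_freq.get(c, 0) >= 10}
    print(sorted(pruned))                         # rarely-mentioned 'disorder' is dropped
    ```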

  11. Developing Visualization Techniques for Semantics-based Information Networks

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Hall, David R.

    2003-01-01

    Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.

  12. Exploration of Deaf People’s Health Information Sources and Techniques for Information Delivery in Cape Town: A Qualitative Study for the Design and Development of a Mobile Health App

    PubMed Central

    Glaser, Meryl; Tucker, William David; Diehl, Jan Carel

    2016-01-01

    Background Many cultural and linguistic Deaf people in South Africa face disparity when accessing health information because of social and language barriers. The number of certified South African Sign Language interpreters (SASLIs) is also insufficient to meet the demand of the Deaf population in the country. Our research team, in collaboration with the Deaf communities in Cape Town, devised a mobile health app called SignSupport to bridge the communication gaps in health care contexts. We consequently plan to extend our work with a Health Knowledge Transfer System (HKTS) to provide Deaf people with accessible, understandable, and accurate health information. We conducted an explorative study to prepare the groundwork for the design and development of the system. Objectives To investigate the current modes of health information distributed to Deaf people in Cape Town, identify the health information sources Deaf people prefer and their reasons, and define effective techniques for delivering understandable information to generate the groundwork for the mobile health app development with and for Deaf people. Methods A qualitative methodology using semistructured interviews with sensitizing tools was used in a community-based codesign setting. A total of 23 Deaf people and 10 health professionals participated in this study. Inductive and deductive coding was used for the analysis. Results Deaf people currently have access to 4 modes of health information distribution through: Deaf and other relevant organizations, hearing health professionals, personal interactions, and the mass media. Their preferred and accessible sources are those delivering information in signed language and with communication techniques that match Deaf people’s communication needs. Accessible and accurate health information can be delivered to Deaf people by 3 effective techniques: using signed language including its dialects, through health drama with its combined techniques, and accompanying the information with pictures in combination with simple text descriptions. Conclusions We can apply the knowledge gained from this exploration to build the groundwork of the mobile health information system. We see an opportunity to design an HKTS to assist the information delivery during the patient-health professional interactions in primary health care settings. Deaf people want to understand the information relevant to their diagnosed disease and its self-management. The 3 identified effective techniques will be applied to deliver health information through the mobile health app. PMID:27836819

  13. Wildlands management for wildlife viewing

    Treesearch

    Tamsie Cooper; William W. Shaw

    1979-01-01

    Wildlife in general and wildlife viewing in particular play significant roles in enriching the aesthetic experiences of visitors to wildlands. Effective management of wildlife for viewing must be based on human as well as biological information. The integration of the principles and techniques developed for game management with information on the human dimensions of...

  14. Separating Item and Order Information through Process Dissociation

    ERIC Educational Resources Information Center

    Nairne, James S.; Kelley, Matthew R.

    2004-01-01

    In the present paper, we develop and apply a technique, based on the logic of process dissociation, for obtaining numerical estimates of item and order information. Certain variables, such as phonological similarity, are widely believed to produce dissociative effects on item and order retention. However, such beliefs rest on the questionable…
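
    For readers unfamiliar with the logic, the following applies the standard process-dissociation equations to hypothetical inclusion- and exclusion-condition scores; how those conditions map onto item and order tests in the paper is not reproduced here.

    ```python
    # Standard process-dissociation estimates from inclusion/exclusion performance.
    def process_dissociation(p_inclusion, p_exclusion):
        """Estimate controlled (R) and automatic (A) contributions."""
        R = p_inclusion - p_exclusion                     # controlled estimate
        A = p_exclusion / (1.0 - R) if R < 1 else float("nan")
        return R, A

    R, A = process_dissociation(p_inclusion=0.75, p_exclusion=0.30)
    print(f"controlled R = {R:.2f}, automatic A = {A:.2f}")   # R = 0.45, A = 0.55
    ```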

  15. [Modern bacterial taxonomy: techniques review--application to bacteria that nodulate leguminous plants (BNL)].

    PubMed

    Zakhia, Frédéric; de Lajudie, Philippe

    2006-03-01

    Taxonomy is the science that studies the relationships between organisms. It comprises classification, nomenclature, and identification. Modern bacterial taxonomy is polyphasic, meaning that it is based on several molecular techniques, each retrieving information at a different cellular level (proteins, fatty acids, DNA...). The results obtained are combined and analysed to reach a "consensus taxonomy" for a microorganism. Until 1970, only a small number of classification techniques were available to microbiologists, mainly phenotypic characterization (for example, the ability of a Rhizobium strain to nodulate a given legume species). With the development of characterization techniques based on the polymerase chain reaction, bacterial taxonomy has undergone great changes. In particular, the classification of the legume nodulating bacteria has been repeatedly modified over the last 20 years. We present here a review of the molecular techniques currently used in bacterial characterization, with examples of the application of these techniques to the study of legume nodulating bacteria.

  16. Exploring actinide materials through synchrotron radiation techniques.

    PubMed

    Shi, Wei-Qun; Yuan, Li-Yong; Wang, Cong-Zhi; Wang, Lin; Mei, Lei; Xiao, Cheng-Liang; Zhang, Li; Li, Zi-Jie; Zhao, Yu-Liang; Chai, Zhi-Fang

    2014-12-10

    Synchrotron radiation (SR) based techniques have been utilized with increasing frequency in the past decade to explore the brilliant and challenging science of actinide-based materials. This trend is partially driven by the basic need for multi-scale actinide speciation and bonding information, and also by the realistic needs of nuclear energy research. In this review, recent research progress on actinide-related materials studied by means of various SR techniques is selectively highlighted and summarized, with emphasis on X-ray absorption spectroscopy and X-ray diffraction and scattering, which are powerful tools for characterizing actinide materials. In addition, advanced SR techniques for exploring future advanced nuclear fuel cycles dealing with actinides are illustrated as well.

  17. Trajectory-Based Performance Assessment for Aviation Weather Information

    NASA Technical Reports Server (NTRS)

    Vigeant-Langlois, Laurence; Hansman, R. John, Jr.

    2003-01-01

    Based on an analysis of aviation decision-makers' time-related weather information needs, an abstraction of the aviation weather decision task was developed that involves 4-D intersection testing between aircraft trajectory hypertubes and hazardous weather hypervolumes. The framework builds on the hypothesis that hazardous meteorological fields can be simplified using discrete boundaries of surrogate threat attributes. The abstractions developed in the framework may be useful in studying how to improve the performance of weather forecasts from the trajectory-centric perspective, as well as for developing useful visualization techniques for weather information.
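
    If both the trajectory hypertube and the weather hypervolume are simplified to axis-aligned 4-D boxes, the intersection test reduces to per-dimension interval overlap, as in this toy sketch; the geometry and units are illustrative, and real hypertubes need finer representations.

    ```python
    # 4-D axis-aligned intersection test: (x, y, z, t) intervals must all overlap.
    def boxes_intersect(a, b):
        """a, b: ((xmin, xmax), (ymin, ymax), (zmin, zmax), (tmin, tmax))."""
        return all(lo1 <= hi2 and lo2 <= hi1
                   for (lo1, hi1), (lo2, hi2) in zip(a, b))

    trajectory_segment = ((0, 50), (0, 40), (30, 35), (1200, 1500))
    storm_cell         = ((40, 90), (35, 80), (0, 45), (1400, 2000))
    print(boxes_intersect(trajectory_segment, storm_cell))  # True: overlap in all 4 dims
    ```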

  18. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, George

    1993-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  19. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, Stanislav

    1992-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  20. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    PubMed Central

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-01-01

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
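
    As a generic illustration (not the paper's specific initialization scheme) of why a projective sensor needs more than one measurement to recover depth, the sketch below triangulates a landmark from two bearing-only observations taken from known positions, reduced to 2-D for brevity.

    ```python
    # Least-squares intersection of two bearing rays p_i + t_i * d_i.
    import numpy as np

    def triangulate(p1, theta1, p2, theta2):
        d1 = np.array([np.cos(theta1), np.sin(theta1)])
        d2 = np.array([np.cos(theta2), np.sin(theta2)])
        A = np.column_stack([d1, -d2])
        t, *_ = np.linalg.lstsq(A, np.asarray(p2, float) - np.asarray(p1, float),
                                rcond=None)
        return np.asarray(p1, float) + t[0] * d1

    landmark = triangulate(p1=(0, 0), theta1=np.deg2rad(45),
                           p2=(2, 0), theta2=np.deg2rad(135))
    print(np.round(landmark, 3))      # [1. 1.]: depth emerges only with parallax
    ```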

  1. Protecting against cyber threats in networked information systems

    NASA Astrophysics Data System (ADS)

    Ertoz, Levent; Lazarevic, Aleksandar; Eilertson, Eric; Tan, Pang-Ning; Dokas, Paul; Kumar, Vipin; Srivastava, Jaideep

    2003-07-01

    This paper provides an overview of our efforts in detecting cyber attacks in networked information systems. Traditional signature-based techniques for detecting cyber attacks can only detect previously known intrusions and are useless against novel attacks and emerging threats. Our current research at the University of Minnesota is focused on developing data mining techniques to automatically detect attacks against computer networks and systems. This research is being conducted as a part of the MINDS (Minnesota Intrusion Detection System) project at the University of Minnesota. Experimental results on live network traffic at the University of Minnesota indicate that the new techniques show great promise in detecting novel intrusions. In particular, during the past few months our techniques have been successful in automatically identifying several novel intrusions that could not be detected using state-of-the-art tools such as SNORT.

  2. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    PubMed

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  3. Extracting Information about the Rotator Cuff from Magnetic Resonance Images Using Deterministic and Random Techniques

    PubMed Central

    De Los Ríos, F. A.; Paluszny, M.

    2015-01-01

    We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281

  4. Dosimetric changes with computed tomography automatic tube-current modulation techniques.

    PubMed

    Spampinato, Sofia; Gueli, Anna Maria; Milone, Pietro; Raffaele, Luigi Angelo

    2018-04-06

    The study aims to verify dose changes for a computed tomography automatic tube-current modulation (ATCM) technique. For this purpose, an anthropomorphic phantom and Gafchromic® XR-QA2 films were used. Radiochromic films were cut according to the shape of two thorax regions. The ATCM algorithm is based on a noise index (NI), and three exam protocols with different NI were chosen, one of which served as a reference. Results were compared with dose values displayed by the console and with Poisson statistics. The information obtained with the radiochromic films was normalized with respect to the reference NI value to compare dose percentage variations. Results showed that, on average, the information reported by the CT console and the calculated values coincide with the measurements. The study allowed verification of the dose information reported by the CT console for an ATCM technique. Although this evaluation represents an estimate, the method can be a starting point for further studies.

  5. Aimulet: a multilingual spatial optical voice card terminal for location and direction based information services

    NASA Astrophysics Data System (ADS)

    Itoh, Hideo; Lin, Xin; Kaji, Ryosaku; Niwa, Tatsuya; Nakamura, Yoshiyuki; Nishimura, Takuichi

    2006-01-01

    The National Institute of Advanced Industrial Science and Technology (AIST) in Japan has been developing Aimulet, a compact, low-power information terminal for personal information services. The conventional Aimulet, called Aimulet ver. 1 or CoBIT, is a battery-free information service device sensitive to location and direction. However, Aimulet ver. 1 faces two issues: the multiplexing and demultiplexing of multiple contents, and operation in direct sunlight. The former issue is solved by a wavelength-multiplexing technique using LED emitters of different wavelengths together with dielectric optical filters. The latter is solved by new micro-spherical solar cells with a visible-light-eliminating optical filter and a new light-irradiation design. These techniques were applied at EXPO 2005 in Aichi, Japan, and demonstrated to the public. The former technique is applied in Aimulet GH, used in the Orange Hall of the Global House, a science museum exhibiting the fossil of a frozen mammoth. The latter technique is applied in Aimulet LA, used in Laurie Anderson's WALK project in the Japanese Garden.

  6. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

  7. Consideration of techniques to mitigate the unauthorized 3D printing production of keys

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Kerlin, Scott

    2016-05-01

    The illicit production of 3D printed keys based on remote-sensed imagery is problematic as it allows a would-be intruder to access a secured facility without the attack attempt being as obviously detectable as conventional techniques. This paper considers the problem from multiple perspectives. First, it looks at different attack types and considers the prospective attack from a digital information perspective. Second, based on this, techniques for securing keys are considered. Third, the design of keys is considered from the perspective of making them more difficult to duplicate using visible light sensing and 3D printing. Policy and legal considerations are discussed.

  8. A survey of light-scattering techniques used in the remote monitoring of atmospheric aerosols

    NASA Technical Reports Server (NTRS)

    Deirmendjian, D.

    1980-01-01

    A critical survey of the literature on the use of light-scattering mechanisms in the remote monitoring of atmospheric aerosols, their geographical and spatial distribution, and temporal variations was undertaken to aid in the choice of future operational systems, both ground based and air or space borne. An evaluation, mainly qualitative and subjective, of various techniques and systems is carried out. No single system is found to be adequate for operational purposes. A combination of earth surface and space-borne systems based mainly on passive techniques involving solar radiation with active (lidar) systems to provide auxiliary or backup information is tentatively recommended.

  9. What's so different about Lacan's approach to psychoanalysis?

    PubMed

    Fink, Bruce

    2011-12-01

    Clinical work based on Lacanian principles is rarely compared in the psychoanalytic literature with that based on other principles. The author attempts to highlight a few important theoretical differences regarding language, desire, affect, and time between a Lacanian approach and certain others that lead to differences in focus and technique, related, for example, to interpretation, scansion, and countertransference. Lacanian techniques are illustrated with brief clinical vignettes. In the interest of confidentiality, identifying information and certain circumstances have been changed or omitted in the material presented.

  10. Frequency comb-based time transfer over a 159 km long installed fiber network

    NASA Astrophysics Data System (ADS)

    Lessing, M.; Margolis, H. S.; Brown, C. T. A.; Marra, G.

    2017-05-01

    We demonstrate a frequency comb-based time transfer technique on a 159 km long installed fiber link. Timing information is superimposed onto the optical pulse train of an ITU-channel-filtered mode-locked laser using an intensity modulation scheme. The environmentally induced optical path length fluctuations are compensated using a round-trip phase noise cancellation technique. When the fiber link is stabilized, a time deviation of 300 fs at 5 s and an accuracy at the 100 ps level are achieved.

  11. Use of Formative Classroom Assessment Techniques in a Project Management Course

    ERIC Educational Resources Information Center

    Purcell, Bernice M.

    2014-01-01

    Formative assessment is considered to be an evaluation technique that informs the instructor of the level of student learning, giving evidence when it may be necessary for the instructor to make a change in delivery based upon the results. Several theories of formative assessment exist, all which propound the importance of feedback to the student.…

  12. Weighted hybrid technique for recommender system

    NASA Astrophysics Data System (ADS)

    Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.

    2017-12-01

    Recommender systems have become very popular and play an important role in information systems and webpages nowadays. A recommender system tries to predict which items a user may like based on the user's activity on the system. Familiar techniques for building a recommender system include content-based filtering and collaborative filtering. Content-based filtering does not involve human opinions in making the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items that have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of the other, both approaches can be combined in what is known as a hybrid technique. The hybrid technique used in this work is a weighted technique in which the prediction score is a linear combination of the scores produced by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare performance when both approaches are combined and when each approach works alone. Three experiments were conducted in this work, combining both techniques with different parameters. The results show that the weighted hybrid technique does not substantially boost performance, but it provides prediction scores for unrated movies that could not be recommended using collaborative filtering alone.
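
    A minimal sketch of the weighted combination follows; the weight alpha and the scores are illustrative, and the fallback to the content-based score reflects the cold-start point made above for items no user has rated.

    ```python
    # Weighted hybrid of content-based (cb) and collaborative filtering (cf) scores.
    def hybrid_score(cb_score, cf_score, alpha=0.4):
        """alpha weights the content-based term; (1 - alpha) weights the CF term."""
        if cf_score is None:                  # item never rated by any user:
            return cb_score                   # only content-based can score it
        return alpha * cb_score + (1 - alpha) * cf_score

    print(hybrid_score(cb_score=0.8, cf_score=0.6))    # 0.68
    print(hybrid_score(cb_score=0.8, cf_score=None))   # 0.8 (cold-start item)
    ```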

  13. The Development and Evaluation of a Computer-Based System for Managing the Design and Pilot-Testing of Interactive Videodisc Programs. Training and Development Research Center, Project Number Forty-Three.

    ERIC Educational Resources Information Center

    Sayre, Scott Alan

    The purpose of this study was to develop and validate a computer-based system that would allow interactive video developers to integrate and manage the design components prior to production. These components of an interactive video (IVD) program include visual information in a variety of formats, audio information, and instructional techniques,…

  14. Private Information Retrieval Techniques for Enabling Location Privacy in Location-Based Services

    NASA Astrophysics Data System (ADS)

    Khoshgozaran, Ali; Shahabi, Cyrus

    The ubiquity of smartphones and other location-aware hand-held devices has resulted in a dramatic increase in popularity of location-based services (LBS) tailored to user locations. The comfort of LBS comes with a privacy cost. Various distressing privacy violations caused by sharing sensitive location information with potentially malicious services have highlighted the importance of location privacy research aiming to protect user privacy while interacting with LBS.
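
    As one concrete example of a PIR technique, the sketch below implements the classic two-server XOR-based scheme, in which each server sees only a uniformly random index set yet the XOR of their one-bit answers recovers the requested bit; it is illustrative and not necessarily the construction emphasized in this chapter.

    ```python
    # Two-server XOR-based PIR over a replicated bit database.
    import secrets

    db = [1, 0, 1, 1, 0, 0, 1, 0]                   # replicated on both servers

    def server_answer(database, index_set):
        ans = 0
        for j in index_set:                         # XOR of the selected bits
            ans ^= database[j]
        return ans

    def pir_query(i, n):
        s1 = {j for j in range(n) if secrets.randbelow(2)}  # uniformly random subset
        s2 = s1 ^ {i}                               # symmetric difference with {i}
        return s1, s2

    i = 5                                           # the index the client wants
    s1, s2 = pir_query(i, len(db))
    bit = server_answer(db, s1) ^ server_answer(db, s2)
    print(bit == db[i])                             # True, yet neither set reveals i
    ```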

  15. Pilot study: evaluation of the use of the convergent interview technique in understanding the perception of surgical design and simulation.

    PubMed

    Logan, Heather; Wolfaardt, Johan; Boulanger, Pierre; Hodgetts, Bill; Seikaly, Hadi

    2013-06-19

    It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Fifteen important issues were extracted from the convergent interviews. In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field.

  16. Pilot study: evaluation of the use of the convergent interview technique in understanding the perception of surgical design and simulation

    PubMed Central

    2013-01-01

    Background It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Methods Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Results Fifteen important issues were extracted from the convergent interviews. Conclusion In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field. PMID:23782771

  17. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    PubMed

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain large datasets of glycans. Here we present a database named the Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed as a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the era of α1,3-galactosyltransferase gene-knockout pigs. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  18. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Many researchers have recently investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  19. Extraction of decision rules via imprecise probabilities

    NASA Astrophysics Data System (ADS)

    Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.

    2017-05-01

    Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method for extracting knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
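
    For the parametric case, credal split criteria are commonly built on the imprecise Dirichlet model (IDM), whose interval class probabilities are sketched below; the counts and the hyperparameter s = 1 are illustrative, and a credal criterion would then typically evaluate the maximum-entropy distribution within the resulting credal set rather than a single point estimate.

    ```python
    # Interval class probabilities under the imprecise Dirichlet model (IDM).
    def idm_intervals(counts, s=1.0):
        """Lower/upper probabilities [n/(N+s), (n+s)/(N+s)] per class."""
        N = sum(counts.values())
        return {c: (n / (N + s), (n + s) / (N + s)) for c, n in counts.items()}

    counts = {"slight": 40, "serious": 8, "fatal": 2}   # hypothetical node counts
    for c, (lo, hi) in idm_intervals(counts).items():
        print(f"{c}: [{lo:.3f}, {hi:.3f}]")
    ```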

  20. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of heart valve pathology. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
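
    The envelope computation at the core of such methods can be sketched as the frame-averaged, normalised Shannon energy; the framing parameters below are illustrative, the toy signal stands in for a real phonocardiogram, and the DWT stage is omitted.

    ```python
    # Normalised average Shannon energy envelope of a heart sound signal.
    import numpy as np

    def shannon_envelope(x, frame=20):
        """Average Shannon energy, -x^2 * log(x^2), over short frames."""
        x = x / (np.max(np.abs(x)) + 1e-12)            # normalise to [-1, 1]
        e = -x**2 * np.log(x**2 + 1e-12)               # Shannon energy per sample
        n = len(e) // frame
        env = e[:n * frame].reshape(n, frame).mean(axis=1)
        return (env - env.mean()) / (env.std() + 1e-12)

    t = np.linspace(0, 1, 2000)
    hs = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.3) / 0.02) ** 2)  # toy burst
    print(shannon_envelope(hs).argmax())               # frame index of the burst
    ```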

  1. Feedback mechanism for smart nozzles and nebulizers

    DOEpatents

    Montaser, Akbar [Potomac, MD; Jorabchi, Kaveh [Arlington, VA; Kahen, Kaveh [Kleinburg, CA

    2009-01-27

    Nozzles and nebulizers able to produce aerosol with optimum and reproducible quality based on feedback information obtained using laser imaging techniques. Two laser-based imaging techniques, based on particle image velocimetry (PIV) and optical patternation, map and contrast size and velocity distributions for indirect and direct pneumatic nebulizations in plasma spectrometry. Two pulses from a thin laser sheet with a known time difference illuminate the droplet flow field. A charge-coupled device (CCD) captures the scattering of laser light from the droplets, providing two instantaneous particle images. Pointwise cross-correlation of the corresponding images yields a two-dimensional map of the aerosol velocity field. For droplet size distribution studies, the solution is doped with a fluorescent dye, and both laser-induced fluorescence (LIF) and Mie scattering images are captured simultaneously by two CCDs with the same field of view. The ratio of the LIF and Mie images provides relative droplet size information, which is then scaled by a point-calibration method using a phase Doppler particle analyzer.

  2. Development and evaluation of an automatic labeling technique for spring small grains

    NASA Technical Reports Server (NTRS)

    Crist, E. P.; Malila, W. A. (Principal Investigator)

    1981-01-01

    A labeling technique is described which seeks to associate a sampling entity with a particular crop or crop group based on similarity of growing season and temporal-spectral patterns of development. Human analysts provide contextual information, after which labeling decisions are made automatically. Results of a test of the technique on a large, multi-year data set are reported. Grain labeling accuracies are similar to those achieved by human analysis techniques, while non-grain accuracies are lower. Recommendations for improvements and implications of the test results are discussed.

  3. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palley, M.A.

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.

  4. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.

  5. A PLM-based automated inspection planning system for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Zhao, Haibin; Wang, Junying; Wang, Boxiong; Wang, Jianmei; Chen, Huacheng

    2006-11-01

    With the rapid progress of Product Lifecycle Management (PLM) in the manufacturing industry, the automatic generation of product inspection plans and their integration with other activities in the product lifecycle play important roles in quality control. However, the techniques for these purposes lag behind CAD/CAM techniques. Therefore, an automatic inspection planning system for the Coordinate Measuring Machine (CMM) was developed to improve the automation of measuring, based on the integration of the inspection system in PLM. Feature information representation is achieved based on a central PLM database; the measuring strategy is optimized through the integration of multiple sensors; a reasonable number and distribution of inspection points are calculated and designed with the guidance of statistical theory and a synthesis distribution algorithm; and a collision avoidance method is proposed to generate collision-free inspection paths with high efficiency. Information mapping is performed between Neutral Interchange Files (NIFs), such as STEP, DML, DMIS, XML, etc., to realize information integration with other activities in the product lifecycle such as design, manufacturing and inspection execution. Simulation was carried out to demonstrate the feasibility of the proposed system. As a result, the inspection process becomes simpler and good results can be obtained based on the integration in PLM.

  6. Intelligent agents for adaptive security market surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Li, Xin; Xu, Baoxun; Yan, Jiaqi; Wang, Huaiqing

    2017-05-01

    Market surveillance systems have increasingly gained in usage for monitoring trading activities in stock markets to maintain market integrity. Existing systems primarily focus on the numerical analysis of market activity data and generally ignore textual information. To fulfil the requirements of information-based surveillance, a multi-agent-based architecture that uses agent intercommunication and incremental learning mechanisms is proposed to provide a flexible and adaptive inspection process. A prototype system is implemented using the techniques of text mining and rule-based reasoning, among others. Based on experiments in the scalping surveillance scenario, the system can identify target information evidence up to 87.50% of the time and automatically identify 70.59% of cases depending on the constraints on the available information sources. The results of this study indicate that the proposed information surveillance system is effective. This study thus contributes to the market surveillance literature and has significant practical implications.

  7. Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.

    PubMed

    Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu

    2017-05-23

    This paper presents an investigation of nature-inspired intelligent computing and its corresponding application to visual information processing systems for viticulture. The paper has three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms to real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed based on a nature-inspired clonal selection algorithm (CSA) which is able to detect the arcs in the berry images with precision, based on a fitness model. The arcs detected are then extended to perform multiple-arc and ring detection for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms like the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71 compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD) respectively.

  8. Parallel Mutual Information Based Construction of Genome-Scale Networks on the Intel® Xeon Phi™ Coprocessor.

    PubMed

    Misra, Sanchit; Pamnany, Kiran; Aluru, Srinivas

    2015-01-01

    Construction of whole-genome networks from large-scale gene expression data is an important problem in systems biology. While several techniques have been developed, most cannot handle network reconstruction at the whole-genome scale, and the few that can, require large clusters. In this paper, we present a solution on the Intel Xeon Phi coprocessor, taking advantage of its multi-level parallelism including many x86-based cores, multiple threads per core, and vector processing units. We also present a solution on the Intel® Xeon® processor. Our solution is based on TINGe, a fast parallel network reconstruction technique that uses mutual information and permutation testing for assessing statistical significance. We demonstrate the first ever inference of a plant whole genome regulatory network on a single chip by constructing a 15,575 gene network of the plant Arabidopsis thaliana from 3,137 microarray experiments in only 22 minutes. In addition, our optimization for parallelizing mutual information computation on the Intel Xeon Phi coprocessor holds out lessons that are applicable to other domains.
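
    At the heart of such reconstruction is a pairwise mutual information estimate between expression profiles; the sketch below uses a simple 2-D histogram estimator on synthetic vectors, omitting both the permutation testing mentioned above and the parallelization that is the paper's actual contribution.

    ```python
    # Histogram estimate of mutual information I(X;Y) in nats.
    import numpy as np

    def mutual_information(x, y, bins=10):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()                               # joint distribution
        px = pxy.sum(axis=1, keepdims=True)            # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)
    g1 = rng.normal(size=3137)                         # one gene across experiments
    g2 = 0.8 * g1 + 0.2 * rng.normal(size=3137)        # a co-expressed gene
    print(round(mutual_information(g1, g2), 3))        # clearly positive
    ```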

  9. Research on rebuilding the data information environment for aeronautical manufacturing enterprise

    NASA Astrophysics Data System (ADS)

    Feng, Xilan; Jiang, Zhiqiang; Zong, Xuewen; Shi, Jinfa

    2005-12-01

    The data environment of an integrated information system and basic standards for information resource management are key to effective remote collaborative design and manufacturing of complex products. A study project on rebuilding the data information environment for aeronautical manufacturing enterprises (Aero-ME) is put forward. Firstly, the data environment of the integrated information system, the basic standards for information resource management, the basic corporate information infrastructure, the development of the integrated information system, and information education are discussed in depth, based on the practical information resource and technology requirements of contemporary Aero-MEs. Then, an idea and method for rebuilding the corporate data environment based on I-CASE are put forward, together with an effective method and implementation approach for manufacturing enterprise informatization. This provides the foundation and assurance for rebuilding the corporate data environment and standardizing information resource management in the development of Aero-ME information engineering.

  10. A Study on Integrated Community Based Flood Mitigation with Remote Sensing Technique in Kota Bharu, Kelantan

    NASA Astrophysics Data System (ADS)

    'Ainullotfi, A. A.; Ibrahim, A. L.; Masron, T.

    2014-02-01

    This study establishes a community-based flood management system integrated with remote sensing techniques. To understand local knowledge, the demographics of the local society are obtained using a survey approach. The local authorities are approached first to obtain information about the society in the study areas, such as the population, gender, and the distribution of settlements. Information about age, religion, ethnicity, occupation, and years of experience facing floods in the area is recorded to understand how local knowledge emerges. Geographic data such as rainfall, land use, land elevation, and river discharge are then obtained and used to establish a hydrological model of flooding in the study area. Analyses of the survey data are made to understand the patterns of society and how people react to floods, while analysis of the geographic data is used to assess the water extent and the damage done by floods. The final result of this research is a flood mitigation method with a community-based framework for the state of Kelantan. With flood mitigation that combines the community's understanding of floods with remote sensing techniques for forecasting heavy rainfall and flood occurrence, it is hoped that casualties and damage to society and infrastructure in the study area can be reduced.

  11. Comparison of Clustering Techniques for Residential Energy Behavior using Smart Meter Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ling; Lee, Doris; Sim, Alex

    Current practice in whole time series clustering of residential meter data focuses on aggregated or subsampled load data at the customer level, which ignores day-to-day differences within customers. This information is critical to determine each customer's suitability to various demand side management strategies that support intelligent power grids and smart energy management. Clustering daily load shapes provides fine-grained information on customer attributes and sources of variation for subsequent models and customer segmentation. In this paper, we apply 11 clustering methods to daily residential meter data. We evaluate their parameter settings and suitability based on 6 generic performance metrics and post-checking of resulting clusters. Finally, we recommend suitable techniques and parameters based on the goal of discovering diverse daily load patterns among residential customers. To the authors' knowledge, this paper is the first robust comparative review of clustering techniques applied to daily residential load shape time series in the power systems literature.
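
    A minimal sketch of the kind of comparison described: synthetic daily load shapes, with k-means plus silhouette scoring standing in for the 11 methods and 6 metrics actually evaluated:

        # Cluster daily load shapes (one row per customer-day of 24 hourly readings).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        hours = np.arange(24)
        morning = np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)    # morning-peak shape
        evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)   # evening-peak shape
        days = np.vstack([m + 0.05 * rng.normal(size=24)
                          for m in [morning] * 200 + [evening] * 200])
        days /= days.sum(axis=1, keepdims=True)  # normalize: compare shape, not volume

        for k in range(2, 6):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(days)
            print(k, round(silhouette_score(days, labels), 3))  # pick k with the best score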

  12. Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data

    DOE PAGES

    Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.

    2016-08-09

    In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin film of lead zirconate titanate.
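
    As a sketch of the unsupervised step, one can unfold such a data cube into a pixels-by-observations matrix and extract components; here a 3D stack stands in for the 5D cube, and PCA stands in for whichever multivariate method the authors used:

        # Unfold an image stack (scan-angle x ny x nx) and extract components.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_angle, ny, nx = 40, 32, 32
        cube = rng.poisson(5.0, (n_angle, ny, nx)).astype(float)  # synthetic stand-in

        X = cube.reshape(n_angle, ny * nx).T      # rows: pixels, cols: angle channels
        X -= X.mean(axis=0)
        pca = PCA(n_components=4).fit(X)
        score_maps = pca.transform(X).reshape(ny, nx, 4)  # per-pixel component scores
        print(pca.explained_variance_ratio_)
        # Spatially coherent score maps can then be inspected for physical meaning,
        # e.g. lattice tilts appearing as smooth gradients in a component map.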

  14. Evaluating structural connectomics in relation to different Q-space sampling techniques.

    PubMed

    Rodrigues, Paulo; Prats-Galino, Alberto; Gallardo-Pujol, David; Villoslada, Pablo; Falcon, Carles; Prckovska, Vesna

    2013-01-01

    Brain networks are becoming a forefront research topic in neuroscience. Network-based analysis of the functional and structural connectomes can lead to powerful imaging markers for brain diseases. However, the structural connectome can be constructed from different acquisition and reconstruction techniques whose information content and mutual differences have not yet been properly studied in a unified framework. If not properly understood, variations in the structural connectome can lead to misleading conclusions in these types of studies. In this work we present an evaluation of the structural connectome, analysing and comparing graph-based measures on real data acquired by the three most important diffusion-weighted imaging techniques: DTI, HARDI and DSI. We reach several important conclusions, demonstrating that even though the techniques show differences in the anatomy of the reconstructed fibers, the respective connectomes show variations of 20%.
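
    A toy version of the graph-based comparison, with synthetic connectivity matrices standing in for DTI/HARDI/DSI tractography outputs and networkx supplying the measures:

        # Compare standard network measures on two weighted connectivity matrices.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        base = rng.random((30, 30)); base = (base + base.T) / 2; np.fill_diagonal(base, 0)
        variant = base * (1 + 0.2 * rng.normal(size=base.shape))  # a perturbed acquisition
        variant = np.clip((variant + variant.T) / 2, 0, None); np.fill_diagonal(variant, 0)

        for name, mat in [("DTI-like", base), ("HARDI-like", variant)]:
            G = nx.from_numpy_array(mat)
            print(name,
                  "strength=%.2f" % np.mean([d for _, d in G.degree(weight="weight")]),
                  "clustering=%.3f" % nx.average_clustering(G, weight="weight"))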

  15. Strategies for Preserving Owner Privacy in the National Information Management System of the USDA Forest Service's Forest Inventory and Analysis Unit

    Treesearch

    Andrew Lister; Charles Scott; Susan King; Michael Hoppus; Brett Butler; Douglas Griffith

    2005-01-01

    The Food Security Act of 1985 prohibits the disclosure of any information collected by the USDA Forest Service's FIA program that would link individual landowners to inventory plot information. To address this, we developed a technique based on a "swapping" procedure in which plots with similar characteristics are exchanged, and on a ...
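
    A minimal sketch of the swapping idea, with invented attributes and a nearest-neighbour pairing; this illustrates the principle, not the FIA production procedure:

        # Attribute swapping for disclosure limitation: exchange locations between
        # plots whose non-identifying attributes are similar, so aggregates are
        # preserved while owner-plot links are broken.
        import numpy as np

        rng = np.random.default_rng(0)
        attrs = rng.normal(size=(10, 3))          # e.g., basal area, volume, stand age
        coords = rng.uniform(0, 100, (10, 2))     # true plot locations (sensitive)

        swapped = coords.copy()
        unused = set(range(len(attrs)))
        for i in list(unused):
            if i not in unused:
                continue
            others = [j for j in unused if j != i]
            if not others:
                break
            j = min(others, key=lambda j: np.linalg.norm(attrs[i] - attrs[j]))
            swapped[[i, j]] = swapped[[j, i]]     # swap locations of the similar pair
            unused -= {i, j}
        print(np.allclose(sorted(coords[:, 0]), sorted(swapped[:, 0])))  # True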

  16. A technique for displaying flight information in the field of view of binoculars for use by the pilots of radio controlled models

    NASA Technical Reports Server (NTRS)

    Fuller, H. V.

    1974-01-01

    A display system was developed to provide flight information to the ground based pilots of radio controlled models used in flight research programs. The display system utilizes data received by telemetry from the model, and presents the information numerically in the field of view of the binoculars used by the pilots.

  17. DDC 10 Year Requirements and Planning Study. Interagency Survey Report

    DTIC Science & Technology

    1975-12-12

    RDT&E Management Information Services Traditional bibliographic information storage and retrieval techniques are insufficient for satisfaction of...a private source. 3.3 Economics and Marketing ERDA's tradition of absorbing costs for information processing shows no indication of changing...Libraries. Its products, in addition to traditional library services, include three prime data bases: MEDLINE - journal citations and subject

  18. Digital Mapping Techniques '10-Workshop Proceedings, Sacramento, California, May 16-19, 2010

    USGS Publications Warehouse

    Soller, David R.; Soller, David R.

    2012-01-01

    The Digital Mapping Techniques '10 (DMT'10) workshop was attended by 110 technical experts from 40 agencies, universities, and private companies, including representatives from 19 State geological surveys (see Appendix A). This workshop, hosted by the California Geological Survey, May 16-19, 2010, in Sacramento, California, was similar in nature to the previous 13 meetings (see Appendix B). The meeting was coordinated by the U.S. Geological Survey's (USGS) National Geologic Map Database project. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was again successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products ("publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  19. Digital Mapping Techniques '08—Workshop Proceedings, Moscow, Idaho, May 18–21, 2008

    USGS Publications Warehouse

    Soller, David R.

    2009-01-01

    The Digital Mapping Techniques '08 (DMT'08) workshop was attended by more than 100 technical experts from 40 agencies, universities, and private companies, including representatives from 24 State geological surveys. This year's meeting, the twelfth in the annual series, was hosted by the Idaho Geological Survey, from May 18-21, 2008, on the University of Idaho campus in Moscow, Idaho. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  20. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown to be effective in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
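
    The adaptive-size idea can be sketched as "grow the ensemble until validation performance stops improving"; in the toy below, the random selection of training cases, the k-NN base learner, and the stopping epsilon are all illustrative:

        # Adaptive incremental ensemble: add members while validation AUC improves.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.metrics import roc_auc_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=600, random_state=0)
        Xdev, ydev, Xval, yval = X[:400], y[:400], X[400:], y[400:]

        rng = np.random.default_rng(0)
        members, scores, best_auc, eps = [], [], 0.0, 1e-3
        while len(members) < 50:
            idx = rng.choice(len(Xdev), size=100, replace=False)  # random selection
            clf = KNeighborsClassifier().fit(Xdev[idx], ydev[idx])
            scores.append(clf.predict_proba(Xval)[:, 1])
            auc = roc_auc_score(yval, np.mean(scores, axis=0))    # pooled ensemble score
            if auc - best_auc < eps and len(members) >= 5:
                scores.pop()                                      # no gain: stop growing
                break
            members.append(clf)
            best_auc = max(best_auc, auc)
        print(len(members), round(best_auc, 3))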

  1. Verifying the secure setup of UNIX client/servers and detection of network intrusion

    NASA Astrophysics Data System (ADS)

    Feingold, Richard; Bruestle, Harry R.; Bartoletti, Tony; Saroyan, R. A.; Fisher, John M.

    1996-03-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global 'Infosphere' presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, and applications stores, transmits, and processes information. This information is now recognized as an asset to governments, corporations, and individuals alike, and it must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks on their systems and networks.

  2. Simulation and experimental study of 802.11 based networking for vehicular management and safety.

    DOT National Transportation Integrated Search

    2009-03-01

    This work focuses on the use of wireless networking techniques for their potential impact in providing information for traffic management, control and public safety goals. The premise of this work is based on the reasonable expectation that vehic...

  3. Conceptual Design of a Communication-Based Deep Space Navigation Network

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan J.; Chuang, C. H.

    2012-01-01

    As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must develop more advanced navigation sensors and systems that operate independently of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid in spacecraft position fixing. The proposed Network-Based Navigation technique takes advantage of the communication data sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter, augmenting traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture inherent information of the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. The method is heavily dependent on clock accuracy, and initial results show promising performance for a notional system.
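
    To illustrate the underlying idea (not the paper's actual filter), the toy below forms pseudoranges from transmit times and sender positions carried in hypothetical packet headers, assumes synchronized clocks, and solves an iterative least-squares position fix:

        # One-way ranging from navigation data embedded in packet headers.
        import numpy as np

        C = 299_792.458  # speed of light, km/s

        def position_fix(packets, t_rx, guess):
            """Gauss-Newton position fix from (sender_pos, t_tx) headers and t_rx."""
            x = np.array(guess, float)
            for _ in range(10):
                rows, resid = [], []
                for (pos, t_tx), t in zip(packets, t_rx):
                    rho = C * (t - t_tx)              # measured pseudorange
                    vec = x - np.array(pos)
                    r = np.linalg.norm(vec)
                    rows.append(vec / r)              # geometry (unit) row
                    resid.append(rho - r)
                dx, *_ = np.linalg.lstsq(np.array(rows), np.array(resid), rcond=None)
                x += dx
            return x

        senders = [((7000, 0, 0), 0.0), ((0, 7000, 0), 0.0),
                   ((0, 0, 7000), 0.0), ((5000, 5000, 0), 0.0)]
        truth = np.array([1000.0, 2000.0, 1500.0])
        t_rx = [np.linalg.norm(truth - np.array(p)) / C for p, _ in senders]
        print(position_fix(senders, t_rx, (0, 0, 0)))  # approx. (1000, 2000, 1500)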

  4. MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications

    PubMed Central

    Medina, Isabel; Cappiello, Achille; Careri, Maria

    2018-01-01

    Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are showing an increasing interest in the scientific community, especially in food safety, environmental, and forensic investigation areas where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at the early stage through online and real-time analytical methodologies. In this context, several efforts have been made to achieve analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of some instrumental innovations and their applications with particular attention to spray-based MS methods and food analysis issues. The survey will attempt to cover the state of the art from 2012 up to 2017. PMID:29850370

  5. DNA-based cryptographic methods for data hiding in DNA media.

    PubMed

    Marwan, Samiha; Shawish, Ahmed; Nagaty, Khaled

    2016-12-01

    Information security can be achieved using cryptography, steganography or a combination of them, where data is first encrypted using any of the available cryptography techniques and then hidden in a hiding medium. Recently, genomic DNA has been introduced as a hiding medium, known as DNA steganography, due to its notable ability to hide huge data sets with a high level of randomness and hence security. Despite the numerous cryptography techniques, to our knowledge only the Vigenère cipher and the DNA-based Playfair cipher have been combined with DNA steganography, which leaves room for investigating other techniques and developing new improvements. This paper presents a comprehensive analysis of the DNA-based Playfair, Vigenère, RSA and AES ciphers, each combined with a DNA hiding technique. The conducted analysis reports the performance diversity of each combined technique in terms of security, speed and hiding capacity, in addition to both key size and data size. Moreover, this paper proposes a modification of the current combined DNA-based Playfair cipher technique, which makes it not only simple and fast but also provides a significantly higher hiding capacity and security. The conducted extensive experimental studies confirm such outstanding performance in comparison with all the discussed combined techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
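
    The encrypt-then-hide pipeline can be sketched in a few lines; below, a toy XOR keystream stands in for the AES/RSA/Playfair stage, and 2 bits are mapped per nucleotide (a common encoding, assumed here rather than taken from the paper):

        BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

        def xor_cipher(data: bytes, key: bytes) -> bytes:     # placeholder cipher stage
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        def bytes_to_dna(data: bytes) -> str:
            """Hide ciphertext as DNA: 4 nucleotides per byte, 2 bits each."""
            return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

        def dna_to_bytes(dna: str) -> bytes:
            out = bytearray()
            for i in range(0, len(dna), 4):
                b = 0
                for ch in dna[i:i + 4]:
                    b = (b << 2) | BASES.index(ch)
                out.append(b)
            return bytes(out)

        msg, key = b"meet at noon", b"secret"
        dna = bytes_to_dna(xor_cipher(msg, key))
        print(dna)                                            # e.g. 'CAGTCACC...'
        print(xor_cipher(dna_to_bytes(dna), key))             # b'meet at noon'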

  6. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest from the users, so many irrelevant results are returned and the user is burdened with picking useful answers out of them. To tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. Key techniques are also discussed. Compared with previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  7. Accelerometer-Based Method for Extracting Respiratory and Cardiac Gating Information for Dual Gating during Nuclear Medicine Imaging

    PubMed Central

    Pänkäälä, Mikko; Paasio, Ari

    2014-01-01

    Both respiratory and cardiac motions reduce the quality and consistency of medical imaging, specifically in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition based on the respiratory phase and cardiac contractions throughout the medical imaging procedure. Electrocardiography (ECG), 3-axis accelerometer, and respiration belt data were processed and analyzed from ten healthy volunteers. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in a dual gating technique. The SCG provides accelerometer-derived respiratory (ADR) data and accurate information about quiescent phases within the cardiac cycle. Correct information about the status of the ventricles and atria helps create an improved estimate of quiescent phases within a cardiac cycle. The correlation of ADR signals with the reference respiration belt was investigated using Pearson correlation. High linear correlation was observed between the accelerometer-based measurement and the reference measurement methods (ECG and respiration belt). Above all, due to the simplicity of the proposed method, the technique has high potential to be applied in dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563
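
    The validation step is easy to sketch: extract the low-frequency (respiratory) component of an accelerometer axis and correlate it with the belt reference. The synthetic signals and the moving-average smoother below are illustrative:

        # Correlate an accelerometer-derived respiration (ADR) estimate with a belt.
        import numpy as np

        fs = 100                                              # 100 Hz sampling
        t = np.arange(0, 60, 1 / fs)                          # 60 s record
        belt = np.sin(2 * np.pi * 0.25 * t)                   # 15 breaths/min reference
        accel = (0.3 * belt                                   # respiratory component
                 + 0.05 * np.sin(2 * np.pi * 1.2 * t)         # cardiac component
                 + 0.02 * np.random.default_rng(0).normal(size=t.size))

        # Moving-average low-pass (~1 s window) isolates the respiratory component.
        win = fs
        adr = np.convolve(accel, np.ones(win) / win, mode="same")

        r = np.corrcoef(adr, belt)[0, 1]                      # Pearson correlation
        print(f"Pearson r = {r:.3f}")                         # high r supports ADR gating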

  8. Synchrotron based infrared imaging and spectroscopy via focal plane array on live fibroblasts in D2O enriched medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quaroni, Luca; Zlateva, Theodora; Sarafimov, Blagoj

    2014-03-26

    We tested the viability of using synchrotron based infrared imaging to study biochemical processes inside living cells. As a model system, we studied fibroblast cells exposed to a medium highly enriched with D2O. We could show that the experimental technique allows us to reproduce at the cellular level measurements that are normally performed on purified biological molecules. We can obtain information about lipid conformation and distribution, kinetics of hydrogen/deuterium exchange, and the formation of concentration gradients of H and O isotopes in water that are associated with cell metabolism. The implementation of the full field technique in a sequential imaging format gives a description of cellular biochemistry and biophysics that contains both spatial and temporal information.

  9. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  10. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. a.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related data assimilation techniques based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies using additional statistical information in data analysis in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing that suppresses noise, i.e., spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Data assimilation techniques capable of incorporating such additional information are potentially superior to techniques that have no access to it, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
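
    The OI analysis step can be written compactly as x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b); below is a minimal numerical sketch (grid size, covariance model, and observation values are illustrative):

        # Optimal interpolation on a 1D grid: fill data voids between sparse obs.
        import numpy as np

        n = 50                                      # grid points
        xg = np.linspace(0, 1, n)
        B = np.exp(-((xg[:, None] - xg[None, :]) / 0.1) ** 2)  # background covariance
        x_b = np.zeros(n)                           # background (first guess)

        obs_idx = [5, 20, 35]                       # sparse observation locations
        H = np.zeros((3, n)); H[range(3), obs_idx] = 1.0       # observation operator
        R = 0.01 * np.eye(3)                        # observation error covariance
        y = np.array([1.0, -0.5, 0.8])              # observations

        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)           # gain (optimal weights)
        x_a = x_b + K @ (y - H @ x_b)               # analysis: fills voids smoothly
        print(x_a[obs_idx])                         # close to y, damped by the noise term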

  11. Science Teachers' Information Processing Behaviours in Nepal: A Reflective Comparative Study

    ERIC Educational Resources Information Center

    Acharya, Kamal Prasad

    2017-01-01

    This study examines the information processing behaviours of secondary level science teachers. It is based on data collected from 50 secondary level school science teachers working in the Kathmandu Valley. Simple random sampling and the Cognitive Style Inventory were used, respectively, as the technique and tool to…

  12. Evaluating Federal Information Technology Program Success Based on Earned Value Management

    ERIC Educational Resources Information Center

    Moy, Mae N.

    2016-01-01

    Despite the use of earned value management (EVM) techniques to track development progress, federal information technology (IT) software programs continue to fail by not meeting identified business requirements. The purpose of this logistic regression study was to examine, using IT software data from federal agencies from 2011 to 2014, whether a relationship…

  13. 76 FR 27380 - Proposed Information Collection (Report of Medical Examination for Disability Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-11

    ... of automated collection techniques or the use of other forms of information technology. Title: Report... and attendance, and for benefits based on a child's incapacity for self-support. VA uses the data to determine the level of disability. Affected Public: Individuals or households. Estimated Annual Burden: 45...

  14. Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey

    ERIC Educational Resources Information Center

    Khamparia, Aditya; Pandey, Babita

    2017-01-01

    Ontologies are emerging as the best representation techniques for knowledge-based context domains. The continuing need for interoperation, collaboration and effective information retrieval has led to the creation of the semantic web, with the help of tools and reasoners which manage personalized information. The future of the semantic web lies in an ontology…

  15. Significance of perceptually relevant image decolorization for scene classification

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  16. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Ten patients having the Leksell head frame and scanned with a modality specific localizer box were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
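
    The similarity measure driving the intensity-based method is easy to sketch; below, mutual information of a joint intensity histogram is evaluated over horizontal shifts of a synthetic image pair whose intensities are nonlinearly related (all numbers illustrative):

        # Mutual information as an image-registration similarity measure.
        import numpy as np

        def mutual_information(a, b, bins=32):
            pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pab /= pab.sum()
            pa, pb = pab.sum(axis=1), pab.sum(axis=0)
            nz = pab > 0
            return np.sum(pab[nz] * np.log(pab[nz] / np.outer(pa, pb)[nz]))

        rng = np.random.default_rng(0)
        ct = rng.random((64, 64))
        mr = np.exp(-ct) + 0.05 * rng.normal(size=ct.shape)  # nonlinear intensity map

        for shift in range(-2, 3):                           # 1D search for clarity
            mi = mutual_information(ct[:, 2:-2], np.roll(mr, shift, axis=1)[:, 2:-2])
            print(shift, round(mi, 3))                       # MI peaks at zero offset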

  17. Innovative application of virtual display technique in virtual museum

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankang

    2017-09-01

    A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3-dimensional virtual reality, using interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and achieving effective interaction with the offline museum lie in making full use of 3D panorama techniques, virtual reality techniques and augmented reality techniques, and in innovatively taking advantage of dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques to realize the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. Virtual reality is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experiences and convenience to the public, but also help improve the influence and cultural functions of the real museum.

  18. Automation and hypermedia technology applications

    NASA Technical Reports Server (NTRS)

    Jupin, Joseph H.; Ng, Edward W.; James, Mark L.

    1993-01-01

    This paper presents a progress report on HyLite (Hypermedia Library technology): a research and development activity to produce a versatile system as part of NASA's technology thrusts in automation, information sciences, and communications. HyLite can be used as a system or tool to facilitate the creation and maintenance of large distributed electronic libraries. The contents of such a library may be software components, hardware parts or designs, scientific data sets or databases, configuration management information, etc. The proliferation of computer use has made the diversity and quantity of information too large for any single user to sort, process, and utilize effectively. In response to this information deluge, we have created HyLite to enable the user to organize relevant information more efficiently for presentation, retrieval, and readability. To accomplish this end, we have incorporated various AI techniques into the HyLite hypermedia engine, including intelligent searching tools for the libraries, intelligent retrieval, and navigational assistance based on user histories. HyLite itself is based on an earlier project, the Encyclopedia of Software Components (ESC), which used hypermedia to facilitate and encourage software reuse.

  19. Techniques for optimizing human-machine information transfer related to real-time interactive display systems

    NASA Technical Reports Server (NTRS)

    Granaas, Michael M.; Rhea, Donald C.

    1989-01-01

    In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information has expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity with that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.

  20. The design and implementation of web mining in web sites security

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. Firstly, a system for discovering patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided to system administrators so that they can modify their code and enhance Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can discover information that firewalls and intrusion detection systems cannot find. Another is to propose an operation module for the web site to enhance its security. In the cluster server session, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
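
    A sketch of the density-based step, with invented per-session features; DBSCAN's noise label marks the sparse outlying sessions worth inspecting:

        # Flag outlying web sessions with density-based clustering (DBSCAN).
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        normal = np.c_[rng.normal(20, 5, 200), rng.normal(0.02, 0.01, 200)]
        probes = np.c_[rng.normal(300, 30, 5), rng.normal(0.8, 0.05, 5)]
        X = StandardScaler().fit_transform(np.vstack([normal, probes]))
        # columns: requests per session, fraction of requests hitting CGI scripts

        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
        print("flagged sessions:", np.where(labels == -1)[0])  # DBSCAN noise points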

  1. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm, and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  2. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    NASA Technical Reports Server (NTRS)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users under the challenges imposed by space network environments.
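
    A toy of the network-coding half of the idea: peers forward random XOR combinations of source packets, and any receiver holding combinations with full-rank coefficients can decode by Gaussian elimination over GF(2). The packet contents and coding over GF(2) (rather than a larger field) are illustrative simplifications:

        import numpy as np

        def gf2_rank(M):
            """Rank of a 0/1 matrix over GF(2)."""
            M, r = M.copy(), 0
            for col in range(M.shape[1]):
                piv = [i for i in range(r, len(M)) if M[i, col]]
                if piv:
                    M[[r, piv[0]]] = M[[piv[0], r]]
                    for i in range(len(M)):
                        if i != r and M[i, col]:
                            M[i] ^= M[r]
                    r += 1
            return r

        rng = np.random.default_rng(0)
        src = [b"pkt-A...", b"pkt-B...", b"pkt-C..."]
        data = np.array([list(p) for p in src], dtype=np.uint8)  # 3 packets x 8 bytes

        # Receive random XOR combinations until the coefficients span GF(2)^3.
        A = np.zeros((0, 3), dtype=np.uint8)
        P = np.zeros((0, data.shape[1]), dtype=np.uint8)
        while gf2_rank(A) < 3:
            c = rng.integers(0, 2, 3, dtype=np.uint8)            # random coefficients
            if c.any() and gf2_rank(np.vstack([A, c])) > gf2_rank(A):
                A = np.vstack([A, c])
                P = np.vstack([P, np.bitwise_xor.reduce(data[c == 1], axis=0)])

        # Decode: Gauss-Jordan on coefficients, mirrored on payloads in lockstep.
        for col in range(3):
            piv = next(r for r in range(col, 3) if A[r, col])
            A[[col, piv]], P[[col, piv]] = A[[piv, col]].copy(), P[[piv, col]].copy()
            for r in range(3):
                if r != col and A[r, col]:
                    A[r] ^= A[col]; P[r] ^= P[col]
        print([bytes(row) for row in P])  # [b'pkt-A...', b'pkt-B...', b'pkt-C...']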

  3. Methods for understanding microbial community structures and functions in microbial fuel cells: a review.

    PubMed

    Zhi, Wei; Ge, Zheng; He, Zhen; Zhang, Husen

    2014-11-01

    Microbial fuel cells (MFCs) employ microorganisms to recover electric energy from organic matter. However, fundamental knowledge of electrochemically active bacteria is still required to maximize MFC power output for practical applications. This review presents microbiological and electrochemical techniques to help researchers choose appropriate methods for MFC studies. Pre-genomic and genomic techniques such as 16S rRNA based phylogeny and metagenomics have provided important information on the structure and genetic potential of electrode-colonizing microbial communities. Post-genomic techniques such as metatranscriptomics allow functional characterizations of electrode biofilm communities by quantifying gene expression levels. Isotope-assisted phylogenetic analysis can further link taxonomic information to microbial metabolisms. A combination of electrochemical, phylogenetic, metagenomic, and post-metagenomic techniques offers opportunities for a better understanding of the extracellular electron transfer process, which in turn can lead to process optimization for power output. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Medical Products Research

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Ventrex Laboratories, Inc. develops, manufactures and markets a line of medical diagnostic assays based on biochemical techniques, in particular immunochemical techniques. Their products are sold worldwide to hospitals and medical laboratories for use in testing blood samples and other biological fluids. Analysis of a patient's body fluids, compared with normal values, aids a physician in confirming or otherwise diagnosing a suspected disease condition. NERAC's rapid information retrieval has provided Ventrex with invaluable up-to-date information and has permitted large scale savings. NERAC's service was particularly important in the development of a new product in the company's Ventre/Sep line, which is used in radioimmunoassays.

  5. 'Televaluation' of clinical information systems: an integrative approach to assessing Web-based systems.

    PubMed

    Kushniruk, A W; Patel, C; Patel, V L; Cimino, J J

    2001-04-01

    The World Wide Web provides an unprecedented opportunity for widespread access to health-care applications by both patients and providers. The development of new methods for assessing the effectiveness and usability of these systems is becoming a critical issue. This paper describes the distance evaluation (i.e. 'televaluation') of emerging Web-based information technologies. In health informatics evaluation, there is a need for application of new ideas and methods from the fields of cognitive science and usability engineering. A framework is presented for conducting evaluations of health-care information technologies that integrates a number of methods, ranging from deployment of on-line questionnaires (and Web-based forms) to remote video-based usability testing of user interactions with clinical information systems. Examples illustrating application of these techniques are presented for the assessment of a patient clinical information system (PatCIS), as well as an evaluation of use of Web-based clinical guidelines. Issues in designing, prototyping and iteratively refining evaluation components are discussed, along with description of a 'virtual' usability laboratory.

  6. Road landslide information management and forecasting system base on GIS.

    PubMed

    Wang, Wei Dong; Du, Xiang Gang; Xie, Cui Ming

    2009-09-01

    Given the characteristics of road geological hazards and their supervision, it is very important to develop a Road Landslide Information Management and Forecasting System based on a Geographic Information System (GIS). The paper presents the system objectives, functions, component modules and key techniques in the system development procedure. The system, based on the spatial and attribute information of road geological hazards, was developed and applied in Guizhou, a province of China where there are numerous and typical landslides. Using the system, highway managers can visually query all road landslide information based on the regional road network or on the monitoring network of an individual landslide. Furthermore, the system, integrating mathematical prediction models with GIS's strengths in spatial analysis, can assess and predict landslide development according to field monitoring data. Thus, it can efficiently assist road construction and management units in making decisions to control landslides and reduce human vulnerability.

  7. Evaluation of Keyphrase Extraction Algorithm and Tiling Process for a Document/Resource Recommender within E-Learning Environments

    ERIC Educational Resources Information Center

    Mangina, Eleni; Kilbride, John

    2008-01-01

    The research presented in this paper is an examination of the applicability of IUI techniques in an online e-learning environment. In particular we make use of user modeling techniques, information retrieval and extraction mechanisms and collaborative filtering methods. The domains of e-learning, web-based training and instruction and intelligent…

  8. Molecular Analysis of Date Palm Genetic Diversity Using Random Amplified Polymorphic DNA (RAPD) and Inter-Simple Sequence Repeats (ISSRs).

    PubMed

    El Sharabasy, Sherif F; Soliman, Khaled A

    2017-01-01

    The date palm is an ancient domesticated plant with great diversity that has been cultivated in the Middle East and North Africa for at least 5000 years. Date palm cultivars are classified, based on fruit moisture content, as dry, semidry, and soft dates. A number of biochemical and molecular techniques are available for characterizing date palm variation. This chapter focuses on the DNA-based markers random amplified polymorphic DNA (RAPD) and inter-simple sequence repeats (ISSR), in addition to biochemical markers based on isozyme analysis. These techniques, coupled with appropriate statistical tools, have proved useful for determining phylogenetic relationships among date palm cultivars and provide information resources for date palm gene banks.
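
    Downstream of either marker system, gels are scored as band-presence matrices that feed simple similarity statistics; a toy Jaccard computation (the band data are invented) of the kind that typically precedes UPGMA clustering:

        # Pairwise Jaccard similarity from a 0/1 band-presence matrix.
        import numpy as np

        cultivars = ["dry-1", "semidry-1", "soft-1"]
        bands = np.array([[1, 1, 0, 1, 0, 1],    # band presence per primer locus
                          [1, 0, 0, 1, 1, 1],
                          [0, 1, 1, 0, 1, 0]])

        def jaccard(a, b):
            both = np.sum((a == 1) & (b == 1))       # shared bands
            either = np.sum((a == 1) | (b == 1))     # bands in either cultivar
            return both / either

        for i in range(len(cultivars)):
            for j in range(i + 1, len(cultivars)):
                print(cultivars[i], cultivars[j],
                      round(jaccard(bands[i], bands[j]), 2))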

  9. A Survey of Colormaps in Visualization

    PubMed Central

    Zhou, Liang; Hansen, Charles D.

    2016-01-01

    Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks. PMID:26513793

  10. Extraction of convective cloud parameters from Doppler Weather Radar MAX(Z) product using Image Processing Technique

    NASA Astrophysics Data System (ADS)

    Arunachalam, M. S.; Puli, Anil; Anuradha, B.

    2016-07-01

    In the present work, continuous extraction of convective cloud optical information and reflectivity (MAX(Z) in dBZ), using an online retrieval technique for time series data production from the Doppler Weather Radar (DWR) located at the Indian Meteorological Department, Chennai, has been developed in MATLAB. Reflectivity measurements for different locations within the DWR range (a circular area of 250 km radius) can be retrieved using this technique. It gives both the time series reflectivity of a point location and Range-Time-Intensity (RTI) maps of reflectivity for the corresponding location. The Graphical User Interface (GUI) developed for cloud reflectivity is user friendly; it also provides convective cloud optical information such as cloud base height (CBH), cloud top height (CTH) and cloud optical depth (COD). This technique is also applicable to retrieving other DWR products such as Plan Position Indicator (Z, in dBZ), Plan Position Indicator (Z, in dBZ)-Close Range, Volume Velocity Processing (V, in knots), Plan Position Indicator (V, in m/s), Surface Rainfall Intensity (SRI, mm/hr), and Precipitation Accumulation (PAC) 24 hrs at 0300 UTC. Keywords: Reflectivity, cloud top height, cloud base, cloud optical depth

  11. Following the Social Media: Aspect Evolution of Online Discussion

    NASA Astrophysics Data System (ADS)

    Tang, Xuning; Yang, Christopher C.

    Due to the advance of Internet and Web 2.0 technologies, it is easy to extract thousands of threads about a topic of interest from an online forum, but it is nontrivial to capture the blueprint of the different aspects (i.e., subtopics or facets) associated with the topic. To better understand and analyze a forum discussion on a given topic, it is important to uncover the evolution relationships (temporal dependencies) between different topic aspects, i.e., how the discussion topic is evolving. Traditional Topic Detection and Tracking (TDT) techniques usually organize topics as a flat structure, which does not present the evolution relationships between topic aspects. In addition, the short and sparse nature of forum messages makes it difficult for content-based TDT techniques to identify evolution relationships well. The contributions of this paper are twofold. We formally define a topic aspect evolution graph modeling framework, and we propose to utilize social network information, content similarity and temporal proximity to model evolution relationships between topic aspects. The experimental results showed that, by incorporating social network information, our technique significantly outperformed a content-based technique in the task of extracting evolution relationships between topic aspects.
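
    A sketch of such a combined evolution score between two aspects; the three similarity terms match the ingredients named above, while the weights and decay constant are illustrative:

        # Combined evolution score: content + social overlap + temporal proximity.
        import math

        def cosine(tf_a, tf_b):
            common = set(tf_a) & set(tf_b)
            num = sum(tf_a[t] * tf_b[t] for t in common)
            den = (math.sqrt(sum(v * v for v in tf_a.values()))
                   * math.sqrt(sum(v * v for v in tf_b.values())))
            return num / den if den else 0.0

        def evolution_score(a, b, alpha=0.4, beta=0.4, gamma=0.2, tau=7.0):
            content = cosine(a["terms"], b["terms"])
            social = len(a["users"] & b["users"]) / len(a["users"] | b["users"])
            temporal = math.exp(-abs(b["day"] - a["day"]) / tau)  # closer in time, higher
            return alpha * content + beta * social + gamma * temporal

        a = {"terms": {"outbreak": 3, "school": 2}, "users": {"u1", "u2", "u3"}, "day": 0}
        b = {"terms": {"outbreak": 2, "vaccine": 4}, "users": {"u2", "u3"}, "day": 3}
        print(round(evolution_score(a, b), 3))  # draw edge a -> b if above a threshold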

  12. A ROle-Oriented Filtering (ROOF) approach for collaborative recommendation

    NASA Astrophysics Data System (ADS)

    Ghani, Imran; Jeong, Seung Ryul

    2016-09-01

    In collaborative filtering (CF) recommender systems, existing techniques frequently focus on determining similarities among users' historical interests. This generally refers to situations in which each user normally plays a single role and his/her taste remains consistent over the long term. However, we note that existing techniques have not been significantly employed in a role-oriented context. This is especially so in situations where users may change their roles over time or play multiple roles simultaneously, while still expecting to access relevant information resources accordingly. Such systems include enterprise architecture management systems, e-commerce sites or journal management systems. In scenarios involving existing techniques, each user needs to build up very different profiles (preferences and interests) based on multiple roles which change over time. Should this not occur to a satisfactory degree, their previous information will either be lost or not utilised at all. To limit the occurrence of such issues, we propose a ROle-Oriented Filtering (ROOF) approach focusing on the manner in which multiple user profiles are obtained and maintained over time. We conducted a number of experiments using an enterprise architecture management scenario. In so doing, we observed that the ROOF approach performs better in comparison with other existing collaborative filtering-based techniques.

  13. Nondestructive evaluation technique guide

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1973-01-01

    A total of 70 individual nondestructive evaluation (NDE) techniques are described. Information is presented that permits ease of comparison of the merits and limitations of each technique with respect to various NDE problems. An NDE technique classification system is presented. It is based on the system that was adopted by the National Materials Advisory Board (NMAB). The classification system presented follows the NMAB system closely with the exception of additional categories that have been added to cover more advanced techniques presently in use. The rationale of the technique is explained. The format provides for a concise description of each technique, the physical principles involved, objectives of interrogation, example applications, limitations of each technique, a schematic illustration, and key reference material. Cross-index tabulations are also provided so that particular NDE problems can be referred to appropriate techniques.

  14. Physically-enhanced data visualisation: towards real time solution of Partial Differential Equations in 3D domains

    NASA Astrophysics Data System (ADS)

    Zlotnik, Sergio

    2017-04-01

    The information provided by visualisation environments can be greatly enriched if the data shown are combined with relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. Finite Elements, Finite Volumes, Finite Differences, etc.). All these traditional techniques, however, require very large computational resources (particularly in 3D), making them useless in a real-time visualization environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in about a thousandth of a second, making the solver ideal for embedding into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
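
    The offline/online split can be illustrated with a classic proper orthogonal decomposition (POD) reduced-basis sketch; the 1D Poisson problem, snapshot count, and basis size below are illustrative stand-ins for the actual method:

        # Offline: expensive snapshot solves + POD basis. Online: tiny reduced solve.
        import numpy as np

        n = 200                                          # fine grid (offline cost lives here)
        h = 1.0 / (n + 1)
        main = 2.0 * np.ones(n); off = -1.0 * np.ones(n - 1)
        A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
        x = np.linspace(h, 1 - h, n)

        def source(mu):                                  # parameterized forcing
            return np.exp(-((x - mu) / 0.05) ** 2)

        # --- Offline: snapshots over the parameter range, POD basis via SVD.
        snapshots = np.column_stack([np.linalg.solve(A, source(mu))
                                     for mu in np.linspace(0.2, 0.8, 25)])
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        V = U[:, :6]                                     # keep 6 POD modes
        Ar = V.T @ A @ V                                 # 6x6 reduced operator

        # --- Online: any new parameter costs only a 6x6 solve.
        mu_new = 0.43
        u_r = V @ np.linalg.solve(Ar, V.T @ source(mu_new))
        u_full = np.linalg.solve(A, source(mu_new))
        print("relative error:", np.linalg.norm(u_r - u_full) / np.linalg.norm(u_full))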

  15. Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques

    NASA Astrophysics Data System (ADS)

    Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.

    2016-12-01

    Estimations of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and crop yield forecasting. Growth stage has been traditionally estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop and hybrid dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time series NDVI provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing `greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine both meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
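
    A minimal sketch of the slope-threshold idea on a smoothed NDVI time series, using synthetic data and invented thresholds (real thresholds are crop-specific and are not given in the abstract):

        import numpy as np

        # Synthetic weekly NDVI for one season: green-up, plateau, senescence.
        t = np.arange(0, 52)
        ndvi = 0.2 + 0.6 * np.exp(-((t - 26) / 8.0) ** 2)
        ndvi += np.random.default_rng(1).normal(0, 0.02, t.size)   # cloud/sensor noise

        # Smooth with a simple moving average, then look at the slope.
        kernel = np.ones(5) / 5.0
        smooth = np.convolve(ndvi, kernel, mode="same")
        slope = np.gradient(smooth)
        slope[:2] = 0.0    # ignore edge artefacts of the convolution
        slope[-2:] = 0.0

        GREENUP_SLOPE = 0.015      # hypothetical thresholds, not from the study
        SENESCENCE_SLOPE = -0.015
        greenup = int(np.argmax(slope > GREENUP_SLOPE))
        senescence = int(np.argmax(slope < SENESCENCE_SLOPE))
        print(f"estimated green-up week: {greenup}, senescence onset week: {senescence}")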

  16. Querying and Ranking XML Documents.

    ERIC Educational Resources Information Center

    Schlieder, Torsten; Meuss, Holger

    2002-01-01

    Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…

  17. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
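
    As a toy illustration of the iterative, model-based family discussed here (not any presenter's algorithm), the following sketch runs a Landweber iteration, i.e. gradient descent on the data-fit term of a linear forward model standing in for the CT projector:

        import numpy as np

        # Toy linear forward model y = A x + noise (A stands in for the projector).
        rng = np.random.default_rng(0)
        A = rng.normal(size=(120, 64))           # 120 measurements, 64-pixel "image"
        x_true = np.zeros(64); x_true[20:30] = 1.0
        y = A @ x_true + rng.normal(0, 0.05, 120)

        # Landweber iteration: gradient descent on ||Ax - y||^2, the simplest
        # instance of the model-based iterative reconstruction family.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # small enough to converge
        x = np.zeros(64)
        for _ in range(500):
            x -= step * (A.T @ (A @ x - y))      # gradient of the data-fit term

        print("reconstruction error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))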

  18. On the Applications of IBA Techniques to Biological Samples Analysis: PIXE and RBS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falcon-Gonzalez, J. M.; Bernal-Alvarado, J.; Sosa, M.

    2008-08-11

    The analytical techniques based on ion beams or IBA techniques give quantitative information on elemental concentration in samples of a wide variety of nature. In this work, we focus on the PIXE technique, analyzing thick target biological specimens (TTPIXE), using 3 MeV protons produced by an electrostatic accelerator. A nuclear microprobe was used performing PIXE and RBS simultaneously, in order to solve the uncertainties produced in the absolute PIXE quantifying. The advantages of using both techniques and a nuclear microprobe are discussed. Quantitative results are shown to illustrate the multielemental resolution of the PIXE technique; for this, a blood standard was used.

  19. A review and content analysis of engagement, functionality, aesthetics, information quality, and change techniques in the most popular commercial apps for weight management.

    PubMed

    Bardus, Marco; van Beurden, Samantha B; Smith, Jane R; Abraham, Charles

    2016-03-10

    There are thousands of apps promoting dietary improvement, increased physical activity (PA) and weight management. Despite a growing number of reviews in this area, popular apps have not been comprehensively analysed in terms of features related to engagement, functionality, aesthetics, information quality, and content, including the types of change techniques employed. The databases containing information about all Health and Fitness apps on Google Play and iTunes (7,954 and 25,491 apps, respectively) were downloaded in April 2015. Database filters were applied to select the most popular apps available in both stores. Two researchers screened the descriptions, selecting only weight management apps. Features, app quality and content were independently assessed using the Mobile App Rating Scale (MARS) and previously-defined categories of techniques relevant to behaviour change. Inter-coder reliabilities were calculated, and correlations between features explored. Of the 23 popular apps included in the review, 16 were free (70%), and 15 (65%) addressed weight control, diet and PA combined; 19 (83%) allowed behavioural tracking. On 5-point MARS scales, apps were of average quality (Md = 3.2, IQR = 1.4); "functionality" (Md = 4.0, IQR = 1.1) was the highest and "information quality" (Md = 2.0, IQR = 1.1) was the lowest domain. On average, 10 techniques were identified per app (range: 1-17), and of the 34 categories applied, goal setting and self-monitoring techniques were most frequently identified. App quality was positively correlated with the number of techniques included (rho = .58, p < .01) and the number of "technical" features (rho = .48, p < .05), which was also associated with the number of techniques included (rho = .61, p < .01). Apps that provided tracking used significantly more techniques than those that did not. Apps with automated tracking scored significantly higher in engagement, aesthetics, and overall MARS scores. Those that used change techniques previously associated with effectiveness (i.e., goal setting, self-monitoring and feedback) also had better "information quality". The popular apps assessed are of overall moderate quality and include behavioural tracking features and a range of change techniques associated with behaviour change. These apps may influence behaviour, although more attention to information quality and evidence-based content is warranted to improve their quality.

  20. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
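
    A minimal sketch of a genetic algorithm for economic dispatch with elitism and a penalty factor on the demand-balance constraint; the cost coefficients, unit limits and GA settings below are invented, and the paper's refinements (mutation prediction, interval approximation) are omitted:

        import numpy as np

        rng = np.random.default_rng(0)
        # Quadratic fuel-cost curves c(p) = a*p^2 + b*p for three units (made-up).
        a = np.array([0.008, 0.012, 0.010]); b = np.array([7.0, 6.5, 7.2])
        P_MIN, P_MAX, DEMAND = 50.0, 300.0, 550.0
        PENALTY = 100.0                      # penalty factor on demand mismatch

        def fitness(pop):
            cost = (a * pop**2 + b * pop).sum(axis=1)
            return cost + PENALTY * np.abs(pop.sum(axis=1) - DEMAND)

        pop = rng.uniform(P_MIN, P_MAX, size=(60, 3))
        for _ in range(200):
            f = fitness(pop)
            elite = pop[np.argsort(f)[:10]]                   # elitism
            parents = elite[rng.integers(0, 10, size=(60, 2))]
            alpha = rng.random((60, 1))
            pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
            pop += rng.normal(0, 2.0, pop.shape)              # mutation
            pop = np.clip(pop, P_MIN, P_MAX)

        best = pop[np.argmin(fitness(pop))]
        print("dispatch:", best.round(1), "total:", best.sum().round(1))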

  1. Statistical techniques for the characterization of partially observed epidemics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Ray, Jaideep; Crary, David

    Statistical techniques appear promising for constructing an automated detect-and-characterize capability for epidemics: working off biosurveillance data, they provide information on the particular ongoing outbreak. Potential uses include crisis management, planning and resource allocation; the parameter estimation capability is ideal for providing the input parameters into an agent-based model (index cases, time of infection, infection rate). Non-communicable diseases are easier to characterize than communicable ones: a small anthrax outbreak can be characterized well with 7-10 days of post-detection data, whereas plague takes longer; large attacks are very easy to characterize.

  2. Infrared thermal imaging of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Watt, David; Mchugh, John

    1990-01-01

    A technique for analyzing infrared atmospheric images to obtain cross-wind measurement is presented. The technique is based on Taylor's frozen turbulence hypothesis and uses cross-correlation of successive images to obtain a measure of the cross-wind velocity in a localized focal region. The technique is appealing because it can possibly be combined with other IR forward look capabilities and may provide information about turbulence intensity. The current research effort, its theoretical basis, and its applicability to windshear detection are described.
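
    A sketch of the underlying cross-correlation step on synthetic frames, assuming Taylor's hypothesis so that the second frame is a shifted copy of the first; the peak of the FFT-based cross-correlation recovers the pixel displacement, which divided by the inter-frame time would give the cross-wind estimate:

        import numpy as np

        rng = np.random.default_rng(0)
        frame1 = rng.normal(size=(64, 64))            # stand-in for an IR image patch
        shift = (3, 5)                                # true displacement (rows, cols)
        frame2 = np.roll(frame1, shift, axis=(0, 1))  # "frozen" turbulence advected

        # Cross-correlate via FFT; the correlation peak sits at the displacement.
        corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        est = tuple((p + 32) % 64 - 32 for p in peak)  # unwrap the circular shift
        print("estimated displacement:", est)           # -> (3, 5)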

  3. Characterizing a New Surface-Based Shortwave Cloud Retrieval Technique, Based on Transmitted Radiance for Soil and Vegetated Surface Types

    NASA Technical Reports Server (NTRS)

    Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava

    2013-01-01

    This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r_e), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r_e less than approximately 20 μm; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r_e less than 10 μm, with maximum sensitivity obtained for an overhead sun.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified whereas for two earthquakes, no pattern is evident.

  5. A Holistic Approach to Networked Information Systems Design and Analysis

    DTIC Science & Technology

    2016-04-15

    attain quite substantial savings. 11. Optimal algorithms for energy harvesting in wireless networks. We use a Markov-decision-process (MDP) based... approach to obtain optimal policies for transmissions. The key advantage of our approach is that it holistically considers information and energy in a... Coding technique to minimize delays and the number of transmissions in Wireless Systems. As we approach an era of ubiquitous computing with information

  6. A New Femtosecond Laser-Based Three-Dimensional Tomography Technique

    NASA Astrophysics Data System (ADS)

    Echlin, McLean P.

    2011-12-01

    Tomographic imaging has dramatically changed science, most notably in the fields of medicine and biology, by producing 3D views of structures which are too complex to understand in any other way. Current tomographic techniques require extensive time both for post-processing and data collection. Femtosecond laser based tomographic techniques have been developed in both standard atmosphere (femtosecond laser-based serial sectioning technique - FSLSS) and in vacuum (Tri-Beam System) for the fast collection (10^5 μm³/s) of mm³-sized 3D datasets. Both techniques use femtosecond laser pulses to selectively remove layer-by-layer areas of material with low collateral damage and a negligible heat affected zone. To the author's knowledge, femtosecond lasers had never before been used for serial sectioning, and these techniques were entirely and uniquely developed by the author and his collaborators at the University of Michigan and the University of California Santa Barbara. The FSLSS was applied to measure the 3D distribution of TiN particles in a 4330 steel. Single pulse ablation morphologies and rates were measured and collected from the literature. Simultaneous two-phase ablation of TiN and the steel matrix was shown to occur at fluences of 0.9-2 J/cm². Laser scanning protocols were developed minimizing surface roughness to 0.1-0.4 μm for laser-based sectioning. The FSLSS technique was used to section and 3D reconstruct titanium nitride (TiN) containing 4330 steel. Statistical analysis of 3D TiN particle sizes, distribution parameters, and particle density were measured. A methodology was developed to use the 3D datasets to produce statistical volume elements (SVEs) for toughness modeling. Six FSLSS TiN datasets were sub-sampled into 48 SVEs for statistical analysis and toughness modeling using the Rice-Tracey and Garrison-Moody models. A two-parameter Weibull analysis was performed and variability in the toughness data agreed well with Ruggieri et al. bulk toughness measurements. The Tri-Beam system combines the benefits of laser based material removal (speed, low damage, automation) with detectors that collect chemical, structural, and topological information. Multi-modal sectioning information was collected after many laser scanning passes, demonstrating the capability of the Tri-Beam system.

  7. A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.

    PubMed

    Yu, Hongyang; Khan, Faisal; Veitch, Brian

    2017-09-01

    Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault tree and event tree (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry. © 2017 Society for Risk Analysis.
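
    The abstract does not specify the model; as a generic illustration of two-level hierarchical Bayesian inference with source-to-source variability, the sketch below runs an exact Gibbs sampler for Poisson event counts from several sources whose rates share a learned Gamma prior (all numbers invented):

        import numpy as np

        rng = np.random.default_rng(0)
        counts = np.array([3, 7, 2, 11, 5])   # event counts from 5 data sources
        S = counts.size
        A = 2.0                                # shape of the Gamma prior on each rate
        A0, B0 = 1.0, 1.0                      # hyperprior on the shared rate b

        # Model: counts[s] ~ Poisson(lam[s]), lam[s] ~ Gamma(A, rate=b),
        # b ~ Gamma(A0, rate=B0). Both full conditionals are conjugate.
        lam, b, draws = np.ones(S), 1.0, []
        for it in range(5000):
            lam = rng.gamma(A + counts, 1.0 / (b + 1.0))       # lam | b, data
            b = rng.gamma(A0 + S * A, 1.0 / (B0 + lam.sum()))  # b | lam
            if it >= 1000:                                     # discard burn-in
                draws.append(lam.copy())

        post = np.mean(draws, axis=0)
        print("posterior mean rates:", post.round(2))  # shrunk toward a common level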

  8. Geometric Data Perturbation-Based Personal Health Record Transactions in Cloud Computing

    PubMed Central

    Balasubramaniam, S.; Kavitha, V.

    2015-01-01

    Cloud computing is a new delivery model for information technology services and it typically involves the provision of dynamically scalable and often virtualized resources over the Internet. However, cloud computing raises concerns on how cloud service providers, user organizations, and governments should handle such information and interactions. Personal health records represent an emerging patient-centric model for health information exchange, and they are outsourced for storage by third parties, such as cloud providers. With these records, it is necessary for each patient to encrypt their own personal health data before uploading them to cloud servers. Current techniques for encryption primarily rely on conventional cryptographic approaches. However, key management issues remain largely unsolved with these cryptographic-based encryption techniques. We propose that personal health record transactions be managed using geometric data perturbation in cloud computing. In our proposed scheme, the personal health record database is perturbed using geometric data perturbation and outsourced to the Amazon EC2 cloud. PMID:25767826
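
    A minimal sketch of geometric data perturbation, assuming it combines a random rotation, a translation and additive noise (rotation and translation preserve inter-record distances, which is what keeps distance-based mining usable on the masked data); the records and parameters are toy values, not the scheme's actual settings:

        import numpy as np

        def geometric_perturbation(X, noise_scale=0.05, seed=0):
            """Perturb records X (rows) by a random rotation, translation and noise."""
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
            Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
            t = rng.normal(size=d)
            return X @ Q.T + t + rng.normal(0, noise_scale, X.shape)

        records = np.array([[120.0, 80.0], [140.0, 95.0], [110.0, 70.0]])  # toy rows
        masked = geometric_perturbation(records)
        d_orig = np.linalg.norm(records[0] - records[1])
        d_mask = np.linalg.norm(masked[0] - masked[1])
        print(f"distance before {d_orig:.2f} vs after {d_mask:.2f}")  # nearly equal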

  9. Geometric data perturbation-based personal health record transactions in cloud computing.

    PubMed

    Balasubramaniam, S; Kavitha, V

    2015-01-01

    Cloud computing is a new delivery model for information technology services and it typically involves the provision of dynamically scalable and often virtualized resources over the Internet. However, cloud computing raises concerns on how cloud service providers, user organizations, and governments should handle such information and interactions. Personal health records represent an emerging patient-centric model for health information exchange, and they are outsourced for storage by third parties, such as cloud providers. With these records, it is necessary for each patient to encrypt their own personal health data before uploading them to cloud servers. Current techniques for encryption primarily rely on conventional cryptographic approaches. However, key management issues remain largely unsolved with these cryptographic-based encryption techniques. We propose that personal health record transactions be managed using geometric data perturbation in cloud computing. In our proposed scheme, the personal health record database is perturbed using geometric data perturbation and outsourced to the Amazon EC2 cloud.

  10. Cross-Linking/Mass Spectrometry for Studying Protein Structures and Protein-Protein Interactions: Where Are We Now and Where Should We Go from Here?

    PubMed

    Sinz, Andrea

    2018-05-28

    Structural mass spectrometry (MS) is gaining increasing importance for deriving valuable three-dimensional structural information on proteins and protein complexes, and it complements existing techniques, such as NMR spectroscopy and X-ray crystallography. Structural MS unites different MS-based techniques, such as hydrogen/deuterium exchange, native MS, ion-mobility MS, protein footprinting, and chemical cross-linking/MS, and it allows fundamental questions in structural biology to be addressed. In this Minireview, I will focus on the cross-linking/MS strategy. This method not only delivers tertiary structural information on proteins, but is also increasingly being used to decipher protein interaction networks, both in vitro and in vivo. Cross-linking/MS is currently one of the most promising MS-based approaches to derive structural information on very large and transient protein assemblies and intrinsically disordered proteins. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A new chaotic communication scheme based on adaptive synchronization.

    PubMed

    Xiang-Jun, Wu

    2006-12-01

    A new chaotic communication scheme using an adaptive synchronization technique of two unified chaotic systems is proposed. Different from existing secure communication methods, the transmitted signal is modulated into the parameter of chaotic systems. The adaptive synchronization technique is used to synchronize two identical chaotic systems embedded in the transmitter and the receiver. It is assumed that the parameter of the receiver system is unknown. Based on the Lyapunov stability theory, an adaptive control law is derived to make the states of two identical unified chaotic systems with unknown system parameters asymptotically synchronized; thus the parameter of the receiver system is identified. Then the recovery of the original information signal in the receiver is successfully achieved on the basis of the estimated parameter. It is noticed that the time required for recovering the information signal and the accuracy of the recovered signal depend very sensitively on the frequency of the information signal. Numerical results have verified the effectiveness of the proposed scheme.
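
    A sketch of the idea on the classic Lorenz system rather than the paper's unified chaotic system: the parameter rho carries the "information", the receiver observer is driven by the transmitted signals x and y, and a Lyapunov-derived adaptive law recovers rho. Gains and step sizes are invented:

        import numpy as np

        # Transmitter: Lorenz system with the message modulated into parameter rho.
        SIGMA, BETA, GAMMA, DT = 10.0, 8.0 / 3.0, 8.0, 1e-3
        rho_true = 28.0                     # "information" hidden in the parameter

        x, y, z = 1.0, 1.0, 1.0             # transmitter state
        yh, zh, rho_hat = 0.0, 0.0, 20.0    # receiver estimates

        for step in range(200_000):
            # Transmitter dynamics (x and y are broadcast on the channel).
            dx = SIGMA * (y - x)
            dy = x * (rho_true - z) - y
            dz = x * y - BETA * z
            # Receiver observer driven by the received signals x, y.
            e_y = y - yh
            dyh = rho_hat * x - yh - x * zh
            dzh = x * yh - BETA * zh
            drho = GAMMA * x * e_y          # adaptive law (Lyapunov-derived)
            x += DT * dx; y += DT * dy; z += DT * dz
            yh += DT * dyh; zh += DT * dzh; rho_hat += DT * drho

        print(f"recovered parameter: {rho_hat:.3f} (true {rho_true})")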

  12. Automatic indexing of compound words based on mutual information for Korean text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan Koo Kim; Yoo Kun Cho

    In this paper, we present an automatic indexing technique for compound words suited to an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present the decomposition rules applicable to consecutive nouns to extract the full content of a text. Finally, we propose mutual information, a measure based on the information-theoretic notion, to estimate the usefulness of a term by calculating the degree of word association of compound words. By applying this method, our system has raised the precision rate for compound words from 72% to 87%.
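
    A toy illustration of scoring adjacent noun pairs by (pointwise) mutual information, with English placeholder tokens standing in for Korean:

        import math
        from collections import Counter

        # Toy corpus of tokenised noun sequences.
        docs = [["information", "retrieval", "system"],
                ["information", "retrieval", "model"],
                ["retrieval", "system"],
                ["information", "theory"]]

        unigrams, bigrams, n = Counter(), Counter(), 0
        for doc in docs:
            unigrams.update(doc)
            bigrams.update(zip(doc, doc[1:]))
            n += len(doc)

        def mutual_information(w1, w2):
            """Pointwise MI of an adjacent noun pair; high values suggest the
            pair forms a genuine compound worth indexing as a single term."""
            p12 = bigrams[(w1, w2)] / n
            p1, p2 = unigrams[w1] / n, unigrams[w2] / n
            return math.log2(p12 / (p1 * p2)) if p12 > 0 else float("-inf")

        print(round(mutual_information("information", "retrieval"), 2))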

  13. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.

  14. SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.; Bohn, S.; Crow, V.

    The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.

  15. Evidence-Informed, Individual Treatment of a Child with Sexual Behavior Problems: A Case Study.

    PubMed

    Allen, Brian; Berliner, Lucy

    2015-11-01

    Children with sexual behavior problems pose a significant challenge for community-based mental health clinicians. Very few clinical trials are available to guide intervention and those interventions that are available are based in a group format. The current case study demonstrates the application of evidence-informed treatment techniques during the individual treatment of a 10-year-old boy displaying interpersonal sexual behavior problems. Specifically, the clinician adapts and implements a group-based model developed and tested by Bonner et al. (1999) for use with an individual child and his caregivers. Key points of the case study are discussed within the context of implementing evidence-informed treatments for children with sexual behavior problems.

  16. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    NASA Astrophysics Data System (ADS)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network certified by USARMY as the Dragon Pulse Information Management System. It provides a network-available modeling environment for modeling models, where models are configured using domain-relevant semantics, use network-available systems, sensors, databases and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and subject matter input. Three recent ARMY use cases are discussed: (a) ISR SoS; (b) modeling and simulation behavior validation; (c) a networked digital library with behaviors.

  17. Information management system study results. Volume 1: IMS study results

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The information management system (IMS) special emphasis task was performed as an adjunct to the modular space station study, with the objective of providing extended depth of analysis and design in selected key areas of the information management system. Specific objectives included: (1) in-depth studies of IMS requirements and design approaches; (2) design and fabricate breadboard hardware for demonstration and verification of design concepts; (3) provide a technological base to identify potential design problems and influence long range planning; (4) develop hardware and techniques to permit long duration, low cost, manned space operations; (5) support SR&T areas where techniques or equipment are considered inadequate; and (6) permit an overall understanding of the IMS as an integrated component of the space station.

  18. 77 FR 36983 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ..., electronic, mechanical, or other technological collection techniques and other forms of information... and base ranch property and the need for additional grazing to round out year long ranching operations...

  19. Groundwater assessment in Salboni Block, West Bengal (India) using remote sensing, geographical information system and multi-criteria decision analysis techniques

    NASA Astrophysics Data System (ADS)

    Jha, Madan K.; Chowdary, V. M.; Chowdhury, Alivia

    2010-11-01

    An approach is presented for the evaluation of groundwater potential using remote sensing, geographic information system, geoelectrical, and multi-criteria decision analysis techniques. The approach divides the available hydrologic and hydrogeologic data into two groups, exogenous (hydrologic) and endogenous (subsurface). A case study in Salboni Block, West Bengal (India), uses six thematic layers of exogenous parameters and four thematic layers of endogenous parameters. These thematic layers and their features were assigned suitable weights which were normalized by analytic hierarchy process and eigenvector techniques. The layers were then integrated using ArcGIS software to generate two groundwater potential maps. The hydrologic parameters-based groundwater potential zone map indicated that the `good' groundwater potential zone covers 27.14% of the area, the `moderate' zone 45.33%, and the `poor' zone 27.53%. A comparison of this map with the groundwater potential map based on subsurface parameters revealed that the hydrologic parameters-based map accurately delineates groundwater potential zones in about 59% of the area, and hence it is dependable to a certain extent. More than 80% of the study area has moderate-to-poor groundwater potential, which necessitates efficient groundwater management for long-term water security. Overall, the integrated technique is useful for the assessment of groundwater resources at a basin or sub-basin scale.
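
    A minimal sketch of the analytic hierarchy process step used to normalise the layer weights: the principal eigenvector of a pairwise comparison matrix gives the weights, and the consistency ratio checks that the judgments are not self-contradictory. The matrix below is illustrative, not the study's:

        import numpy as np

        # Pairwise comparisons for 3 hypothetical thematic layers on Saaty's
        # 1-9 scale (e.g. geology vs land use vs drainage density).
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        # Weights = principal eigenvector, normalised to sum to 1.
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()

        lam_max = vals.real[k]
        ci = (lam_max - 3) / (3 - 1)        # consistency index
        cr = ci / 0.58                      # 0.58 = random index for n = 3
        print("weights:", w.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable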

  20. Usability-driven pruning of large ontologies: the case of SNOMED CT

    PubMed Central

    Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-01-01

    Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217

  1. Semantic similarity-based alignment between clinical archetypes and SNOMED CT: an application to observations.

    PubMed

    Meizoso García, María; Iglesias Allones, José Luis; Martínez Hernández, Diego; Taboada Iglesias, María Jesús

    2012-08-01

    One of the main challenges of eHealth is semantic interoperability of health systems. But, this will only be possible if the capture, representation and access of patient data is standardized. Clinical data models, such as OpenEHR Archetypes, define data structures that are agreed by experts to ensure the accuracy of health information. In addition, they provide an option to normalize clinical data by means of binding terms used in the model definition to standard medical vocabularies. Nevertheless, the effort needed to establish the association between archetype terms and standard terminology concepts is considerable. Therefore, the purpose of this study is to provide an automated approach to bind OpenEHR archetypes terms to the external terminology SNOMED CT, with the capability to do it at a semantic level. This research uses lexical techniques and external terminological tools in combination with context-based techniques, which use information about structural and semantic proximity to identify similarities between terms and so, to find alignments between them. The proposed approach exploits both the structural context of archetypes and the terminology context, in which concepts are logically defined through the relationships (hierarchical and definitional) to other concepts. A set of 25 OBSERVATION archetypes with 477 bound terms was used to test the method. Of these, 342 terms (74.6%) were linked with 96.1% precision, 71.7% recall and 1.23 SNOMED CT concepts on average for each mapping. It has been detected that about one third of the archetype clinical information is grouped logically. Context-based techniques take advantage of this to increase the recall and to validate a 30.4% of the bindings produced by lexical techniques. This research shows that it is possible to automatically map archetype terms to a standard terminology with a high precision and recall, with the help of appropriate contextual and semantic information of both models. Moreover, the semantic-based methods provide a means of validating and disambiguating the resulting bindings. Therefore, this work is a step forward to reduce the human participation in the mapping process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  3. Faithful One-way Trip Deterministic Secure Quantum Communication Scheme Against Collective Rotating Noise Based on Order Rearrangement of Photon Pairs

    NASA Astrophysics Data System (ADS)

    Yuan, Hao; Zhang, Qin; Hong, Liang; Yin, Wen-jie; Xu, Dong

    2014-08-01

    We present a novel scheme for deterministic secure quantum communication (DSQC) over a collective rotating noisy channel. Four special two-qubit states are found to constitute a noise-free subspace, and so are utilized as quantum information carriers. In this scheme, the information carriers are transmitted over the quantum channel only once, which can effectively reduce the influence of other noise existing in the quantum channel. The information receiver need only perform two single-photon collective measurements to decode the secret messages, which makes the present scheme more convenient in practical applications. It will be shown that our scheme has a relatively high information capacity and intrinsic efficiency. Most importantly, the decoy photon pair checking technique and the order rearrangement of photon pairs technique guarantee that the present scheme is unconditionally secure.

  4. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  5. Recent advancements in nanoelectrodes and nanopipettes used in combined scanning electrochemical microscopy techniques.

    PubMed

    Kranz, Christine

    2014-01-21

    In recent years, major developments in scanning electrochemical microscopy (SECM) have significantly broadened the application range of this electroanalytical technique from high-resolution electrochemical imaging via nanoscale probes to large scale mapping using arrays of microelectrodes. A major driving force in advancing the SECM methodology is based on developing more sophisticated probes beyond conventional micro-disc electrodes usually based on noble metals or carbon microwires. This critical review focuses on the design and development of advanced electrochemical probes particularly enabling combinations of SECM with other analytical measurement techniques to provide information beyond exclusively measuring electrochemical sample properties. Consequently, this critical review will focus on recent progress and new developments towards multifunctional imaging.

  6. Rayleigh Scattering Diagnostic for Simultaneous Measurements of Dynamic Density and Velocity

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard G.; Panda, J.

    2000-01-01

    A flow diagnostic technique based on the molecular Rayleigh scattering of laser light is used to obtain dynamic density and velocity data in turbulent flows. The technique is based on analyzing the Rayleigh scattered light with a Fabry-Perot interferometer and recording information about the interference pattern with a multiple anode photomultiplier tube (PMT). An artificial neural network is used to process the signals from the PMT to recover the velocity time history, which is then used to calculate the velocity power spectrum. The technique is illustrated using simulated data. The results of an experiment to measure the velocity power spectrum in a low speed (100 m/sec) flow are also presented.

  7. Techniques for cesarean section.

    PubMed

    Hofmeyr, Justus G; Novikova, Natalia; Mathai, Matthews; Shah, Archana

    2009-11-01

    The effects of complete methods of cesarean section (CS) were compared. Meta-analysis of randomized controlled trials of intention to perform CS using different techniques was carried out. Joel-Cohen-based CS compared with Pfannenstiel CS was associated with reduced blood loss, operating time, time to oral intake, fever, duration of postoperative pain, analgesic injections, and time from skin incision to birth of the baby. Misgav-Ladach compared with the traditional method was associated with reduced blood loss, operating time, time to mobilization, and length of postoperative stay for the mother. Joel-Cohen-based methods have advantages compared with Pfannenstiel and traditional (lower midline) CS techniques. However, these trials do not provide information on serious and long-term outcomes.

  8. [An Introduction to A Newly-developed "Acupuncture Needle Manipulation Training-evaluation System" Based on Optical Motion Capture Technique].

    PubMed

    Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo

    2016-12-25

    In the present paper, the authors introduce a newly-developed "Acupuncture Needle Manipulation Training-evaluation System" based on the optical motion capture technique. It is composed of two parts, sensor and software, and overcomes some shortcomings of the mechanical motion capture technique. This device is able to analyze the data of operations of the pressing hand and needle-insertion hand during acupuncture performance, and its software contains personal computer (PC), Android, and Internetwork Operating System (IOS) Apple versions. It is competent in recording and analyzing information on any operator's needling manipulations, and is quite helpful for teachers in teaching, training and examining students in clinical practice.

  9. A literature review on anesthetic practice for carotid endarterectomy surgery based on cost, hemodynamic stability, and neurologic status.

    PubMed

    Meitzner, Mark C; Skurnowicz, Julie A; Mitchell, Anne

    2007-06-01

    An extensive literature review was undertaken to evaluate the best anesthetic practice for carotid endarterectomy surgery. Two anesthetic techniques were evaluated: general anesthetic with an endotracheal tube and regional anesthetic block. Three variables were reviewed with respect to significant clinical outcomes based on anesthetic technique. Relevant literature was obtained through multiple sources that included professional journals, a professional website, and textbooks. According to the literature, there is an advantage to performing regional anesthesia with respect to cost and neurologic status. Information analyzed was inconclusive with respect to hemodynamic stability and anesthetic technique. We conclude that regional anesthesia may have some slight advantages; however, more investigation is warranted.

  10. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    PubMed

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.

  11. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors offers a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher quality information in order to produce reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain) was chosen, Teide National Park, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Different classification algorithms were then applied using pixel-based and object-based approaches, and an accuracy assessment of the different thematic maps obtained was performed. The highest classification accuracy was obtained by applying the Support Vector Machine classifier with the object-based approach to the Weighted Wavelet `à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and the small size of the species. It is therefore important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.
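
    For readers unfamiliar with pansharpening, a sketch of one of the simplest component-substitution fusions, a Brovey-style transform (which is not among the seven techniques evaluated in the study):

        import numpy as np

        def brovey_pansharpen(ms, pan):
            """Brovey-style fusion: rescale each multispectral band so the band
            sum matches the higher-resolution panchromatic intensity.

            ms:  (H, W, B) multispectral image already resampled to the pan grid.
            pan: (H, W) panchromatic band.
            """
            intensity = ms.sum(axis=2) + 1e-9          # avoid division by zero
            return ms * (pan / intensity)[..., None]

        # Toy 4x4 example with 3 bands.
        rng = np.random.default_rng(0)
        ms = rng.uniform(0.1, 0.5, size=(4, 4, 3))
        pan = rng.uniform(0.3, 1.0, size=(4, 4))
        fused = brovey_pansharpen(ms, pan)
        print(np.allclose(fused.sum(axis=2), pan))     # band sums now match the pan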

  12. Exploration of Deaf People's Health Information Sources and Techniques for Information Delivery in Cape Town: A Qualitative Study for the Design and Development of a Mobile Health App.

    PubMed

    Chininthorn, Prangnat; Glaser, Meryl; Tucker, William David; Diehl, Jan Carel

    2016-11-11

    Many cultural and linguistic Deaf people in South Africa face disparity when accessing health information because of social and language barriers. The number of certified South African Sign Language interpreters (SASLIs) is also insufficient to meet the demand of the Deaf population in the country. Our research team, in collaboration with the Deaf communities in Cape Town, devised a mobile health app called SignSupport to bridge the communication gaps in health care contexts. We consequently plan to extend our work with a Health Knowledge Transfer System (HKTS) to provide Deaf people with accessible, understandable, and accurate health information. We conducted an explorative study to prepare the groundwork for the design and development of the system. To investigate the current modes of health information distributed to Deaf people in Cape Town, identify the health information sources Deaf people prefer and their reasons, and define effective techniques for delivering understandable information to generate the groundwork for the mobile health app development with and for Deaf people. A qualitative methodology using semistructured interviews with sensitizing tools was used in a community-based codesign setting. A total of 23 Deaf people and 10 health professionals participated in this study. Inductive and deductive coding was used for the analysis. Deaf people currently have access to 4 modes of health information distribution through: Deaf and other relevant organizations, hearing health professionals, personal interactions, and the mass media. Their preferred and accessible sources are those delivering information in signed language and with communication techniques that match Deaf people's communication needs. Accessible and accurate health information can be delivered to Deaf people by 3 effective techniques: using signed language including its dialects, through health drama with its combined techniques, and accompanying the information with pictures in combination with simple text descriptions. We can apply the knowledge gained from this exploration to build the groundwork of the mobile health information system. We see an opportunity to design an HKTS to assist the information delivery during the patient-health professional interactions in primary health care settings. Deaf people want to understand the information relevant to their diagnosed disease and its self-management. The 3 identified effective techniques will be applied to deliver health information through the mobile health app. ©Prangnat Chininthorn, Meryl Glaser, William David Tucker, Jan Carel Diehl. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 11.11.2016.

  13. Finding clusters of similar events within clinical incident reports: a novel methodology combining case based reasoning and information retrieval

    PubMed Central

    Tsatsoulis, C; Amthauer, H

    2003-01-01

    A novel methodological approach for identifying clusters of similar medical incidents by analyzing large databases of incident reports is described. The discovery of similar events allows the identification of patterns and trends, and makes possible the prediction of future events and the establishment of barriers and best practices. Two techniques from the fields of information science and artificial intelligence have been integrated—namely, case based reasoning and information retrieval—and very good clustering accuracies have been achieved on a test data set of incident reports from transfusion medicine. This work suggests that clustering should integrate the features of an incident captured in traditional form based records together with the detailed information found in the narrative included in event reports. PMID:14645892
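
    The paper's method is not reproduced here; the following minimal sketch shows how a case based reasoning score over structured fields might be combined with an information retrieval score over the narrative, with invented fields and weights:

        import math
        from collections import Counter

        def text_similarity(a, b):
            """Cosine similarity of bag-of-words vectors (the IR component)."""
            va, vb = Counter(a.lower().split()), Counter(b.lower().split())
            dot = sum(va[t] * vb[t] for t in va)
            na = math.sqrt(sum(c * c for c in va.values()))
            nb = math.sqrt(sum(c * c for c in vb.values()))
            return dot / (na * nb)

        def feature_similarity(a, b):
            """Fraction of matching structured fields (the CBR component)."""
            keys = set(a) | set(b)
            return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

        def incident_similarity(a, b, w_text=0.5):
            return w_text * text_similarity(a["narrative"], b["narrative"]) \
                + (1 - w_text) * feature_similarity(a["fields"], b["fields"])

        r1 = {"fields": {"stage": "labeling", "severity": "near-miss"},
              "narrative": "unit mislabeled at bedside before transfusion"}
        r2 = {"fields": {"stage": "labeling", "severity": "event"},
              "narrative": "sample mislabeled before transfusion"}
        print(round(incident_similarity(r1, r2), 2))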

  14. An Intelligent Content Discovery Technique for Health Portal Content Management

    PubMed Central

    2014-01-01

    Background Continuous content management of health information portals is a feature vital for its sustainability and widespread acceptance. Knowledge and experience of a domain expert is essential for content management in the health domain. The rate of generation of online health resources is exponential and thereby manual examination for relevance to a specific topic and audience is a formidable challenge for domain experts. Intelligent content discovery for effective content management is a less researched topic. An existing expert-endorsed content repository can provide the necessary leverage to automatically identify relevant resources and evaluate qualitative metrics. Objective This paper reports on the design research towards an intelligent technique for automated content discovery and ranking for health information portals. The proposed technique aims to improve efficiency of the current mostly manual process of portal content management by utilising an existing expert-endorsed content repository as a supporting base and a benchmark to evaluate the suitability of new content. Methods A model for content management was established based on a field study of potential users. The proposed technique is integral to this content management model and executes in several phases (ie, query construction, content search, text analytics and fuzzy multi-criteria ranking). The construction of multi-dimensional search queries with input from Wordnet, the use of multi-word and single-word terms as representative semantics for text analytics and the use of fuzzy multi-criteria ranking for subjective evaluation of quality metrics are original contributions reported in this paper. Results The feasibility of the proposed technique was examined with experiments conducted on an actual health information portal, the BCKOnline portal. Both intermediary and final results generated by the technique are presented in the paper and these help to establish benefits of the technique and its contribution towards effective content management. Conclusions The prevalence of large numbers of online health resources is a key obstacle for domain experts involved in content management of health information portals and websites. The proposed technique has proven successful at search and identification of resources and the measurement of their relevance. It can be used to support the domain expert in content management and thereby ensure the health portal is up-to-date and current. PMID:25654440

  15. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which has been developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
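
    A toy version of the token-based structural comparison, assuming a Multiset-style overlap score between the token multisets of a student's equation and the reference step (the real tokenizer and scoring are more elaborate):

        from collections import Counter

        def tokenize(equation):
            """Split an equation string into single-character tokens, keeping
            multi-digit numbers together (a simplification of a real tokenizer)."""
            tokens, num = [], ""
            for ch in equation.replace(" ", ""):
                if ch.isdigit():
                    num += ch
                else:
                    if num: tokens.append(num); num = ""
                    tokens.append(ch)
            if num: tokens.append(num)
            return tokens

        def multiset_similarity(step, reference):
            """Score a student's equation against the reference step by comparing
            token multisets (structural, not just mathematical, equivalence)."""
            a, b = Counter(tokenize(step)), Counter(tokenize(reference))
            overlap = sum((a & b).values())
            return 2 * overlap / (sum(a.values()) + sum(b.values()))

        print(multiset_similarity("2x+4=10", "2x + 4 = 10"))   # 1.0, same structure
        print(multiset_similarity("2x=6", "2x + 4 = 10"))      # partial credit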

  16. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired class oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space where feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described under two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the second class. By selecting a few eigenvectors which are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computation burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
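
    A compact sketch of the Fukunaga-Koontz construction for two classes: whiten the summed autocorrelation matrix, exploit the fact that the whitened class matrices share eigenvectors with complementary eigenvalues (lam1 + lam2 = 1), and keep the eigenvectors most dominated by the desired class. The data below are synthetic:

        import numpy as np

        def fukunaga_koontz_basis(X1, X2, k=2):
            """Return the k basis vectors that best represent class 1 while
            carrying the least energy of class 2.

            X1, X2: (n_samples, n_features) data for the two classes.
            """
            R1 = X1.T @ X1 / len(X1)             # class autocorrelation matrices
            R2 = X2.T @ X2 / len(X2)
            vals, P = np.linalg.eigh(R1 + R2)    # whiten the summed matrix
            W = P @ np.diag(vals ** -0.5)
            # In the whitened space the two classes share eigenvectors, with
            # complementary eigenvalues: lam1 + lam2 = 1 for each vector.
            lam1, V = np.linalg.eigh(W.T @ R1 @ W)
            order = np.argsort(lam1)[::-1]       # most class-1-dominant first
            return (W @ V)[:, order[:k]]

        rng = np.random.default_rng(0)
        target = rng.normal(0, 1, (200, 10)) * np.r_[np.full(3, 3.0), np.ones(7)]
        clutter = rng.normal(0, 1, (200, 10)) * np.r_[np.ones(7), np.full(3, 3.0)]
        B = fukunaga_koontz_basis(target, clutter, k=3)
        print("projected target energy:", np.linalg.norm(target @ B) ** 2 / 200)
        print("projected clutter energy:", np.linalg.norm(clutter @ B) ** 2 / 200)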

  17. Rule groupings in expert systems using nearest neighbour decision rules, and convex hulls

    NASA Technical Reports Server (NTRS)

    Anastasiadis, Stergios

    1991-01-01

    Expert System shells are lacking in many areas of software engineering. Large rule based systems are not semantically comprehensible, difficult to debug, and impossible to modify or validate. Partitioning a set of rules found in CLIPS (C Language Integrated Production System) into groups of rules which reflect the underlying semantic subdomains of the problem, will address adequately the concerns stated above. Techniques are introduced to structure a CLIPS rule base into groups of rules that inherently have common semantic information. The concepts involved are imported from the field of A.I., Pattern Recognition, and Statistical Inference. Techniques focus on the areas of feature selection, classification, and a criteria of how 'good' the classification technique is, based on Bayesian Decision Theory. A variety of distance metrics are discussed for measuring the 'closeness' of CLIPS rules and various Nearest Neighbor classification algorithms are described based on the above metric.

  18. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  19. IoT security with one-time pad secure algorithm based on the double memory technique

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelny, Michał; Grobelna, Iwona; Bazydło, Grzegorz

    2017-11-01

    Secure encryption of data in Internet of Things is especially important as many information is exchanged every day and the number of attack vectors on IoT elements still increases. In the paper a novel symmetric encryption method is proposed. The idea bases on the one-time pad technique. The proposed solution applies double memory concept to secure transmitted data. The presented algorithm is considered as a part of communication protocol and it has been initially validated against known security issues.

  20. Man-machine interface issues in space telerobotics: A JPL research and development program

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1987-01-01

    Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presentd on control and information problems in space teleoperation and on the characteristics of Earth orbital teleoperation. The JPL R and D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R and D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between human operator and telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.

  1. Two-dimensional fringe probing of transient liquid temperatures in a mini space.

    PubMed

    Xue, Zhenlan; Qiu, Huihe

    2011-05-01

    A 2D fringe probing transient temperature measurement technique based on photothermal deflection theory was developed. It utilizes material's refractive index dependence on temperature gradient to obtain temperature information from laser deflection. Instead of single beam, this method applies multiple laser beams to obtain 2D temperature information. The laser fringe was generated with a Mach-Zehnder interferometer. A transient heating experiment was conducted using an electric wire to demonstrate this technique. Temperature field around a heating wire and variation with time was obtained utilizing the scattering fringe patterns. This technique provides non-invasive 2D temperature measurements with spatial and temporal resolutions of 3.5 μm and 4 ms, respectively. It is possible to achieve temporal resolution to 500 μs utilizing the existing high speed camera.

  2. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  3. Automatic Recommendations for E-Learning Personalization Based on Web Usage Mining Techniques and Information Retrieval

    ERIC Educational Resources Information Center

    Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa

    2009-01-01

    In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…

  4. Use of GIS-based Site-specific Nitrogen Management for Improving Energy Efficiency

    USDA-ARS?s Scientific Manuscript database

    To our knowledge, geographical information system (GIS)-based site-specific nitrogen management (SSNM) techniques have not been used to assess agricultural energy costs and efficiency. This chapter uses SSNM case studies for corn (Zea mays L.) grown in Missouri and cotton (Gossypium hirsutum L.) gro...

  5. Developing and Assessing E-Learning Techniques for Teaching Forecasting

    ERIC Educational Resources Information Center

    Gel, Yulia R.; O'Hara Hines, R. Jeanette; Chen, He; Noguchi, Kimihiro; Schoner, Vivian

    2014-01-01

    In the modern business environment, managers are increasingly required to perform decision making and evaluate related risks based on quantitative information in the face of uncertainty, which in turn increases demand for business professionals with sound skills and hands-on experience with statistical data analysis. Computer-based training…

  6. The Amateur Scientist: Simple Optical Experiments in Which Spatial Filtering Removes the "Noise" from Pictures.

    ERIC Educational Resources Information Center

    Walker, Jearl

    1982-01-01

    Spatial filtering, based on diffraction/interference of light waves, is a technique by which unwanted information in a picture ("noise") can be separated from wanted information. A series of experiments is described in which students can create a system that functions as an optical computer to create clearer pictures. (Author/JN)

  7. 78 FR 29439 - Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-20

    ... correspondence. During the comment period, comments may be viewed online through the FDMS. FOR FURTHER... techniques or the use of other forms of information technology. Title: Request for Details of Expenses, VA... current rate of pension. Pension is an income-based program, and the payable rate depends on the claimant...

  8. 75 FR 45205 - Proposed Information Collection (Request for Details of Expenses) Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-02

    ... correspondence. During the comment period, comments may be viewed online through the FDMS. FOR FURTHER... techniques or the use of other forms of information technology. Title: Request for Details of Expenses, VA... current rate of pension. Pension is an income-based program, and the payable rate depends on the claimant...

  9. Multidimensional Unfolding by Nonmetric Multidimensional Scaling of Spearman Distances in the Extended Permutation Polytope

    ERIC Educational Resources Information Center

    Van Deun, Katrijn; Heiser, Willem J.; Delbeke, Luc

    2007-01-01

    A multidimensional unfolding technique that is not prone to degenerate solutions and is based on multidimensional scaling of a complete data matrix is proposed: distance information about the unfolding data and about the distances both among judges and among objects is included in the complete matrix. The latter information is derived from the…

  10. Learner-Centred Pedagogy for Swim Coaching: A Complex Learning Theory-Informed Approach

    ERIC Educational Resources Information Center

    Light, Richard

    2014-01-01

    While constructivist theories of learning have been widely drawn on to understand and explain learning in games when using game-based approaches their use to inform pedagogy beyond games is limited. In particular, there has been little interest in applying constructivist perspectives on learning to sports in which technique is of prime importance.…

  11. Work-Based Learning: Effectiveness in Information Systems Training and Development

    ERIC Educational Resources Information Center

    Walters, David

    2006-01-01

    The ability to use methodologies is an essential ingredient in the teaching of Information System techniques and approaches. One method to achieve this is to use a practical approach where students undertake "live" projects with local client organisations. They can then reflect on the approach adopted with the aim of producing a "reflective"…

  12. Outcome-Based Practice: Disclosure Rates of Child Sexual Abuse Comparing Allegation Blind and Allegation Informed Structured Interviews.

    ERIC Educational Resources Information Center

    Cantlon, Julie; And Others

    1996-01-01

    This study evaluated the use of "allegation blind" interviews (in which interviewers did not know the specific allegation involved) with children (n=1535) suspected of being victims of child sexual abuse. The "allegation blind" interview technique yielded a statistically higher disclosure rate than the "allegation informed" interviews. (Author/DB)

  13. Use of Mobile Phones for Project Based Learning by Undergraduate Students of Nigerian Private Universities

    ERIC Educational Resources Information Center

    Utulu, Samuel C.; Alonge, Ayodele

    2012-01-01

    A university's objective is to educate its students using information and communication technologies (ICTs) and teaching techniques that would enable its graduates become flexible and life-long learners that can easily adapt to the changes eminent in the information society. Achieving this aim requires among other factors, the adoption of…

  14. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited accuracy but relatively costless auxiliary simulator we can effectively fill-in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Based on the previous work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerate the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.

  15. PAPR reduction in FBMC using an ACE-based linear programming optimization

    NASA Astrophysics Data System (ADS)

    van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan

    2014-12-01

    This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends on current PAPR reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as the main contribution. The four techniques introduced can be split up into two: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques on the other hand can handle symbol by symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve, on current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated. As a result, it is shown that the interference can be compensated for, whilst still maintaining decent PAPR performance. Additional results are also provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based technique proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.

  16. Summary of vulnerability related technologies based on machine learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Chen, Zhihao; Jia, Qiong

    2018-04-01

    As the scale of information system increases by an order of magnitude, the complexity of system software is getting higher. The vulnerability interaction from design, development and deployment to implementation stages greatly increases the risk of the entire information system being attacked successfully. Considering the limitations and lags of the existing mainstream security vulnerability detection techniques, this paper summarizes the development and current status of related technologies based on the machine learning methods applied to deal with massive and irregular data, and handling security vulnerabilities.

  17. Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.

    PubMed

    Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio

    2009-12-01

    Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data reposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.

  18. Image Hashes as Templates for Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.

    2012-07-17

    Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out-of-hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images,more » and assessing the security, sensitivity, and robustness of verification using such templates, are needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption-utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing—which, strictly speaking, is not truly cryptographic hashing—has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available out-of-IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.« less

  19. Nonlinear ultrasonics for material state awareness

    NASA Astrophysics Data System (ADS)

    Jacobs, L. J.

    2014-02-01

    Predictive health monitoring of structural components will require the development of advanced sensing techniques capable of providing quantitative information on the damage state of structural materials. By focusing on nonlinear acoustic techniques, it is possible to measure absolute, strength based material parameters that can then be coupled with uncertainty models to enable accurate and quantitative life prediction. Starting at the material level, this review will present current research that involves a combination of sensing techniques and physics-based models to characterize damage in metallic materials. In metals, these nonlinear ultrasonic measurements can sense material state, before the formation of micro- and macro-cracks. Typically, cracks of a measurable size appear quite late in a component's total life, while the material's integrity in terms of toughness and strength gradually decreases due to the microplasticity (dislocations) and associated change in the material's microstructure. This review focuses on second harmonic generation techniques. Since these nonlinear acoustic techniques are acoustic wave based, component interrogation can be performed with bulk, surface and guided waves using the same underlying material physics; these nonlinear ultrasonic techniques provide results which are independent of the wave type used. Recent physics-based models consider the evolution of damage due to dislocations, slip bands, interstitials, and precipitates in the lattice structure, which can lead to localized damage.

  20. Rule-based statistical data mining agents for an e-commerce application

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar

    2003-03-01

    Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented uisng Java servlets and an Oracle81 database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.

  1. [A review on the advancement of internet-based public health surveillance program].

    PubMed

    Zhao, Y Q; Ma, W J

    2017-02-10

    Internet data is introduced into public health arena under the features of fast updating and tremendous volume. Mining and analyzing internet data, researchers can model the internet-based surveillance system to assess the distribution of health-related events. There are two main types of internet-based surveillance systems, i.e. active and passive, which are distinguished by the sources of information. Through passive surveillance system, information is collected from search engine and social media while the active system gathers information through provision of the volunteers. Except for serving as a real-time and convenient complementary approach to traditional disease, food safety and adverse drug reaction surveillance program, Internet-based surveillance system can also play a role in health-related behavior surveillance and policy evaluation. Although several techniques have been applied to filter information, the accuracy of internet-based surveillance system is still bothered by the false positive information. In this article, we have summarized the development and application of internet-based surveillance system in public health to provide reference for a better surveillance program in China.

  2. [Application of electronic fence technology based on GIS in Oncomelania hupensis snail monitoring].

    PubMed

    Zhi-Hua, Chen; Yi-Sheng, Zhu; Zhi-Qiang, Xue; Xue-Bing, Li; Yi-Min, Ding; Li-Jun, Bi; Kai-Min, Gao; You, Zhang

    2017-07-27

    To study the application of Geographic Information System (GIS) electronic fence technique in Oncomelania hupensis snail monitoring. The electronic fence was set around the history and existing snail environments in the electronic map, the information about snail monitoring and controlling was linked to the electronic fence, and the snail monitoring information system was established on these bases. The monitoring information was input through the computer and smart phone. The electronic fence around the history and existing snail environments was set in the electronic map (Baidu map), and the snail monitoring information system and smart phone APP were established. The monitoring information was input and upload real-time, and the snail monitoring information was demonstrated in real time on Baidu map. By using the electronic fence technology based on GIS, the unique "environment electronic archives" for each snail monitoring environment can be established in the electronic map, and real-time, dynamic monitoring and visual management can be realized.

  3. Secure biometric image sensor and authentication scheme based on compressed sensing.

    PubMed

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  4. Surface-Water Techniques: On Demand Training Opportunities

    USGS Publications Warehouse

    ,

    2007-01-01

    The U.S. Geological Survey (USGS) has been collecting streamflow information since 1889 using nationally consistent methods. The need for such information was envisioned by John Wesley Powell as a key component for settlement of the arid western United States. Because of Powell?s vision the nation now has a rich streamflow data base that can be analyzed with confidence in both space and time. This means that data collected at a stream gaging station in Maine in 1903 can be compared to data collected in 2007 at the same gage in Maine or at a different gage in California. Such comparisons are becoming increasingly important as we work to assess climate variability and anthropogenic effects on streamflow. Training employees in proper and consistent techniques to collect and analyze streamflow data forms a cornerstone for maintaining the integrity of this rich data base.

  5. Patch-Based Super-Resolution of MR Spectroscopic Images: Application to Multiple Sclerosis

    PubMed Central

    Jain, Saurabh; Sima, Diana M.; Sanaei Nezhad, Faezeh; Hangel, Gilbert; Bogner, Wolfgang; Williams, Stephen; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2017-01-01

    Purpose: Magnetic resonance spectroscopic imaging (MRSI) provides complementary information to conventional magnetic resonance imaging. Acquiring high resolution MRSI is time consuming and requires complex reconstruction techniques. Methods: In this paper, a patch-based super-resolution method is presented to increase the spatial resolution of metabolite maps computed from MRSI. The proposed method uses high resolution anatomical MR images (T1-weighted and Fluid-attenuated inversion recovery) to regularize the super-resolution process. The accuracy of the method is validated against conventional interpolation techniques using a phantom, as well as simulated and in vivo acquired human brain images of multiple sclerosis subjects. Results: The method preserves tissue contrast and structural information, and matches well with the trend of acquired high resolution MRSI. Conclusions: These results suggest that the method has potential for clinically relevant neuroimaging applications. PMID:28197066

  6. Artificial intelligence within the chemical laboratory.

    PubMed

    Winkel, P

    1994-01-01

    Various techniques within the area of artificial intelligence such as expert systems and neural networks may play a role during the problem-solving processes within the clinical biochemical laboratory. Neural network analysis provides a non-algorithmic approach to information processing, which results in the ability of the computer to form associations and to recognize patterns or classes among data. It belongs to the machine learning techniques which also include probabilistic techniques such as discriminant function analysis and logistic regression and information theoretical techniques. These techniques may be used to extract knowledge from example patients to optimize decision limits and identify clinically important laboratory quantities. An expert system may be defined as a computer program that can give advice in a well-defined area of expertise and is able to explain its reasoning. Declarative knowledge consists of statements about logical or empirical relationships between things. Expert systems typically separate declarative knowledge residing in a knowledge base from the inference engine: an algorithm that dynamically directs and controls the system when it searches its knowledge base. A tool is an expert system without a knowledge base. The developer of an expert system uses a tool by entering knowledge into the system. Many, if not the majority of problems encountered at the laboratory level are procedural. A problem is procedural if it is possible to write up a step-by-step description of the expert's work or if it can be represented by a decision tree. To solve problems of this type only small expert system tools and/or conventional programming are required.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. A Novel Image Steganography Technique for Secured Online Transaction Using DWT and Visual Cryptography

    NASA Astrophysics Data System (ADS)

    Anitha Devi, M. D.; ShivaKumar, K. B.

    2017-08-01

    Online payment eco system is the main target especially for cyber frauds. Therefore end to end encryption is very much needed in order to maintain the integrity of secret information related to transactions carried online. With access to payment related sensitive information, which enables lot of money transactions every day, the payment infrastructure is a major target for hackers. The proposed system highlights, an ideal approach for secure online transaction for fund transfer with a unique combination of visual cryptography and Haar based discrete wavelet transform steganography technique. This combination of data hiding technique reduces the amount of information shared between consumer and online merchant needed for successful online transaction along with providing enhanced security to customer’s account details and thereby increasing customer’s confidence preventing “Identity theft” and “Phishing”. To evaluate the effectiveness of proposed algorithm Root mean square error, Peak signal to noise ratio have been used as evaluation parameters

  8. Photogrammetry for Archaeology: Collecting Pieces Together

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.

    2018-05-01

    The complexity of retrieving and understanding the archaeological data requires to apply different techniques, tools and sensors for information gathering, processing and documenting. Archaeological research now has the interdisciplinary nature involving technologies based on different physical principles for retrieving information about archaeological findings. The important part of archaeological data is visual and spatial information which allows reconstructing the appearance of the findings and relation between them. Photogrammetry has a great potential for accurate acquiring of spatial and visual data of different scale and resolution allowing to create archaeological documents of new type and quality. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, a pipeline for their producing and collecting in one holistic model, describing an archaeological site. A set of techniques is developed for acquiring and integration of spatial and visual data of different level of details. The application of the developed techniques is demonstrated for documenting of Bosporus archaeological expedition of Russian State Historical Museum.

  9. Radar-raingauge data combination techniques: a revision and analysis of their suitability for urban hydrology.

    PubMed

    Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo

    2013-01-01

    The applicability of the operational radar and raingauge networks for urban hydrology is insufficient. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, respectively based upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as case study an urban catchment (∼8.65 km(2)) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.

  10. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

    The semantically structure of knowledge, based on ontology approaches have been increasingly adopted by several expertise from diverse domains. Recently ontologies have been moved from the philosophical and metaphysics disciplines to be used in the construction of models to describe a specific theory of a domain. The development and the use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems the use of an ontology to formalize and represent the concepts of security information challenge the mechanisms and techniques currently used. This paper intends to present a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts, defined in a subset of the information security domain.

  11. Crosstalk: The Journal of Defense Software Engineering. Volume 22, Number 3

    DTIC Science & Technology

    2009-04-01

    international standard for information security management systems like ISO /IEC 27001 :2005 [1] existed. Since that time, the organization has developed control...of ISO /IEC 27001 and the desire to make decisions based on business value and risk has prompted Ford’s IT Security and Controls organi- zation to begin...their conventional application security operation.u References 1. ISO /IEC 27001 :2005. “Information Technology – Security Techniques – Information

  12. An ensemble framework for clustering protein-protein interaction networks.

    PubMed

    Asur, Sitaram; Ucar, Duygu; Parthasarathy, Srinivasan

    2007-07-01

    Protein-Protein Interaction (PPI) networks are believed to be important sources of information related to biological processes and complex metabolic functions of the cell. The presence of biologically relevant functional modules in these networks has been theorized by many researchers. However, the application of traditional clustering algorithms for extracting these modules has not been successful, largely due to the presence of noisy false positive interactions as well as specific topological challenges in the network. In this article, we propose an ensemble clustering framework to address this problem. For base clustering, we introduce two topology-based distance metrics to counteract the effects of noise. We develop a PCA-based consensus clustering technique, designed to reduce the dimensionality of the consensus problem and yield informative clusters. We also develop a soft consensus clustering variant to assign multifaceted proteins to multiple functional groups. We conduct an empirical evaluation of different consensus techniques using topology-based, information theoretic and domain-specific validation metrics and show that our approaches can provide significant benefits over other state-of-the-art approaches. Our analysis of the consensus clusters obtained demonstrates that ensemble clustering can (a) produce improved biologically significant functional groupings; and (b) facilitate soft clustering by discovering multiple functional associations for proteins. Supplementary data are available at Bioinformatics online.

  13. Verifying the secure setup of Unix client/servers and detection of network intrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feingold, R.; Bruestle, H.R.; Bartoletti, T.

    1995-07-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today`s global ``Infosphere`` presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like store, transmit, and process information. This information is now recognized as an asset to governments, corporations, and individuals alike. This information must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to checkmore » on their security configuration. SPI`s broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI`s use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID`s retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks to their systems and networks.« less

  14. A new Information publishing system Based on Internet of things

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Ma, Guoguang

    2018-03-01

    A new information publishing system based on Internet of things is proposed, which is composed of four level hierarchical structure, including the screen identification layer, the network transport layer, the service management layer and the publishing application layer. In the architecture, the screen identification layer has realized the internet of screens in which geographically dispersed independent screens are connected to the internet by the customized set-top boxes. The service management layer uses MQTT protocol to implement a lightweight broker-based publish/subscribe messaging mechanism in constrained environments such as internet of things to solve the bandwidth bottleneck. Meanwhile the cloud-based storage technique is used to storage and manage the promptly increasing multimedia publishing information. The paper has designed and realized a prototype SzIoScreen, and give some related test results.

  15. Using cognitive task analysis to develop simulation-based training for medical tasks.

    PubMed

    Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette

    2013-10-01

    Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks-cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  16. Enhanced automatic artifact detection based on independent component analysis and Renyi's entropy.

    PubMed

    Mammone, Nadia; Morabito, Francesco Carlo

    2008-09-01

    Artifacts are disturbances that may occur during signal acquisition and may affect their processing. The aim of this paper is to propose a technique for automatically detecting artifacts from the electroencephalographic (EEG) recordings. In particular, a technique based on both Independent Component Analysis (ICA) to extract artifactual signals and on Renyi's entropy to automatically detect them is presented. This technique is compared to the widely known approach based on ICA and the joint use of kurtosis and Shannon's entropy. The novel processing technique is shown to detect on average 92.6% of the artifactual signals against the average 68.7% of the previous technique on the studied available database. Moreover, Renyi's entropy is shown to be able to detect muscle and very low frequency activity as well as to discriminate them from other kinds of artifacts. In order to achieve an efficient rejection of the artifacts while minimizing the information loss, future efforts will be devoted to the improvement of blind artifact separation from EEG in order to ensure a very efficient isolation of the artifactual activity from any signals deriving from other brain tasks.

  17. Stable adaptive PI control for permanent magnet synchronous motor drive based on improved JITL technique.

    PubMed

    Zheng, Shiqi; Tang, Xiaoqi; Song, Bao; Lu, Shaowu; Ye, Bosheng

    2013-07-01

    In this paper, a stable adaptive PI control strategy based on the improved just-in-time learning (IJITL) technique is proposed for permanent magnet synchronous motor (PMSM) drive. Firstly, the traditional JITL technique is improved. The new IJITL technique has less computational burden and is more suitable for online identification of the PMSM drive system which is highly real-time compared to traditional JITL. In this way, the PMSM drive system is identified by IJITL technique, which provides information to an adaptive PI controller. Secondly, the adaptive PI controller is designed in discrete time domain which is composed of a PI controller and a supervisory controller. The PI controller is capable of automatically online tuning the control gains based on the gradient descent method and the supervisory controller is developed to eliminate the effect of the approximation error introduced by the PI controller upon the system stability in the Lyapunov sense. Finally, experimental results on the PMSM drive system show accurate identification and favorable tracking performance. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  19. Distributed acoustic sensing technique and its field trial in SAGD well

    NASA Astrophysics Data System (ADS)

    Han, Li; He, Xiangge; Pan, Yong; Liu, Fei; Yi, Duo; Hu, Chengjun; Zhang, Min; Gu, Lijuan

    2017-10-01

    Steam assisted gravity drainage (SAGD) is a very promising way for the development of heavy oil, extra heavy oil and tight oil reservoirs. Proper monitoring of the SAGD operations is essential to avoid operational issues and improve efficiency. Among all the monitoring techniques, micro-seismic monitoring and related interpretation method can give useful information about the steam chamber development and has been extensively studied. Distributed acoustic sensor (DAS) based on Rayleigh backscattering is a newly developed technique that can measure acoustic signal at all points along the sensing fiber. In this paper, we demonstrate a DAS system based on dual-pulse heterodyne demodulation technique and did field trial in SAGD well located in Xinjiang Oilfield, China. The field trail results validated the performance of the DAS system and indicated its applicability in steam-chamber monitoring and hydraulic monitoring.

  20. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: Random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196

  1. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will pay an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applied image remapping, feature-based image segmentation to determine regions of similar entropy and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.

  2. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.

  3. The influence of a biopsychosocial educational internet-based intervention on pain, dysfunction, quality of life, and pain cognition in chronic low back pain patients in primary care: a mixed methods approach.

    PubMed

    Valenzuela-Pascual, Fran; Molina, Fidel; Corbi, Francisco; Blanco-Blanco, Joan; Gil, Rosa M; Soler-Gonzalez, Jorge

    2015-11-23

    Low back pain is the highest reported musculoskeletal problem worldwide. Up to 90 % of patients with low back pain have no clear explanation for the source and origin of their pain. These individuals commonly receive a diagnosis of non-specific low back pain. Patient education is a way to provide information and advice aimed at changing patients' cognition and knowledge about their chronic state through the reduction of fear of anticipatory outcomes and the resumption of normal activities. Information technology and the expedited communication processes associated with this technology can be used to deliver health care information to patients. Hence, this technology and its ability to deliver life-changing information has grown as a powerful and alternative health promotion tool. Several studies have demonstrated that websites can change and improve chronic patients' knowledge and have a positive impact on patients' attitudes and behaviors. The aim of this project is to identify chronic low back pain patients' beliefs about the origin and meaning of pain to develop a web-based educational tool using different educational formats and gamification techniques. This study has a mixed-method sequential exploratory design. The participants are chronic low back pain patients between 18-65 years of age who are attending a primary care setting. For the qualitative phase, subjects will be contacted by their family physician and invited to participate in a personal semi-structured interview. The quantitative phase will be a randomized controlled trial. Subjects will be randomly allocated using a simple random sample technique. The intervention group will be provided access to the web site where they will find information related to their chronic low back pain. This information will be provided in different formats. All of this material will be based on the information obtained in the qualitative phase. The control group will follow conventional treatment provided by their family physician. The main outcome of this project is to identify chronic low back pain patients' beliefs about the origin and meaning of pain to develop a web-based educational tool using different educational formats and gamification techniques. ClinicalTrials.gov NCT02369120 Date: 02/20/2015.

  4. Systems and methods for analyzing building operations sensor data

    DOEpatents

    Mezic, Igor; Eisenhower, Bryan A.

    2015-05-26

    Systems and methods are disclosed for analyzing building sensor information and decomposing the information therein into a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
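
    The spectral part of such an analysis can be sketched with a dynamic mode decomposition (DMD), one common Koopman-based technique; the sensor data below are synthetic and the rank truncation is an assumption, so this is an illustration of the idea rather than the patent's method.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(0, 240)                        # hourly samples, 10 days
        daily = np.sin(2 * np.pi * t / 24.0)         # diurnal cycle
        # two synthetic "sensors" sharing the daily rhythm, plus noise
        X = np.vstack([3.0 * daily + 0.1 * rng.standard_normal(t.size),
                       1.5 * daily + 20 + 0.1 * rng.standard_normal(t.size)])

        X1, X2 = X[:, :-1], X[:, 1:]                 # snapshot pairs x_k -> x_{k+1}
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        r = 2                                        # truncation rank (assumed)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
        A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s) # reduced linear operator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = X2 @ Vh.T @ np.diag(1.0 / s) @ W     # DMD modes (sensor patterns)

        # Eigenvalue phase gives each mode's oscillation period in samples;
        # a near-zero angle indicates a non-oscillatory (trend) mode.
        periods = 2 * np.pi / np.maximum(np.abs(np.angle(eigvals)), 1e-9)
        print("mode periods (hours):", np.round(periods, 1))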

  5. Improved Anvil Forecasting

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2000-01-01

    This report describes the outcome of Phase 1 of the AMU's Improved Anvil Forecasting task. Forecasters in the 45th Weather Squadron and the Spaceflight Meteorology Group have found that anvil forecasting is a difficult task when predicting violations of the Launch Commit Criteria (LCC) and Flight Rules (FR). The purpose of this task is to determine the technical feasibility of creating an anvil-forecasting tool. Work on this study was separated into three steps: a literature search, forecaster discussions, and a determination of technical feasibility. The literature search revealed no existing anvil-forecasting techniques. However, there appears to be growing interest in anvils in recent years. If this interest continues to grow, more information will be available to aid in developing a reliable anvil-forecasting tool. The forecaster discussions revealed an array of ideas on how better forecasting techniques could be developed. The forecasters have ideas based on sound meteorological principles and personal experience in forecasting and analyzing anvils. Based on the information gathered in the discussions with the forecasters, the conclusion of this report is that it is technically feasible at this time to develop an anvil forecasting technique that will significantly contribute to confidence in anvil forecasts.

  6. Mechanisms of behavioural maintenance: Long-term effects of theory-based interventions to promote safe water consumption.

    PubMed

    Inauen, Jennifer; Mosler, Hans-Joachim

    2016-01-01

    Theory-based interventions can enhance people's safe water consumption, but the sustainability of these interventions and the mechanisms of maintenance remain unclear. We investigated these questions based on an extended theory of planned behaviour. Seven hundred and ten (445 analysed) randomly selected households participated in two cluster-randomised controlled trials in Bangladesh. Study 1 promoted switching to neighbours' arsenic-safe wells, and Study 2 promoted switching to arsenic-safe deep wells. Both studies included two intervention phases. Structured interviews were conducted at baseline (T1), and at 1-month (T2), 2-month (T3) and 9-month (T4) follow-ups. In intervention phase 1 (between T1 and T2), commitment-based behaviour change techniques--reminders, implementation intentions and public commitment--were combined with information and compared to an information-only control group. In phase 2 (between T2 and T3), half of each phase 1 intervention group was randomly assigned to receive either commitment-based techniques once more or coping planning with reminders and information. Initial well-switching rates of up to 60% significantly declined by T4: 38.3% of T2 safe water users stopped consuming arsenic-safe water. The decline depended on the intervention. Perceived behavioural control, intentions, commitment strength and coping planning were associated with maintenance. In line with previous studies, the results indicate that commitment and reminders engender long-term behavioural change.

  7. Obstacle detection by recognizing binary expansion patterns

    NASA Technical Reports Server (NTRS)

    Baram, Yoram; Barniv, Yair

    1993-01-01

    This paper describes a technique for obstacle detection, based on the expansion of the image-plane projection of a textured object, as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that essential information is conveyed by single-bit vector components, representing the signs of the relevant derivatives. We use two recently developed, high capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns.
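
    The single-bit encoding can be sketched as follows; the synthetic frames and the 3-bit packing are illustrative assumptions, and the neural classifier itself is left out.

        import numpy as np

        rng = np.random.default_rng(2)
        frame0 = rng.random((64, 64))
        # second frame: the scene shifted slightly, plus sensor noise
        frame1 = np.roll(frame0, shift=1, axis=1) + 0.01 * rng.standard_normal((64, 64))

        dI_dt = frame1 - frame0                      # first-order temporal derivative
        dI_dy, dI_dx = np.gradient(frame0)           # first-order spatial derivatives

        # One 3-bit pattern per pixel: signs of (dI/dx, dI/dy, dI/dt).
        bits = np.stack([dI_dx > 0, dI_dy > 0, dI_dt > 0], axis=-1).astype(np.uint8)
        patterns = bits[..., 0] * 4 + bits[..., 1] * 2 + bits[..., 2]

        # A classifier (e.g. a neural network, as in the paper) would be trained
        # to map such patterns to 'safe' / 'dangerous' labels.
        hist = np.bincount(patterns.ravel(), minlength=8)
        print("sign-pattern histogram:", hist)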

  8. Evaluation of a problem-solving (PS) techniques-based intervention for informal carers of patients with dementia receiving in-home care.

    PubMed

    Chiu, Mary; Pauley, Tim; Wesson, Virginia; Pushpakumar, Dunstan; Sadavoy, Joel

    2015-06-01

    The value of care provided by informal carers in Canada is estimated at $26 billion annually (Hollander et al., 2009). However, carers' needs are often overlooked, limiting their capacity to provide care. Problem-solving therapy (PST), a structured approach to problem solving (PS) and a core principle of the Reitman Centre CARERS Program, has been shown to alleviate emotional distress and improve carers' competence (Chiu et al., 2013). This study evaluated the effectiveness of a problem-solving techniques-based intervention, adapted from PST methods, in enhancing carers' physical and emotional capacity to care for relatives with dementia living in the community. 56 carers were equally allocated to a problem-solving techniques-based intervention group or a control arm. Carers in the intervention group received three 1-hour visits by a care coordinator (CC) who had been given advanced training in the PS techniques-based intervention. Coping, mastery, competence, burden, and perceived stress of the carers were evaluated at baseline and post-intervention using standardized assessment tools. An intention-to-treat analysis utilizing repeated measures ANOVA was performed on the data. Post-intervention measures completion rates were 82% and 92% for the intervention and control groups, respectively. Carers in the intervention group showed significantly improved task-oriented coping, mastery, and competence and significantly reduced emotion-oriented coping, burden and stress (p < 0.01-0.001). Control carers showed no change. PS techniques, when learned and delivered by CCs as a tool to coach carers in their day-to-day caregiving, improve carers' caregiving competence, coping, burden, and perceived stress. This may reduce dependence on primary, psychiatric, and institutional care. The results provide evidence that establishing effective partnerships between inter-professional clinicians in academic clinical health science centers and community agencies can extend the reach of the expertise of specialized health care institutions.

  9. Unsupervised Feature Selection Based on the Morisita Index for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhail

    2017-04-01

    Hyperspectral sensors are capable of acquiring images with hundreds of narrow and contiguous spectral bands. Compared with traditional multispectral imagery, hyperspectral images allow better performance in discriminating between land-cover classes, but they also entail high redundancy and a heavy computational load in data processing. To alleviate these issues, unsupervised feature selection techniques for redundancy minimization can be implemented. Their goal is to select the smallest subset of features (or bands) such that the information content of the data set is preserved as much as possible. The present research deals with the application to hyperspectral images of a recently introduced technique of unsupervised feature selection: the Morisita-Based filter for Redundancy Minimization (MBRM). MBRM is based on the (multipoint) Morisita index of clustering and on the Morisita estimator of Intrinsic Dimension (ID). The fundamental idea of the technique is to retain only the bands which contribute to increasing the ID of an image; redundant bands are disregarded, since they have no impact on the ID. Besides, MBRM has several advantages over benchmark techniques: in addition to its ability to deal with large data sets, it can capture highly nonlinear dependences and its implementation is straightforward in any programming environment. Experimental results on freely available hyperspectral images show the effectiveness of MBRM in remote sensing data processing. Comparisons with benchmark techniques are carried out, and random forests are used to assess the performance of MBRM in reducing the data dimensionality without loss of relevant information. References: [1] C. Traina Jr., A.J.M. Traina, L. Wu, C. Faloutsos, Fast feature selection using fractal dimension, in: Proceedings of the XV Brazilian Symposium on Databases, SBBD, pp. 158-171, 2000. [2] J. Golay, M. Kanevski, A new estimator of intrinsic dimension based on the multipoint Morisita index, Pattern Recognition 48(12), pp. 4070-4081, 2015. [3] J. Golay, M. Kanevski, Unsupervised feature selection based on the Morisita estimator of intrinsic dimension, arXiv:1608.05581, 2016.
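
    The band-selection idea can be sketched as a greedy pass that keeps a band only if it raises an intrinsic-dimension estimate. Here a crude two-scale correlation-dimension estimate stands in for the Morisita estimator used by MBRM, and the radii, the tolerance, and the synthetic data (bands scaled to [0, 1]) are illustrative assumptions.

        import numpy as np
        from scipy.spatial.distance import pdist

        def correlation_id(X, r1=0.1, r2=0.3):
            """Crude ID estimate from the slope of the correlation sum."""
            d = pdist(X)
            c1 = max((d < r1).mean(), 1e-12)
            c2 = max((d < r2).mean(), 1e-12)
            return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

        def mbrm_like_selection(X, tol=0.25):
            """Greedy forward pass over the bands (columns) of X."""
            selected = [0]
            id_current = correlation_id(X[:, selected])
            for band in range(1, X.shape[1]):
                id_trial = correlation_id(X[:, selected + [band]])
                if id_trial > id_current + tol:      # band adds new information
                    selected.append(band)
                    id_current = id_trial
            return selected

        rng = np.random.default_rng(3)
        base = rng.random((300, 2))
        # Band 2 duplicates band 0 (pure redundancy); band 3 is independent.
        X = np.column_stack([base[:, 0], base[:, 1], base[:, 0], rng.random(300)])
        print("selected bands:", mbrm_like_selection(X))  # typically [0, 1, 3]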

  10. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to extract information on suspected illegal land use in urban areas. Because the cases used for imagery classification are discrete, the proposed method deals with oscillation of spectrum or backscatter within the same land-use category; it not only overcomes a deficiency of maximum likelihood classification (that the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation. Consequently, the proposed method achieves better classification. The researchers then used an object-oriented technique for shadow removal in highly dense city zones. With multi-temporal SPOT 5 images at 2.5×2.5 m resolution, the researchers found that the method can extract suspected illegal land use information in urban areas using a post-classification comparison technique.
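
    The post-classification comparison step reduces to a pixel-wise comparison of two classified rasters from different dates; the class codes and the "built-up" target class below are illustrative assumptions.

        import numpy as np

        UNUSED, GREEN, BUILT = 0, 1, 2                 # hypothetical land-use codes
        t1 = np.array([[GREEN, GREEN], [UNUSED, BUILT]])   # classification, date 1
        t2 = np.array([[GREEN, BUILT], [BUILT, BUILT]])    # classification, date 2

        # Pixels that became built-up between the two dates would then be
        # cross-checked against permit records to flag suspected illegal use.
        changed_to_built = (t2 == BUILT) & (t1 != BUILT)
        print("suspected new construction mask:\n", changed_to_built)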

  11. Wind Tunnel Test Technique and Instrumentation Development at LaRC

    NASA Technical Reports Server (NTRS)

    Putnam, Lawrence E.

    1999-01-01

    LaRC has an aggressive test technique development program underway. This program has been developed using 3rd Generation R&D management techniques and is a closely coordinated program between suppliers and wind tunnel operators; wind tunnel customers' informal input relative to their needs has been an essential ingredient in developing the research portfolio. An attempt has been made to balance this portfolio to meet near-term and long-term test technique needs. Major efforts are underway to develop techniques for determining model wing twist and the location of boundary layer transition in the NTF (National Transonic Facility). The foundation of all new instrumentation developments, procurements, and upgrades will be based on uncertainty analysis.

  12. JANE, A new information retrieval system for the Radiation Shielding Information Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trubey, D.K.

    A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature which were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. Automatic indexing and retrieval techniques based on fuzzy-set theory allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and presently-used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs.
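
    The fuzzy ranking idea can be sketched as below. IDF is used here as a stand-in for the Cover Coefficient-based Term Discrimination Values, and the membership function is an illustrative assumption; only the general tf-times-discrimination shape follows the description above.

        import math
        from collections import Counter

        docs = ["neutron shielding concrete benchmark",
                "gamma ray shielding data library",
                "reactor core neutron transport code"]
        tokenized = [d.split() for d in docs]
        n = len(docs)

        def idf(term):
            """Stand-in discrimination weight: rarer terms resolve better."""
            df = sum(term in doc for doc in tokenized)
            return math.log((n + 1) / (df + 1)) + 1.0

        def membership(term, doc):
            """Fuzzy membership grows with term frequency, scaled by weight."""
            tf = Counter(doc)[term]
            return min((1.0 - 1.0 / (1.0 + tf)) * idf(term), 1.0)

        def rsv(query, doc):
            """Retrieval Status Value: fuzzy (algebraic) union over query terms."""
            score = 0.0
            for term in query.split():
                m = membership(term, doc)
                score = score + m - score * m
            return score

        query = "neutron shielding"
        for d, doc in sorted(zip(docs, tokenized), key=lambda p: -rsv(query, p[1])):
            print(round(rsv(query, doc), 3), d)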

  13. A qualitative study on personal information management (PIM) in clinical and basic sciences faculty members of a medical university in Iran

    PubMed Central

    Sedghi, Shahram; Abdolahi, Nida; Azimi, Ali; Tahamtan, Iman; Abdollahi, Leila

    2015-01-01

    Background: Personal Information Management (PIM) refers to the tools and activities used to save and retrieve personal information for future use. This study examined the PIM activities of faculty members of Iran University of Medical Sciences (IUMS) regarding their preferred PIM tools and four aspects of acquiring, organizing, storing and retrieving personal information. Methods: The qualitative design was based on a phenomenology approach, and we carried out 37 interviews with clinical and basic sciences faculty members of IUMS in 2014. The participants were selected using a random sampling method. All interviews were recorded with a digital voice recorder, then transcribed, codified and finally analyzed using NVivo 8 software. Results: The use of PIM electronic tools (e-tools) was below expectation among the studied sample, and just 37% had reasonable knowledge of PIM e-tools such as external hard drives, flash memories, etc. However, all participants used both paper and electronic devices to store and access information. Internal mass storage (in laptops) and flash memories were the e-tools most used to save information. Most participants used "subject" (41.0%) and "file name" (33.7%) to save, organize and retrieve their stored information. Most users preferred paper-based rather than electronic tools to keep their personal information. Conclusion: Faculty members had little knowledge of PIM techniques and tools. Those who organized personal information could more easily retrieve the stored information for future use. Greater familiarity with PIM tools and training courses on PIM tools and techniques are suggested. PMID:26793648

  14. Comparison of three artificial intelligence techniques for discharge routing

    NASA Astrophysics Data System (ADS)

    Khatibi, Rahman; Ghorbani, Mohammad Ali; Kashani, Mahsa Hasanpour; Kisi, Ozgur

    2011-06-01

    An inter-comparison of three artificial intelligence (AI) techniques is presented using river flow/stage time series that are otherwise handled by traditional discharge routing techniques. These models comprise an Artificial Neural Network (ANN), an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Genetic Programming (GP), applied to discharge routing of the Kizilirmak River, Turkey. Daily mean river discharge data from 1999 to 2003 were used for training and testing the models. The comparison includes both visual and parametric approaches using such statistics as the Coefficient of Correlation (CC), Mean Absolute Error (MAE) and Mean Square Relative Error (MSRE), as well as a basic scoring system. Overall, the results indicate that ANN and ANFIS have mixed fortunes in discharge routing, with different abilities in capturing and reproducing the observed information. However, GP displays an edge over the other two modelling approaches in most respects. Attention is given to the information content of the recorded time series in terms of peak values and timings, where one performance measure may capture some of the information content but be ineffective for others. This makes a case for compiling a knowledge base for the various modelling techniques.
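
    The three comparison statistics are straightforward to compute; the observed and simulated discharges below are synthetic stand-ins.

        import numpy as np

        obs = np.array([12.0, 15.5, 30.2, 55.0, 41.3, 22.8])   # observed discharge
        sim = np.array([11.2, 16.1, 28.9, 58.4, 39.0, 24.0])   # model output

        cc = np.corrcoef(obs, sim)[0, 1]                # Coefficient of Correlation
        mae = np.mean(np.abs(obs - sim))                # Mean Absolute Error
        msre = np.mean(((obs - sim) / obs) ** 2)        # Mean Square Relative Error

        print(f"CC={cc:.3f}  MAE={mae:.3f} m^3/s  MSRE={msre:.5f}")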

  15. Active learning for semi-supervised clustering based on locally linear propagation reconstruction.

    PubMed

    Chang, Chin-Chun; Lin, Po-Yi

    2015-03-01

    The success of semi-supervised clustering relies on the effectiveness of side information. To obtain effective side information, a new active learner that learns pairwise constraints, known as must-link and cannot-link constraints, is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and have proven the feasibility of the proposed approach. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Trace-fiber color discrimination by electrospray ionization mass spectrometry: a tool for the analysis of dyes extracted from submillimeter nylon fibers.

    PubMed

    Tuinman, Albert A; Lewis, Linda A; Lewis, Samuel A

    2003-06-01

    The application of electrospray ionization mass spectrometry (ESI-MS) to trace-fiber color analysis is explored using acidic dyes commonly employed to color nylon-based fibers, as well as extracts from dyed nylon fibers. Qualitative information about constituent dyes and quantitative information about the relative amounts of those dyes present on a single fiber become readily available using this technique. Sample requirements for establishing the color identity of different samples (i.e., comparative trace-fiber analysis) are shown to be submillimeter. Absolute verification of dye mixture identity (beyond the comparison of molecular weights derived from ESI-MS) can be obtained by expanding the technique to include tandem mass spectrometry (ESI-MS/MS). For dyes of unknown origin, the ESI-MS/MS analyses may offer insights into the chemical structure of the compound, information not available from chromatographic techniques alone. This research demonstrates that ESI-MS is viable as a sensitive technique for distinguishing dye constituents extracted from a minute amount of trace-fiber evidence. A protocol is suggested to establish/refute the proposition that two fibers--one of which is available in minute quantity only--are of the same origin.

  17. Hide and Seek: Exploiting and Hardening Leakage-Resilient Code Randomization

    DTIC Science & Technology

    2016-05-30

    Rudd, Robert (MIT Lincoln Laboratory); Hobson, Thomas (MIT Lincoln Laboratory); ... Irvine; Sadeghi, Ahmad-Reza (TU Darmstadt); Okhravi, Hamed (MIT Lincoln Laboratory)

    Abstract (excerpt): Information leakage vulnerabilities can allow adversaries to bypass mitigations based on code randomization. This discovery motivates numerous techniques that diminish direct and indirect information leakage: (i...

  19. Using Information Technologies in Professional Training of Future Security Specialists in the USA, Great Britain, Poland and Israel

    ERIC Educational Resources Information Center

    Kyslenko, Dmytro

    2017-01-01

    The paper discusses the use of information technologies in professional training of future security specialists in the United States, Great Britain, Poland and Israel. The probable use of computer-based techniques being available within the integrated Web-sites have been systematized. It has been suggested that the presented scheme may be of great…

  20. Device Independent Layout and Style Editing Using Multi-Level Style Sheets

    NASA Astrophysics Data System (ADS)

    Dees, Walter

    This paper describes a layout and styling framework that is based on the multi-level style sheets approach. It shows some of the techniques that can be used to add layout and style information to a UI in a device-independent manner, and how to reuse the layout and style information to create user interfaces for different devices.

  1. Using Participatory Methods and Geographic Information Systems (GIS) to Prepare for an HIV Community-Based Trial in Vulindlela, South Africa (Project Accept-HPTN 043)

    ERIC Educational Resources Information Center

    Chirowodza, Admire; van Rooyen, Heidi; Joseph, Philip; Sikotoyi, Sindisiwe; Richter, Linda; Coates, Thomas

    2009-01-01

    Recent attempts to integrate geographic information systems (GIS) and participatory techniques, have given rise to terminologies such as "participatory GIS" and "community-integrated GIS". Although GIS was initially developed for physical geographic application, it can be used for the management and analysis of health and…

  2. Application of the Near Miss Strategy and Edit Distance to Handle Dirty Data

    NASA Astrophysics Data System (ADS)

    Varol, Cihan; Bayrak, Coskun; Wagner, Rick; Goff, Dana

    In today’s information age, processing customer information in a standardized and accurate manner is known to be a difficult task. Data collection methods vary from source to source by format, volume, and media type. Therefore, it is advantageous to deploy customized data hygiene techniques to standardize the data for meaningfulness and usefulness based on the organization.
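
    The edit-distance component of such a data-hygiene step can be sketched as nearest-match standardization against a reference list; the distance budget and the reference values are illustrative assumptions.

        # Classic Levenshtein distance via dynamic programming.
        def edit_distance(a: str, b: str) -> int:
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                # deletion
                                   cur[j - 1] + 1,             # insertion
                                   prev[j - 1] + (ca != cb)))  # substitution
                prev = cur
            return prev[-1]

        reference = ["chicago", "houston", "phoenix", "dallas"]

        def standardize(value: str, max_dist: int = 2):
            """Replace a dirty value with its 'near miss' reference entry."""
            best = min(reference, key=lambda r: edit_distance(value.lower(), r))
            return best if edit_distance(value.lower(), best) <= max_dist else value

        print(standardize("Chigaco"))   # -> chicago (2 substitutions away)
        print(standardize("Berlin"))    # -> Berlin (no near miss in the list)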

  4. Auditory Evidence Grids

    DTIC Science & Technology

    2006-01-01

    information of the robot (Figure 1) acquired via laser-based localization techniques. The results are maps of the global soundscape. The algorithmic... environments than noise maps. Furthermore, provided the acoustic localization algorithm can detect the sources, the soundscape can be mapped with many... gathering information about the auditory soundscape in which it is working. In addition to robustness in the presence of noise, it has also been

  5. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    USDA-ARS?s Scientific Manuscript database

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  6. A Hybrid Computing Testbed for Mobile Threat Detection and Enhanced Research and Education in Information

    DTIC Science & Technology

    2014-11-20

    techniques to defend against stealthy malware, i.e., rootkits. For example, we have been developing a new virtualization-based security service called AirBag... for mobile devices. AirBag is a virtualization-based system that enables dynamic switching of (guest) Android images in one VM, with one image

  7. Definitions for the No Child Left Behind Act of 2001: Scientifically-Based Research

    ERIC Educational Resources Information Center

    Wilde, Judith

    2004-01-01

    The purpose of scientifically-based research within No Child Left Behind (NCLB) is to identify and disseminate conclusive information on "what works" in education, with an emphasis on determining what instructional input (curricula, instructional techniques) works to increase student outcomes such as academic achievement and language proficiency.…

  8. Wavelet-based hierarchical surface approximation from height fields

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt

    2004-01-01

    This paper presents a novel hierarchical approach to triangular mesh generation from height fields. A wavelet-based multiresolution analysis technique is used to estimate local shape information at different levels of resolution. Using predefined templates at the coarsest level, the method constructs an initial triangulation in which underlying object shapes are well...

  9. Career Goal-Based E-Learning Recommendation Using Enhanced Collaborative Filtering and PrefixSpan

    ERIC Educational Resources Information Center

    Ma, Xueying; Ye, Lu

    2018-01-01

    This article describes how e-learning recommender systems nowadays apply different kinds of techniques to recommend personalized learning content to users based on their preferences, goals, interests and background information. However, the cold-start problem that exists in traditional recommendation algorithms is still left over in…

  10. Performance Evaluation of Hyperbolic Position Location Technique in Cellular Wireless Networks

    DTIC Science & Technology

    2002-03-13

    number of applications called location-based services, which can be defined as value-added services that utilize the knowledge of the mobile user's... • Location-based services: "4-1-1", location-specific information such as local weather, mobile yellow pages, etc., and • Mobile e-commerce, wireless

  11. Development of scan analysis techniques employing a small computer. Final report, February 1, 1963--July 31, 1976

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhl, D.E.

    1976-08-05

    During the thirteen year duration of this contract the goal has been to develop and apply computer based analysis of radionuclide scan data so as to make available improved diagnostic information based on a knowledge of localized quantitative estimates of radionuclide concentration. Results are summarized. (CH)

  12. Improved Search Techniques

    NASA Technical Reports Server (NTRS)

    Albornoz, Caleb Ronald

    2012-01-01

    Billions of documents are stored and updated daily on the World Wide Web, yet most of this information is not organized in a way that supports building knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their own skills to look for the information they need. This paper presents different techniques that search engine users can apply in Google Search to improve the relevancy of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. For instance, a company that employs 1000 people can waste $2.5 million looking for information that does not exist or cannot be found. The cost is high because decisions are made based on the information that is readily available. Whenever the information necessary to formulate an argument is unavailable or cannot be found, poor decisions are more likely to be made. The survey also indicates that only 56% of Google users feel confident in their current search skills, and that just 76% of the information available on the Internet is accurate.

  13. Tweets and Facebook Posts, the Novelty Techniques in the Creation of Origin-Destination Models

    NASA Astrophysics Data System (ADS)

    Malema, H. K.; Musakwa, W.

    2016-06-01

    Social media and big data have emerged as useful sources of information for planning purposes, particularly transportation planning and trip-distribution studies. Cities in developing countries such as South Africa often struggle with outdated, unreliable and cumbersome techniques, such as traffic counts and household surveys, to conduct origin and destination studies. The emergence of ubiquitous crowdsourced data, big data, social media and geolocation-based services has shown huge potential to provide useful information for origin and destination studies. Perhaps such information can be utilised to determine the origins and destinations of commuters using the Gautrain, a high-speed railway in Gauteng province, South Africa. To date little is known about the origins and destinations of Gautrain commuters. Accordingly, this study assesses the viability of using geolocation-based services, namely Facebook and Twitter, to map out the movement networks of Gautrain commuters. Explorative Spatial Data Analysis (ESDA), Echo-social and ArcGIS software were used to extract social media data, i.e. tweets and Facebook posts, and to visualize the concentration of Gautrain commuters. The results demonstrate that big data and geolocation-based services have significant potential to predict the movement network patterns of commuters, and this information can thus be used to inform and improve transportation planning. Nevertheless, the use of crowdsourced data and big data raises privacy concerns that still need to be addressed.
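
    A minimal sketch of turning geolocated posts into an origin-destination (OD) matrix, assuming each post has already been snapped to its nearest station; the data, user IDs and station assignments are illustrative.

        import pandas as pd

        posts = pd.DataFrame({
            "user":    ["a", "a", "a", "b", "b", "c", "c"],
            "time":    pd.to_datetime(["2016-06-01 07:10", "2016-06-01 08:02",
                                       "2016-06-01 17:30", "2016-06-01 07:55",
                                       "2016-06-01 08:40", "2016-06-02 06:50",
                                       "2016-06-02 07:35"]),
            "station": ["Hatfield", "Sandton", "Hatfield", "Pretoria",
                        "Rosebank", "Hatfield", "Sandton"],
        })

        # For each user, a consecutive pair of posts near two different
        # stations is counted as one origin-destination trip.
        posts = posts.sort_values(["user", "time"])
        posts["dest"] = posts.groupby("user")["station"].shift(-1)
        trips = posts.dropna(subset=["dest"]).query("station != dest")
        od = trips.groupby(["station", "dest"]).size().unstack(fill_value=0)
        print(od)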

  14. Research on application information system integration platform in medicine manufacturing enterprise.

    PubMed

    Deng, Wu; Zhao, Huimin; Zou, Li; Li, Yuanyuan; Li, Zhengguang

    2012-08-01

    Computer and information technology has become widespread in medicine manufacturing enterprises because of its potential to improve working efficiency and service quality. To address the explosive growth of data and information across application systems in current medicine manufacturing enterprises, we propose a novel application information system integration platform, based on a combination of RFID technology and SOA, to implement information sharing and exchange. The platform invokes the RFID middleware through a service interface layer, and loose coupling in the integration solution is realized by Web services. The key techniques of the RFID event components and an extended role-based security access mechanism are studied in detail. Finally, a case study is implemented and tested to validate the proposed platform.

  15. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge based and case based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues into the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A Knowledge Based Reasoning system was developed using the CLIPS (C Language Integrated Production System) environment and a Case Based Reasoning System was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge based and case based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs and warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem solving capability beyond the existence of limited well-defined rules. The findings indicated that the technique is most effective when used as a design aid and not as a tool to totally automate the composites design process. Other areas of application and implications for future research are discussed.
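
    The case-retrieval step of CBR can be sketched with weighted attribute-value matching over a DCS-like record; the attributes, weights, and cases below are illustrative assumptions, not the system's actual rules.

        # Hypothetical case base: past composite designs stored as
        # attribute-value pairs, each carrying downstream advice.
        case_base = [
            {"layup": "quasi-isotropic", "load": "tension", "cure": "autoclave",
             "advice": "watch for edge delamination at ply drops"},
            {"layup": "unidirectional", "load": "bending", "cure": "oven",
             "advice": "add +-45 plies to improve shear strength"},
        ]

        weights = {"layup": 2.0, "load": 3.0, "cure": 1.0}   # assumed importances

        def similarity(case, query):
            """Weighted fraction of matching attribute values."""
            total = sum(weights.values())
            score = sum(w for attr, w in weights.items()
                        if case[attr] == query.get(attr))
            return score / total

        query = {"layup": "quasi-isotropic", "load": "tension", "cure": "oven"}
        best = max(case_base, key=lambda c: similarity(c, query))
        print(f"similarity={similarity(best, query):.2f}; advice: {best['advice']}")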

  16. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
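
    A minimal sketch of the alignment step using OpenCV: ORB descriptors on synthetic intensity images stand in for the paper's descriptors on calibrated radiant-power images, and the test images emulate a shift between cameras plus an exposure difference.

        import cv2
        import numpy as np

        rng = np.random.default_rng(6)
        # Blocky random pattern gives the detector strong corners to find.
        base = cv2.resize((rng.random((60, 80)) * 255).astype(np.uint8),
                          (640, 480), interpolation=cv2.INTER_NEAREST)
        M_true = np.float32([[1, 0, 12], [0, 1, -7]])       # known inter-camera shift
        img_low = cv2.warpAffine(base, M_true, (640, 480))
        img_high = cv2.convertScaleAbs(base, alpha=0.6, beta=20)  # exposure change

        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img_low, None)
        k2, d2 = orb.detectAndCompute(img_high, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        # Warp the low exposure into the high exposure's frame before HDR fusion.
        aligned = cv2.warpPerspective(img_low, H, (640, 480))
        print("recovered translation:", np.round(H[:2, 2], 1))  # approx. (-12, 7)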

  18. Opponent Classification in Poker

    NASA Astrophysics Data System (ADS)

    Ahmad, Muhammad Aurangzeb; Elidrisi, Mohamed

    Modeling games has a long history in the Artificial Intelligence community. Most of the games that have been considered solved in AI are perfect information games. Imperfect information games like Poker and Bridge represent a domain with a great deal of uncertainty and additional challenges in modeling the behavior of the opponent. Techniques developed for playing imperfect information games also have many real-world applications, such as repeated online auctions, human-computer interaction and opponent modeling for military applications. In this paper we explore different techniques for playing poker; the core of these techniques is opponent modeling, classifying the behavior of the opponent according to classes provided by domain experts. We utilize windows of full observation in the game to classify the opponent. In poker, the behavior of an opponent is classified into four standard poker-playing styles based on a subjective function.
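
    The four standard styles are conventionally defined on two axes, looseness and aggression; a minimal sketch with assumed threshold values (the paper's subjective function is not specified here):

        def classify_style(vpip: float, aggression: float) -> str:
            """vpip: fraction of hands the player voluntarily enters;
            aggression: ratio of bets/raises to calls. Thresholds assumed."""
            tight = vpip < 0.25           # enters fewer than 25% of hands
            passive = aggression < 1.0    # calls more than it bets/raises
            if tight and passive:
                return "tight-passive (rock)"
            if tight:
                return "tight-aggressive (shark)"
            if passive:
                return "loose-passive (calling station)"
            return "loose-aggressive (maniac)"

        print(classify_style(vpip=0.18, aggression=2.4))   # tight-aggressive
        print(classify_style(vpip=0.55, aggression=0.4))   # loose-passive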

  19. Remote sensing for urban planning

    NASA Technical Reports Server (NTRS)

    Davis, Bruce A.; Schmidt, Nicholas; Jensen, John R.; Cowen, Dave J.; Halls, Joanne; Narumalani, Sunil; Burgess, Bryan

    1994-01-01

    Utility companies are challenged to provide services to a highly dynamic customer base. With factory closures and shifts in employment becoming a routine occurrence, the utility industry must develop new techniques to maintain records and plan for expected growth. BellSouth Telecommunications, the largest of the Bell telephone companies, currently serves over 13 million residences and 2 million commercial customers. Tracking the movement of customers and scheduling the delivery of service are major tasks for BellSouth that require intensive manpower and sophisticated information management techniques. Through NASA's Commercial Remote Sensing Program Office, BellSouth is investigating the utility of remote sensing and geographic information system techniques to forecast residential development. This paper highlights the initial results of this project, which indicate a high correlation between the U.S. Bureau of Census block group statistics and statistics derived from remote sensing data.

  20. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours

    NASA Astrophysics Data System (ADS)

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas-based library, we included not only the coordinates of contour points but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by physicians. A novel atlas-based segmentation technique that includes neighborhood image features through the introduction of a narrow shell surrounding the target objects was thus established. Application of the technique to 30 liver cases suggested that it can reliably segment livers from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with manual delineations. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver propagation for CT, 4D-CT and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.

  2. Saturation-Transfer Difference (STD) NMR: A Simple and Fast Method for Ligand Screening and Characterization of Protein Binding

    ERIC Educational Resources Information Center

    Viegas, Aldino; Manso, Joao; Nobrega, Franklin L.; Cabrita, Eurico J.

    2011-01-01

    Saturation transfer difference (STD) NMR has emerged as one of the most popular ligand-based NMR techniques for the study of protein-ligand interactions. The success of this technique is a consequence of its robustness and the fact that it is focused on the signals of the ligand, without any need of processing NMR information about the receptor…

  3. Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010

    DTIC Science & Technology

    2010-12-14

    SLAM technique since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping more... achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data. The OGM... a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique, the

  4. Performance-Based Logistics, Contractor Logistics Support, and Stryker

    DTIC Science & Technology

    2007-06-15

    automotive, armament, missile, communications, special devices, and ground equipment repair. The essential maintenance task for the FMC is to maintain... technologies and welding techniques into their production processes. Finally, GDLS’s use of progressive management techniques and supply chain information... C4ISR, MEP) per the NMC criteria in the -10 manual, the contractor's system only focuses on the platform or automotive status. Thus a vehicle “up” for

  5. Community Detection for Correlation Matrices

    NASA Astrophysics Data System (ADS)

    MacMahon, Mel; Garlaschelli, Diego

    2015-04-01

    A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
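
    The null-model idea can be sketched as follows: eigenvalues of the correlation matrix inside the Marchenko-Pastur bulk, and the global "market" mode, are treated as noise and common trend, and communities are read off the remaining structure. A single spectral bisection on synthetic returns stands in for the full multiresolution algorithm; the planted two-group data are an assumption.

        import numpy as np

        rng = np.random.default_rng(4)
        T, n = 500, 20
        common = rng.standard_normal((T, 1))         # market-wide factor
        f1, f2 = rng.standard_normal((T, 1)), rng.standard_normal((T, 1))
        g1 = common + f1 + 0.3 * rng.standard_normal((T, n // 2))
        g2 = common + f2 + 0.3 * rng.standard_normal((T, n // 2))
        X = np.hstack([g1, g2])
        X = (X - X.mean(0)) / X.std(0)               # standardized returns

        C = (X.T @ X) / T                            # empirical correlation matrix
        lam, V = np.linalg.eigh(C)
        q = T / n
        lam_max = (1 + 1 / np.sqrt(q)) ** 2          # Marchenko-Pastur upper edge

        # Keep informative eigenmodes above the MP edge, excluding the largest
        # (market) mode, and rebuild a filtered "group" correlation matrix.
        informative = lam > lam_max
        informative[np.argmax(lam)] = False
        C_group = (V[:, informative] * lam[informative]) @ V[:, informative].T

        # Spectral bisection: sign of the leading eigenvector of the filtered matrix.
        lam_g, V_g = np.linalg.eigh(C_group)
        labels = (V_g[:, -1] > 0).astype(int)
        print("community labels:", labels)           # splits the two planted groups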

  6. An Information-Based Machine Learning Approach to Elasticity Imaging

    PubMed Central

    Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael F.

    2016-01-01

    An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175

  7. Conceptual recurrence plots: revealing patterns in human discourse.

    PubMed

    Angus, Daniel; Smith, Andrew; Wiles, Janet

    2012-06-01

    Human discourse contains a rich mixture of conceptual information. Visualization of the global and local patterns within this data stream is a complex and challenging problem. Recurrence plots are an information visualization technique that can reveal trends and features in complex time series data. The recurrence plot technique works by measuring the similarity of points in a time series to all other points in the same time series and plotting the results in two dimensions. Previous studies have applied recurrence plotting techniques to textual data; however, these approaches plot recurrence using term-based similarity rather than conceptual similarity of the text. We introduce conceptual recurrence plots, which use a model of language to measure similarity between pairs of text utterances, and the similarity of all utterances is measured and displayed. In this paper, we explore how the descriptive power of the recurrence plotting technique can be used to discover patterns of interaction across a series of conversation transcripts. The results suggest that the conceptual recurrence plotting technique is a useful tool for exploring the structure of human discourse.
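
    The plotting idea can be sketched with a term-based stand-in for the conceptual similarity model: the paper scores conceptual similarity via a model of language, while TF-IDF cosine similarity is used below only to make the utterance-vs-utterance matrix concrete.

        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.feature_extraction.text import TfidfVectorizer

        utterances = [
            "how has your sleep been lately",
            "i have not slept well for weeks",
            "poor sleep can make the pain feel worse",
            "the pain does keep me awake at night",
        ]
        vec = TfidfVectorizer().fit_transform(utterances)  # one row per utterance
        sim = (vec @ vec.T).toarray()        # cosine similarity (rows are L2-normalised)
        np.fill_diagonal(sim, 0)             # hide trivial self-recurrence

        plt.imshow(sim, cmap="Greys", origin="lower")
        plt.xlabel("utterance index"); plt.ylabel("utterance index")
        plt.title("recurrence of concepts across the conversation")
        plt.show()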

  8. Quantitative optical metrology with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.

    2004-08-01

    Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high-accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the sensitivity, accuracy, and precision required for a particular application. In this paper, sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.

  9. Beyond Information Retrieval—Medical Question Answering

    PubMed Central

    Lee, Minsuk; Cimino, James; Zhu, Hai Ran; Sable, Carl; Shanker, Vijay; Ely, John; Yu, Hong

    2006-01-01

    Physicians have many questions when caring for patients and frequently need to seek answers to those questions. Information retrieval systems (e.g., PubMed) typically return a list of documents in response to a user's query. Frequently the number of returned documents is large, making physicians' information seeking "practical only 'after hours' and not in the clinical settings". Question answering techniques are based on automatically analyzing thousands of electronic documents to generate short text answers in response to clinical questions posed by physicians. The authors address physicians' information needs and describe the design, implementation, and evaluation of the medical question answering system MedQA. Although our long-term goal is to enable MedQA to answer all types of medical questions, we currently implement MedQA to integrate information retrieval, extraction, and summarization techniques to automatically generate paragraph-level text for definitional questions (i.e., "What is X?"). MedQA can be accessed at http://www.dbmi.columbia.edu/~yuh9001/research/MedQA.html. PMID:17238385

  10. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Schlaberg, H. Inaki

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
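
    An objective of this "measurement plus dynamic evolution" form has a closed-form minimizer when all terms are quadratic, sketched below on a small synthetic system; the path matrix, priors, and weights are illustrative assumptions, and the paper's robust-estimation and tunneling steps are left out.

        import numpy as np

        rng = np.random.default_rng(5)
        n, m = 16, 10                          # 4x4 field, 10 acoustic paths
        A = rng.random((m, n))                 # path-length weight matrix (assumed)
        x_true = np.linspace(300.0, 340.0, n)  # "temperature" field
        b = A @ x_true + 0.1 * rng.standard_normal(m)   # travel-time data

        L = np.eye(n)                          # identity as a simple space prior
        x_pred = x_true + rng.standard_normal(n)   # forecast from the dynamic model
        lam, mu = 0.1, 0.5                     # regularisation weights (assumed)

        # Minimise ||Ax - b||^2 + lam*||Lx||^2 + mu*||x - x_pred||^2 in closed form.
        lhs = A.T @ A + lam * (L.T @ L) + mu * np.eye(n)
        rhs = A.T @ b + mu * x_pred
        x_hat = np.linalg.solve(lhs, rhs)

        print("mean abs error vs truth:", np.abs(x_hat - x_true).mean())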

  11. Using cooperative learning for a drug information assignment.

    PubMed

    Earl, Grace L

    2009-11-12

    To implement a cooperative learning activity to engage students in analyzing tertiary drug information resources in a literature evaluation course. The class was divided into 4 sections to form expert groups and each group researched a different set of references using the jigsaw technique. Each member of each expert group was reassigned to a jigsaw group so that each new group was composed of 4 students from 4 different expert groups. The jigsaw groups met to discuss search strategies and rate the usefulness of the references. In addition to group-based learning, teaching methods included students' writing an independent research paper to enhance their abilities to search and analyze drug information resources. The assignment and final course grades improved after implementation of the activity. Students agreed that class discussions were a useful learning experience and 75% (77/102) said they would use the drug information references for other courses. The jigsaw technique was successful in engaging students in cooperative learning to improve critical thinking skills regarding drug information.

  12. Detection, characterization and quantification of inorganic engineered nanomaterials: A review of techniques and methodological approaches for the analysis of complex samples.

    PubMed

    Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R

    2016-01-21

    The increasing demand for analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to treat nanoparticles as a new class of analytes, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving to the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as an element-specific detector. Emerging techniques based on the detection of single nanoparticles, using ICP-MS but also coulometry, are on their way to gaining acceptance. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential for managing such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. VIRTIS on Venus Express: retrieval of real surface emissivity on global scales

    NASA Astrophysics Data System (ADS)

    Arnold, Gabriele E.; Kappel, David; Haus, Rainer; Telléz Pedroza, Laura; Piccioni, Giuseppe; Drossart, Pierre

    2015-09-01

    The extraction of surface emissivity data provides the basis for surface composition analyses and enables evaluation of Venus' geology. The Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express mission measured, inter alia, the nightside thermal emission of Venus in the near-infrared atmospheric windows between 1.0 and 1.2 μm. These data can be used to derive information about surface properties on global scales. This requires a sophisticated approach to understand and account for the effects and interferences of the different atmospheric and surface parameters influencing the retrieved values. In the present work, results of a new technique for retrieval of the 1.0 - 1.2 μm surface emissivity are summarized. It includes a Multi-Window Retrieval Technique (MWT), a Multi-Spectrum Retrieval technique (MSR), and a detailed reliability analysis. The MWT is based on a detailed radiative transfer model making simultaneous use of information from different atmospheric windows of an individual spectrum. MSR regularizes the retrieval by incorporating available a priori mean values, standard deviations, and spatial-temporal correlations of the parameters to be retrieved. The capability of this method is shown for a selected surface target area. Implications for geologic investigations are discussed. Based on these results, the work draws conclusions for future Venus surface composition analyses on global scales using spectral remote sensing techniques. In that context, requirements for observational scenarios and instrumental performance are investigated, and recommendations are derived to optimize spectral measurements for Venus surface studies.
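
    The role of the a priori statistics in MSR can be illustrated with a maximum a posteriori estimate for a linearized forward model. The sketch below is only schematic: the real forward model is a nonlinear radiative-transfer code, replaced here by a random linear stand-in K, and all dimensions and covariances are invented.

        # Sketch of retrieval regularized by a priori means and covariances,
        # in the spirit of the MSR scheme; K is a linear stand-in for the
        # radiative-transfer forward model.
        import numpy as np

        def map_retrieval(K, y, x_a, S_a, S_e):
            """MAP estimate for y = K x + noise with prior x ~ N(x_a, S_a)."""
            Si_a, Si_e = np.linalg.inv(S_a), np.linalg.inv(S_e)
            lhs = K.T @ Si_e @ K + Si_a
            rhs = K.T @ Si_e @ y + Si_a @ x_a
            return np.linalg.solve(lhs, rhs)

        rng = np.random.default_rng(0)
        K = rng.random((6, 3))                    # 6 window radiances, 3 parameters
        x_true = np.array([0.45, 0.60, 0.30])     # e.g. surface emissivities
        y = K @ x_true + rng.normal(0, 0.01, 6)
        x_a = np.full(3, 0.5)                     # a priori mean emissivity
        print(map_retrieval(K, y, x_a, S_a=0.05 * np.eye(3), S_e=1e-4 * np.eye(6)))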

  14. Image sharpening for mixed spatial and spectral resolution satellite systems

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Cox, S.

    1983-01-01

    Two methods of image sharpening (reconstruction) are compared. The first, a spatial filtering technique, extrapolates edge information from a high spatial resolution panchromatic band at 10 meters and adds it to the low spatial resolution narrow spectral bands. The second method, a color normalizing technique, is based on the ability to separate image hue and brightness components in spectral data. Using both techniques, multispectral images are sharpened from 30, 50, 70, and 90 meter resolutions. Error rates are calculated for the two methods and all sharpened resolutions. The results indicate that the color normalizing method is superior to the spatial filtering technique.
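
    Both approaches can be sketched compactly. The versions below are simplified modern analogues (high-pass-filter detail injection and a Brovey-style color normalization), not the exact 1983 implementations; the array sizes, smoothing window and resolution factor are illustrative.

        # Sketches of the two sharpening approaches compared above.
        import numpy as np
        from scipy.ndimage import uniform_filter, zoom

        def hpf_sharpen(ms, pan, factor):
            """Add high-frequency detail from the pan band to each upsampled MS band."""
            detail = pan - uniform_filter(pan, size=2 * factor + 1)
            up = np.stack([zoom(band, factor, order=1) for band in ms])
            return up + detail

        def brovey_sharpen(ms, pan, factor):
            """Rescale the upsampled MS bands so their mean brightness matches pan."""
            up = np.stack([zoom(band, factor, order=1) for band in ms])
            return up * (pan / (up.mean(axis=0) + 1e-9))

        ms = np.random.rand(3, 30, 30)            # three spectral bands, coarse grid
        pan = np.random.rand(270, 270)            # panchromatic band, 9x finer
        print(hpf_sharpen(ms, pan, 9).shape, brovey_sharpen(ms, pan, 9).shape)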

  15. Contribution of multiple inert gas elimination technique to pulmonary medicine. 1. Principles and information content of the multiple inert gas elimination technique.

    PubMed Central

    Roca, J.; Wagner, P. D.

    1994-01-01

    This introductory review summarises four different aspects of the multiple inert gas elimination technique (MIGET). The first is the historical background that facilitated, in the mid 1970s, the development of the MIGET as a tool to obtain more information about the entire spectrum of the VA/Q distribution in the lung by measuring the exchange of six gases of different solubility administered in trace concentrations. Its principle is based on the observation that the retention (or excretion) of any gas depends on the solubility (lambda) of that gas and the VA/Q distribution. A second major aspect is the analysis of the information content and limitations of the technique. During the last 15 years a substantial amount of clinical research using the MIGET has been generated by several groups around the world. The technique has been shown to be adequate for understanding the mechanisms of hypoxaemia in different forms of pulmonary disease and the effects of therapeutic interventions, and also for separately determining the quantitative role of each extrapulmonary factor on systemic arterial PO2 when they change between two conditions of MIGET measurement. This information will be extensively reviewed in the forthcoming articles of this series. Next, the different modalities of the MIGET, practical considerations involved in the measurements, and guidelines for quality control are indicated. Finally, a section is devoted to the analysis of available data in healthy subjects under different conditions. The lack of systematic information on the VA/Q distributions of older healthy subjects is emphasised, since such information will be required to fully understand the changes brought about by diseases that affect the older population. PMID:8091330
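
    The compartmental relation at the heart of the technique can be stated compactly. For a lung compartment with ventilation-perfusion ratio VA/Q, the steady-state retention R of an inert gas with blood-gas partition coefficient lambda is (standard MIGET relation; whole-lung retention is the blood-flow-weighted sum over compartments), in LaTeX notation:

        % Retention of an inert gas in a single lung compartment
        R \;=\; \frac{P_a}{P_{\bar{v}}} \;=\; \frac{\lambda}{\lambda + \dot{V}_A / \dot{Q}}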

  16. Timber Resources Inventory and Monitoring Joint Research Project

    NASA Technical Reports Server (NTRS)

    Hill, C. L.

    1985-01-01

    Primary objectives were to develop remote sensing analysis techniques for extracting forest-related information from LANDSAT Multispectral Scanner (MSS) and Thematic Mapper data and to determine the extent to which International Paper Company information needs can be addressed with remote sensing information. The company actively manages 8.4 million acres of forest land. Traditionally, their forest inventories, updated on a three-year cycle, are conducted through field surveys and aerial photography. The results reside in a digital forest data base containing 240 descriptive parameters for individual forest stands. The information in the data base is used to develop seasonal and long-range management strategies. Forest stand condition assessments (species composition, age, and density stratification) and identification of silvicultural activities (site preparation, planting, thinning, and harvest) are addressed.

  17. Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    NASA Astrophysics Data System (ADS)

    Ahn, Chang-Jun

    In MIMO systems, the deployment of a multiple antenna technique can enhance the system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing because of the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5 ∼ 10 wavelengths to keep the correlation coefficient below 0.7 and thus achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To mitigate these problems, in this paper we propose link correlation based transmit sector antenna selection for Alamouti coded OFDM without FBI.
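
    The selection rule itself is simple to sketch: estimate the pairwise correlation of the per-antenna channels and pick the least-correlated pair for the two Alamouti branches. The sketch below assumes the base station can estimate these channels locally (no feedback), and the channel model is a random stand-in.

        # Sketch of link-correlation-based selection of two sector antennas
        # for Alamouti transmission; purely illustrative.
        import numpy as np
        from itertools import combinations

        def select_antenna_pair(H):
            """H: (n_antennas, n_subcarriers) channel estimates; return the least
            correlated antenna pair and its correlation coefficient."""
            best, best_rho = None, np.inf
            for i, j in combinations(range(H.shape[0]), 2):
                rho = np.abs(np.vdot(H[i], H[j])) / (np.linalg.norm(H[i]) * np.linalg.norm(H[j]))
                if rho < best_rho:
                    best, best_rho = (i, j), rho
            return best, best_rho

        rng = np.random.default_rng(1)
        H = rng.normal(size=(6, 64)) + 1j * rng.normal(size=(6, 64))
        print(select_antenna_pair(H))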

  18. Distribution of Health Resource Allocation in the Fars Province Using the Scalogram Analysis Technique in 2011.

    PubMed

    Hatam, Nahid; Kafashi, Shahnaz; Kavosi, Zahra

    2015-07-01

    The importance of health indicators in recent years has created challenges in resource allocation. Balanced and fair distribution of health resources is one of the main principles in achieving equity. The goal of this cross-sectional descriptive study, conducted in 2010, was to classify health structural indicators in the Fars province using the scalogram technique. Health structural indicators were selected and classified in three categories, namely institutional, human resources, and rural health. The data were obtained from the statistical yearbook of Iran and were analyzed using the scalogram technique. The distribution map of the Fars province was drawn using ArcGIS (a geographic information system). The results yielded an informative map of health structural indicators across the province. Our findings revealed that the city of Mohr, with a score of 85, and Zarindasht, with 36, had the highest and the lowest scores, respectively. This information is valuable to provincial health policymakers for planning appropriately based on factual data and minimizing chaos in allocating health resources. Based on such data, and reflecting on local needs, one could develop equity-based resource allocation policies and prevent inequality. It is concluded that, as a top priority, the provincial policymakers should put in place dedicated deprivation programs for the Farashband, Eghlid and Zarindasht regions.
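
    The scoring idea behind a scalogram can be sketched in a few lines: arrange regions against binary indicators and score each region by the indicators it possesses. The table, indicator names, and unweighted scoring below are invented for illustration; the study's actual indicator set and weighting are richer.

        # Sketch of simple Guttman-style scalogram scoring over a binary
        # region-by-indicator table (1 = indicator present).
        import numpy as np

        regions = ["Region A", "Region B", "Region C"]
        indicators = ["hospital", "health_center", "rural_health_house", "pharmacy"]
        table = np.array([[1, 1, 1, 1],
                          [0, 1, 0, 0],
                          [0, 1, 1, 0]])

        scores = table.sum(axis=1)              # one point per indicator present
        for i in np.argsort(-scores):           # rank regions by development level
            print(f"{regions[i]}: score {scores[i]}")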

  19. Innovative Visualization Techniques applied to a Flood Scenario

    NASA Astrophysics Data System (ADS)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids the user to collaboratively explore, present and communicate visually complex and dynamic data. Here we present these concepts in the context of a 4 hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not currently in common use for visualization of data: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, time step, class values for colour legend, etc., and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. The second technique, web-based linked views, includes multiple windows which interactively respond to the user's selections, so that when an object is selected or changed in one window, it is automatically updated in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing analysts to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization that incorporates the ideas of storytelling, web-based linked views, and other visualization techniques, for a 4 hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
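
    The snapshot mechanism can be illustrated generically: serialize the current view state into a hyperlink fragment so that following the link restores the same application state. The sketch below is a generic Python illustration, not the GAV framework's actual mechanism; the state fields are invented.

        # Sketch of the storytelling "snapshot" idea: encode view state into a
        # shareable link and restore it later.
        import base64, json

        def make_snapshot_link(base_url, state):
            payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
            return f"{base_url}#snapshot={payload}"

        def restore_snapshot(link):
            payload = link.split("#snapshot=", 1)[1]
            return json.loads(base64.urlsafe_b64decode(payload))

        state = {"time_step": 12, "layer": "water_depth",
                 "selected_buildings": [101, 245], "legend": {"min": 0.0, "max": 2.5}}
        link = make_snapshot_link("http://example.org/flood", state)
        print(restore_snapshot(link) == state)      # True: state round-trips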

  20. Coupling Front-End Separations, Ion Mobility Spectrometry, and Mass Spectrometry For Enhanced Multidimensional Biological and Environmental Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xueyun; Wojcik, Roza; Zhang, Xing

    Ion mobility spectrometry (IMS) is a widely used analytical technique for rapid molecular separations in the gas phase. IMS alone is useful, but its coupling with mass spectrometry (MS) and front-end separations has been extremely beneficial for increasing measurement sensitivity, peak capacity of complex mixtures, and the scope of molecular information in biological and environmental sample analyses. Multiple studies in disease screening and environmental evaluations have even shown these IMS-based multidimensional separations extract information not possible with each technique individually. This review highlights 3-dimensional separations using IMS-MS in conjunction with a range of front-end techniques, such as gas chromatography (GC), supercritical fluid chromatography (SFC), liquid chromatography (LC), solid phase extractions (SPE), capillary electrophoresis (CE), field asymmetric ion mobility spectrometry (FAIMS), and microfluidic devices. The origination, current state, various applications, and future capabilities for these multidimensional approaches are described to provide insight into the utility and potential of each technique.

  1. Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT

    NASA Astrophysics Data System (ADS)

    Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep

    2014-08-01

    In present times the security of image data is a major issue. We have therefore proposed a novel technique for the security of color image data using a public-key (asymmetric) cryptosystem. In this technique, we secure color image data using the RSA (Rivest-Shamir-Adleman) cryptosystem combined with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, but this approach provides security with the help of keys together with the correct arrangement of the RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also presented to demonstrate the robustness of the cryptosystem.
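
    The data flow of the scheme can be illustrated with a toy example: transform an image channel with the 2D-DWT, quantize the coefficients, and apply textbook RSA to each one. The sketch below uses deliberately tiny, insecure key parameters and omits padding and the paper's parameter-arrangement step; it shows only the transform-then-encrypt pipeline.

        # Toy sketch of the 2D-DWT + RSA pipeline (requires Python 3.8+ and PyWavelets).
        import numpy as np
        import pywt

        p, q, e = 61, 53, 17                  # toy RSA parameters (insecure)
        n = p * q
        d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

        channel = (np.arange(64).reshape(8, 8) % 251).astype(float)
        cA, (cH, cV, cD) = pywt.dwt2(channel, "haar")   # forward 2D-DWT
        flat = np.round(cA).astype(int) % n             # quantized approximation band
        # Detail bands cH, cV, cD would be handled the same way.
        cipher = np.vectorize(lambda m: pow(int(m), e, n))(flat)
        plain = np.vectorize(lambda c: pow(int(c), d, n))(cipher)
        print(np.array_equal(plain, flat))              # True: round trip works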

  2. An Automated Technique to Construct a Knowledge Base of Traditional Chinese Herbal Medicine for Cancers: An Exploratory Study for Breast Cancer.

    PubMed

    Nguyen, Phung Anh; Yang, Hsuan-Chia; Xu, Rong; Li, Yu-Chuan Jack

    2018-01-01

    Traditional Chinese Medicine (TCM) utilization has rapidly increased worldwide. However, few databases provide information on the associations between TCM herbs and diseases. This study aims to identify and evaluate meaningful associations between TCM herbs and breast cancer using association rule mining (ARM) techniques. We applied ARM techniques to 19.9 million TCM prescriptions from the Taiwan National Health Insurance claims database covering 1999 to 2013. A total of 364 herb-breast cancer associations were derived from these prescriptions and then filtered by a minimum support of 20, resulting in 296 associations, which were evaluated against a gold standard curated from Chinese Wikipedia using the terms cancer, tumor, and malignant. All 14 herb-breast cancer associations with a confidence of at least 1% were valid when compared to the gold standard. For other confidence thresholds, the statistical results showed consistently high precision. We thus succeeded in identifying TCM herb-breast cancer associations with these techniques.
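
    The two ARM measures used here are easy to state: the support of a herb-disease rule is its co-occurrence count, and the confidence is the fraction of prescriptions containing the herb that are associated with the disease. The sketch below applies the paper's thresholds (support >= 20, confidence >= 1%) to invented records.

        # Sketch of support/confidence filtering for "herb -> breast cancer" rules.
        from collections import Counter

        prescriptions = [
            {"herbs": {"dang gui", "huang qi"}, "breast_cancer": True},
            {"herbs": {"dang gui"}, "breast_cancer": False},
            {"herbs": {"huang qi"}, "breast_cancer": True},
        ] * 30  # stand-in for millions of claims records

        herb_count, joint_count = Counter(), Counter()
        for rec in prescriptions:
            for herb in rec["herbs"]:
                herb_count[herb] += 1
                joint_count[herb] += rec["breast_cancer"]

        for herb in herb_count:
            support = joint_count[herb]                       # co-occurrence count
            confidence = joint_count[herb] / herb_count[herb]
            if support >= 20 and confidence >= 0.01:
                print(f"{herb}: support={support}, confidence={confidence:.2f}")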

  3. A comparison of the energy use of in situ product recovery techniques for the Acetone Butanol Ethanol fermentation.

    PubMed

    Outram, Victoria; Lalander, Carl-Axel; Lee, Jonathan G M; Davis, E Timothy; Harvey, Adam P

    2016-11-01

    The productivity of the Acetone Butanol Ethanol (ABE) fermentation can be significantly increased by application of various in situ product recovery (ISPR) techniques. There are numerous technically viable processes, but it is not clear which is the most economically viable in practice. There is little available information about the energy requirements and economics of ISPR for the ABE fermentation. This work compares various ISPR techniques based on UniSim process simulations of the ABE fermentation. The simulations provide information on the process energy and separation efficiency, which is fed into an economic assessment. Perstraction was the only technique to reduce the energy demand below that of a batch process, by approximately 5%. Perstraction also had the highest profit increase over a batch process, by 175%. However, perstraction is an immature technology, so would need significant development before being integrated to an industrial process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Random technique to encode complex valued holograms with on axis reconstruction onto phase-only displays.

    PubMed

    Luis Martínez Fuentes, Jose; Moreno, Ignacio

    2018-03-05

    A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
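
    The pixel-wise rule is straightforward to emulate numerically: display the target field's phase wherever a uniform random draw falls below the normalized amplitude, and the diverging element's phase elsewhere. In the sketch below the wavelength, pixel pitch and focal length are arbitrary placeholders.

        # Sketch of the random spatial-multiplexing encoding rule.
        import numpy as np

        def encode(field, wavelength=633e-9, pixel=8e-6, focal=-0.5):
            ny, nx = field.shape
            y, x = np.mgrid[:ny, :nx].astype(float)
            x = (x - nx / 2) * pixel
            y = (y - ny / 2) * pixel
            lens = np.pi * (x**2 + y**2) / (wavelength * focal)  # diverging element
            amp = np.abs(field) / np.abs(field).max()
            use_signal = np.random.rand(ny, nx) < amp            # amplitude sets threshold
            return np.where(use_signal, np.angle(field), lens)

        field = np.random.rand(64, 64) * np.exp(2j * np.pi * np.random.rand(64, 64))
        phase_only = encode(field)                 # one phase value per SLM pixel
        print(phase_only.shape)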

  5. An evaluation of machine processing techniques of ERTS-1 data for user applications. [urban land use and soil association mapping in Indiana

    NASA Technical Reports Server (NTRS)

    Landgrebe, D.

    1974-01-01

    A broad study is described to evaluate a set of machine analysis and processing techniques applied to ERTS-1 data. Based on the analysis results in urban land use analysis and soil association mapping together with previously reported results in general earth surface feature identification and crop species classification, a profile of general applicability of this procedure is beginning to emerge. Put in the hands of a user who knows well the information needed from the data and also is familiar with the region to be analyzed it appears that significantly useful information can be generated by these methods. When supported by preprocessing techniques such as the geometric correction and temporal registration capabilities, final products readily useable by user agencies appear possible. In parallel with application, through further research, there is much potential for further development of these techniques both with regard to providing higher performance and in new situations not yet studied.

  6. Automated quantification of the synchrogram by recurrence plot analysis.

    PubMed

    Nguyen, Chinh Duc; Wilson, Stephen James; Crozier, Stuart

    2012-04-01

    Recently, the concept of phase synchronization of two weakly coupled oscillators has attracted great research interest and has been applied to characterize synchronization phenomena in physiological data. Phase synchronization of cardiorespiratory coupling is often studied by synchrogram analysis, a graphical tool investigating the relationship between the instantaneous phases of two signals. Although several techniques have been proposed to automatically quantify the synchrogram, most of them require preselection of a phase-locking ratio by trial and error. One technique does not require this information; however, it is based on the power spectrum of the phase distribution in the synchrogram, which is vulnerable to noise. This study aims to introduce a new technique to automatically quantify the synchrogram by studying its dynamic structure. Our technique exploits recurrence plot analysis, which is a well-established tool for characterizing recurring patterns and nonstationarities in experiments. We applied our technique to detect synchronization in simulated and measured infants' cardiorespiratory data. Our results suggest that the proposed technique is able to systematically detect synchronization in noisy and chaotic data without preselecting the phase-locking ratio. By embedding the phase information of the synchrogram into phase space, the phase-locking ratio is automatically unveiled as the number of attractors.
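
    The core recurrence-plot computation is compact: threshold the pairwise (circular) distances between phase readings, then derive quantifiers such as the recurrence rate from the binary matrix. The toy phase series and threshold below are illustrative only.

        # Sketch of a recurrence plot over the synchrogram's phase series.
        import numpy as np

        def recurrence_plot(phi, eps):
            """R[i, j] = 1 where the circular distance between phases i, j < eps."""
            d = np.abs(np.angle(np.exp(1j * (phi[:, None] - phi[None, :]))))
            return (d < eps).astype(int)

        t = np.linspace(0, 10, 400)
        phase = np.mod(2 * np.pi * 3 * t, 2 * np.pi)   # phase-locked toy readings
        R = recurrence_plot(phase, eps=0.2)
        print(f"recurrence rate: {R.mean():.3f}")       # one simple RP quantifier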

  7. Estimation of Phase in Fringe Projection Technique Using High-order Instantaneous Moments Based Method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod

    2010-04-01

    For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
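
    The idea behind the instantaneous-moment step can be shown for a single segment and order two: for s(x) = exp(j(a1 x + a2 x^2)), the product s(x + tau) conj(s(x - tau)) is a pure tone whose frequency is proportional to the quadratic coefficient, so an FFT peak recovers it. The sketch below is this simplified special case, not the authors' full piecewise, higher-order method; the signal parameters are arbitrary.

        # Second-order instantaneous-moment sketch for polynomial phase estimation.
        import numpy as np

        N, tau = 1024, 64
        x = np.arange(N)
        a1, a2 = 0.05, 1.5e-5
        s = np.exp(1j * (a1 * x + a2 * x**2))          # noiseless fringe carrier

        him2 = s[2 * tau:] * np.conj(s[:-2 * tau])     # s(x+tau) * conj(s(x-tau))
        spec = np.abs(np.fft.fft(him2, 8 * N))         # zero-padded spectrum
        f = np.fft.fftfreq(8 * N)[np.argmax(spec)]     # tone at 4*a2*tau/(2*pi)
        print(2 * np.pi * f / (4 * tau), a2)           # estimate vs. true value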

  8. Operator based integration of information in multimodal radiological search mission with applications to anomaly detection

    NASA Astrophysics Data System (ADS)

    Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.

    2014-05-01

    Successful performance of a radiological search mission depends on the effective utilization of a mixture of signals. Modalities include, for example, EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques in the processing and exploitation of complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on the analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
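
    A minimal version of the fusion step can be sketched as follows: build one kNN similarity graph per modality, combine them with fusion weights, and take the leading nontrivial eigenvectors of the fused graph Laplacian as the integrated features. The weights, kernel width and data below are invented; the Schroedinger variant would add a diagonal potential term to L before the eigendecomposition.

        # Sketch of feature fusion via a joint data-dependent graph Laplacian.
        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.linalg import eigh

        def knn_affinity(X, k=10, sigma=1.0):
            D = cdist(X, X)
            W = np.exp(-D**2 / (2 * sigma**2))
            far = np.argsort(D, axis=1)[:, k + 1:]     # drop all but k neighbours
            np.put_along_axis(W, far, 0.0, axis=1)
            return np.maximum(W, W.T)                  # symmetrize

        def fused_embedding(X_eo, X_gamma, dim=5, w=(0.5, 0.5)):
            W = w[0] * knn_affinity(X_eo) + w[1] * knn_affinity(X_gamma)
            L = np.diag(W.sum(axis=1)) - W             # fused graph Laplacian
            _, vecs = eigh(L)
            return vecs[:, 1:dim + 1]                  # skip the trivial eigenvector

        rng = np.random.default_rng(2)
        emb = fused_embedding(rng.normal(size=(100, 8)), rng.normal(size=(100, 3)))
        print(emb.shape)                               # (100, 5) integrated features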

  9. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    PubMed

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can suppress faulty effects such that the actual system outputs remain at the pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is then presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Evaluation of the diagnostic power of thermography in breast cancer using Bayesian network classifiers.

    PubMed

    Nicandro, Cruz-Ramírez; Efrén, Mezura-Montes; María Yaneli, Ameca-Alducin; Enrique, Martín-Del-Campo-Mena; Héctor Gabriel, Acosta-Mesa; Nancy, Pérez-Castro; Alejandro, Guerra-Hernández; Guillermo de Jesús, Hoyos-Rivera; Rocío Erandi, Barrientos-Martínez

    2013-01-01

    Breast cancer is one of the leading causes of death among women worldwide. There are a number of techniques used for diagnosing this disease: mammography, ultrasound, and biopsy, among others. Each of these has well-known advantages and disadvantages. A relatively new method, based on the temperature a tumor may produce, has recently been explored: thermography. In this paper, we evaluate the diagnostic power of thermography in breast cancer using Bayesian network classifiers. We show how the information provided by the thermal image can be used to characterize patients suspected of having cancer. Our main contribution is the proposal of a score, based on the aforementioned information, that could help distinguish sick patients from healthy ones. Our main results suggest the potential of this technique for such a goal, but also show the main limitations that have to be overcome before it can be considered an effective complementary diagnostic tool.
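
    The flavor of such a score can be conveyed with a tiny naive-Bayes stand-in: combine class-conditional likelihoods of thermal features into a posterior probability of disease. The features, their statistics and the priors below are invented for illustration and are not the Bayesian networks or data of the paper.

        # Tiny naive-Bayes-style scoring sketch over invented thermal features.
        import numpy as np
        from scipy.stats import norm

        params = {   # class-conditional Gaussians: (mean, std) per feature
            "sick":    {"delta_T": (2.0, 0.8), "asymmetry": (0.6, 0.2)},
            "healthy": {"delta_T": (0.5, 0.4), "asymmetry": (0.2, 0.1)},
        }
        prior = {"sick": 0.5, "healthy": 0.5}

        def score(features):
            """Posterior probability of 'sick' under a naive-Bayes assumption."""
            like = {c: prior[c] * np.prod([norm.pdf(features[f], *params[c][f])
                                           for f in features]) for c in params}
            return like["sick"] / (like["sick"] + like["healthy"])

        print(score({"delta_T": 1.8, "asymmetry": 0.5}))   # near 1 -> suspicious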

  11. A deep learning-based reconstruction of cosmic ray-induced air showers

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Glombitza, J.; Walz, D.

    2018-01-01

    We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
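
    The encoding-plus-network idea can be sketched with a small convolutional model over a detector grid with one channel per feature (e.g., first-particle arrival time and a time-trace summary). The grid size, channel choice and layer sizes below are illustrative assumptions, not the paper's architecture.

        # Minimal convolutional sketch for grid-encoded air-shower events (PyTorch).
        import torch
        import torch.nn as nn

        class ShowerNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 3)            # arrival-direction vector

            def forward(self, x):
                d = self.head(self.features(x).flatten(1))
                return d / d.norm(dim=1, keepdim=True)  # normalize to a direction

        batch = torch.randn(8, 2, 9, 9)    # 8 simulated events on a 9x9 detector grid
        print(ShowerNet()(batch).shape)    # -> torch.Size([8, 3])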

  12. Analysis on laser plasma emission for characterization of colloids by video-based computer program

    NASA Astrophysics Data System (ADS)

    Putri, Kirana Yuniati; Lumbantoruan, Hendra Damos; Isnaeni

    2016-02-01

    Laser-induced breakdown detection (LIBD) is a sensitive technique for the characterization of colloids of small size and low concentration. There are two types of detection, optical and acoustic. Optical LIBD employs a CCD camera to capture the plasma emission and uses this information to quantify the colloids. This technique requires sophisticated technology which is often expensive. In order to build a simple, home-made LIBD system, a dedicated computer program based on MATLAB™ for analyzing laser plasma emission was developed. The analysis was conducted by counting the number of plasma emissions (breakdowns) during a certain period of time. The breakdown probability provided information on colloid size and concentration. A validation experiment showed that the computer program performed well in analyzing the plasma emissions. A graphical user interface (GUI) was also developed to make the program more user-friendly.
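
    The counting step lends itself to a very small sketch: flag any frame containing a sufficiently bright blob as a breakdown event and report the event fraction. The original program is MATLAB-based; this Python stand-in with a fixed intensity threshold is illustrative only.

        # Sketch of breakdown counting over camera frames.
        import numpy as np

        def breakdown_probability(frames, threshold=200, min_pixels=5):
            """frames: list of 2D uint8 arrays from the camera."""
            events = sum((f > threshold).sum() >= min_pixels for f in frames)
            return events / len(frames)

        rng = np.random.default_rng(3)
        frames = [rng.integers(0, 50, (64, 64), dtype=np.uint8) for _ in range(90)]
        for f in frames[::10]:
            f[30:33, 30:33] = 255                     # inject 9 synthetic flashes
        print(breakdown_probability(frames))          # -> 0.1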

  13. Molecular basis of structural make-up of feeds in relation to nutrient absorption in ruminants, revealed with advanced molecular spectroscopy: A review on techniques and models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Md. Mostafizar; Yu, Peiqiang

    Progress in ruminant feed research is no longer feasible based solely on wet chemical analysis, which merely provides information on the chemical composition of feeds regardless of their digestive features and nutritive value in ruminants. Studying the internal structural make-up of functional groups/feed nutrients is often vital for understanding the digestive behaviors and nutritive values of feeds in ruminants, because the intrinsic structure of feed nutrients is closely related to their overall absorption. In this article, detailed information on recent developments in molecular spectroscopic techniques for revealing microstructural information of feed nutrients, and on the use of nutrition models in ruminant feed research, is reviewed. The emphasis of this review was on (1) the technological progress in the use of molecular spectroscopic techniques in ruminant feed research; (2) revealing spectral analysis of functional groups of biomolecules/feed nutrients; (3) the use of advanced nutrition models for better prediction of nutrient availability in ruminant systems; and (4) the application of these molecular techniques and combinations of nutrient models in cereals, co-products and pulse crop research. The information described in this article will promote better insight into the progress of research on the molecular structural make-up of feed nutrients in ruminants.

  14. [Dental education for college students based on WeChat public platform].

    PubMed

    Chen, Chuan-Jun; Sun, Tan

    2016-06-01

    The authors proposed a model for dental education based on the WeChat public platform. In this model, teachers send various kinds of digital teaching information, such as PPT, Word documents, and video, to the WeChat public platform; students share the information for preview before class and identify the key-point knowledge in those materials for in-depth learning in class. Teachers also send reference materials for expansive learning after class. A questionnaire distributed through the WeChat public platform is used to evaluate the teaching effect, and improvements may be made based on the feedback. Discussion and interaction between students and teachers on a specific topic can be initiated over WeChat to reach a proper solution. With the development of mobile terminal technology, mobile classes may become a reality in the near future.

  15. Information security system based on virtual-optics imaging methodology and public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong

    In this paper, we present a virtual-optics based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key, and vice versa. The whole information security model runs under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach has additional features such as confidentiality, authentication, and integrity for the purpose of data encryption in a network environment.
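
    The hybrid architecture generalizes to any strong symmetric cipher in place of VOIM. The sketch below shows the generic pattern with the Python "cryptography" package: a symmetric session key protects the data (Fernet standing in for the VOIM cipher) and RSA-OAEP protects the session key. Key sizes and payload are illustrative.

        # Generic hybrid-encryption sketch mirroring the VOIM + RSA architecture.
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        # Sender: encipher data with the session key, wrap the key with RSA.
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(b"virtual-optics payload ...")
        wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)

        # Receiver: unwrap the session key, then decipher the data.
        recovered = Fernet(receiver_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
        print(recovered)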

  16. An Abstraction-Based Data Model for Information Retrieval

    NASA Astrophysics Data System (ADS)

    McAllister, Richard A.; Angryk, Rafal A.

    Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entree to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
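
    The indexing idea can be sketched directly with NLTK's WordNet interface: each word contributes every synset on its hypernym paths, so a query for an abstract concept retrieves documents that mention only its specializations. The toy corpus is invented; the sketch assumes the NLTK WordNet data is installed.

        # Sketch of an inverted index built over WordNet abstraction paths.
        from collections import defaultdict
        from nltk.corpus import wordnet as wn

        def abstraction_terms(word):
            terms = set()
            for synset in wn.synsets(word, pos=wn.NOUN):
                for path in synset.hypernym_paths():    # root -> ... -> synset
                    terms.update(s.name() for s in path)
            return terms

        docs = {1: ["dog", "leash"], 2: ["cat", "sofa"], 3: ["spreadsheet"]}
        index = defaultdict(set)                         # synset -> documents
        for doc_id, words in docs.items():
            for w in words:
                for term in abstraction_terms(w):
                    index[term].add(doc_id)

        print(index["animal.n.01"])                      # -> {1, 2}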

  17. Inverse analysis of aerodynamic loads from strain information using structural models and neural networks

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Sugimoto, Yohei

    2017-04-01

    Aerodynamic loads on aircraft wings are one of the key parameters to be monitored for reliable and effective aircraft operations and management. Flight data on the aerodynamic loads would be used onboard to control the aircraft, and accumulated data would be used for condition-based maintenance and as feedback for fatigue and critical-load modeling. Effective sensing techniques such as fiber optic distributed sensing have been developed and have demonstrated promising capability for monitoring structural responses, i.e., strains on the surface of the aircraft wings. By using the developed techniques, load identification methods for structural health monitoring are expected to be established. The typical inverse analysis for load identification using strains calculates the loads in a discrete form of concentrated forces; however, the distributed form of the loads is essential for accurate and reliable estimation of the critical stress at structural parts. In this study, we demonstrate an inverse analysis to identify distributed loads from measured strain information. The introduced inverse analysis technique calculates aerodynamic loads not in a discrete but in a distributed manner based on a finite element model. In order to verify the technique through numerical simulations, we apply static aerodynamic loads to a flat panel model and conduct inverse identification of the load distributions. We take two approaches to build the inverse system between loads and strains: the first uses structural models and the second uses neural networks. We compare the performance of the two approaches and discuss the effect of the amount of strain sensing information.
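
    The structural-model variant reduces, after finite element discretization, to a regularized least-squares problem linking the distributed load vector to the measured strains; the neural-network variant instead learns the same inverse map from simulated load-strain pairs. The sketch below shows the least-squares version with an invented sensitivity matrix and load shape.

        # Sketch of distributed load identification from strains: solve
        #   min ||S q - strains||^2 + alpha ||q||^2  for the load vector q.
        import numpy as np

        def identify_loads(S, strains, alpha=1e-3):
            n = S.shape[1]
            return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ strains)

        rng = np.random.default_rng(4)
        S = rng.random((200, 30))                    # strain sensitivity (from FE model)
        q_true = np.sin(np.linspace(0, np.pi, 30))   # smooth pressure distribution
        strains = S @ q_true + rng.normal(0, 1e-3, 200)
        print(np.abs(identify_loads(S, strains) - q_true).max())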

  18. Determining flexor-tendon repair techniques via soft computing

    NASA Technical Reports Server (NTRS)

    Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data under variable circumstances was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to make ad hoc decisions when the outcomes for individual objectives contradict a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous, straightforward computational process in which changing preferences and the importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  19. Femtosecond imaging of nonlinear acoustics in gold.

    PubMed

    Pezeril, Thomas; Klieber, Christoph; Shalagatskyi, Viktor; Vaudel, Gwenaelle; Temnov, Vasily; Schmidt, Oliver G; Makarov, Denys

    2014-02-24

    We have developed a high-sensitivity, low-noise femtosecond imaging technique based on pump-probe time-resolved measurements with a standard CCD camera. The approach is based on lock-in acquisition of images generated by a femtosecond laser probe synchronized to the modulation of a femtosecond laser pump at the same rate. This technique allows time-resolved imaging of laser-excited phenomena with femtosecond time resolution. We illustrate the technique by time-resolved imaging of the nonlinear reshaping of a laser-excited picosecond acoustic pulse after propagation through a thin gold layer. Image analysis provides a direct 2D visualization of the nonlinear acoustic propagation of the picosecond acoustic pulse. Many ultrafast pump-probe investigations can profit from this technique because of the wealth of information it provides over a typical single-diode and lock-in-amplifier setup; for example, it can be used to image ultrasonic echoes in biological samples.
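
    The lock-in acquisition itself reduces to accumulating frames with signs that follow the pump modulation, so the pump-induced change survives while uncorrelated noise averages out. The sketch below demodulates a synthetic frame stack; the frame count, modulation pattern and signal amplitude are invented.

        # Sketch of camera-based lock-in demodulation of a pump-probe frame stack.
        import numpy as np

        def lockin_image(frames, reference):
            """frames: (N, H, W) stack; reference: (N,) +1/-1 pump modulation signs."""
            ref = np.asarray(reference, dtype=float)
            return np.tensordot(ref, frames, axes=1) / len(ref)

        rng = np.random.default_rng(5)
        N, H, W = 2000, 32, 32
        reference = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)   # pump on/off
        signal = np.zeros((H, W)); signal[10:20, 10:20] = 0.2    # pump-induced change
        frames = rng.normal(1.0, 1.0, (N, H, W)) + reference[:, None, None] * signal / 2
        img = lockin_image(frames, reference)
        print(img[15, 15], img[0, 0])     # ~0.1 in the pumped region, ~0 elsewhere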

  20. Structural Damage Detection Using Virtual Passive Controllers

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Juang, Jer-Nan

    2001-01-01

    This paper presents novel approaches for structural damage detection which use virtual passive controllers attached to structures, where passive controllers are energy-dissipative devices and thus guarantee closed-loop stability. The use of the identified parameters of various closed-loop systems addresses the problem that reliable identified parameters, such as the natural frequencies of the open-loop system, may not provide enough information for damage detection. Only a small number of sensors are required for the proposed approaches. The identified natural frequencies, which are generally much less sensitive to noise and more reliable than other identified parameters, are used for damage detection. Two damage detection techniques are presented. One technique is based on structures with direct output feedback controllers, while the other uses second-order dynamic feedback controllers. A least-squares technique, which is based on the sensitivity of the natural frequencies to the damage variables, is used for accurately identifying the damage variables.
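
    The least-squares step named above can be sketched directly: stack the natural-frequency shifts observed across the closed-loop configurations and solve the linearized sensitivity relation for the damage variables. The sensitivity matrix and damage pattern below are random stand-ins, not a real structural model.

        # Sketch of damage identification from natural-frequency sensitivities.
        import numpy as np

        rng = np.random.default_rng(6)
        n_freqs, n_damage = 12, 4               # frequencies from several controllers
        J = rng.random((n_freqs, n_damage))     # d(frequency)/d(damage) sensitivities
        d_true = np.array([0.0, 0.15, 0.0, 0.30])           # stiffness-loss fractions
        delta_f = J @ d_true + rng.normal(0, 1e-3, n_freqs) # measured frequency shifts

        d_est, *_ = np.linalg.lstsq(J, delta_f, rcond=None)
        print(np.round(d_est, 3))               # recovers d_true approximately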
