Science.gov

Sample records for automated relation extraction

  1. Large Scale Application of Neural Network Based Semantic Role Labeling for Automated Relation Extraction from Biomedical Texts

    PubMed Central

    Barnickel, Thorsten; Weston, Jason; Collobert, Ronan; Mewes, Hans-Werner; Stümpflen, Volker

    2009-01-01

    To reduce the increasing amount of time spent on literature search in the life sciences, several methods for automated knowledge extraction have been developed. Co-occurrence based approaches can deal with large text corpora like MEDLINE in an acceptable time but are not able to extract any specific type of semantic relation. Semantic relation extraction methods based on syntax trees, on the other hand, are computationally expensive and the interpretation of the generated trees is difficult. Several natural language processing (NLP) approaches for the biomedical domain exist focusing specifically on the detection of a limited set of relation types. For systems biology, generic approaches for the detection of a multitude of relation types which in addition are able to process large text corpora are needed but the number of systems meeting both requirements is very limited. We introduce the use of SENNA (“Semantic Extraction using a Neural Network Architecture”), a fast and accurate neural network based Semantic Role Labeling (SRL) program, for the large scale extraction of semantic relations from the biomedical literature. A comparison of processing times of SENNA and other SRL systems or syntactical parsers used in the biomedical domain revealed that SENNA is the fastest Proposition Bank (PropBank) conforming SRL program currently available. 89 million biomedical sentences were tagged with SENNA on a 100 node cluster within three days. The accuracy of the presented relation extraction approach was evaluated on two test sets of annotated sentences resulting in precision/recall values of 0.71/0.43. We show that the accuracy as well as processing speed of the proposed semantic relation extraction approach is sufficient for its large scale application on biomedical text. The proposed approach is highly generalizable regarding the supported relation types and appears to be especially suited for general-purpose, broad-scale text mining systems. The presented approach
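    The precision/recall values quoted above (0.71/0.43) follow the standard definitions over sets of extracted relations. A minimal sketch of how such scores are computed against an annotated test set, using hypothetical (subject, predicate, object) relation tuples:

```python
def precision_recall(predicted, gold):
    """Compute precision and recall of extracted relations against a gold set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)          # correctly extracted relations
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical relation tuples, not taken from the paper's test sets
gold = {("p53", "activates", "p21"), ("BRCA1", "binds", "RAD51"),
        ("TNF", "induces", "apoptosis"), ("IL6", "activates", "STAT3")}
pred = {("p53", "activates", "p21"), ("BRCA1", "binds", "RAD51"),
        ("EGF", "inhibits", "p53")}

p, r = precision_recall(pred, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.67 recall=0.50
```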

  2. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts, and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
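    One common building block for the automated detection of re-circulation zones and vortices mentioned above is a threshold on local vorticity computed from the velocity field. A minimal sketch (not the paper's method) on a synthetic 2-D solid-body rotation field, whose analytic vorticity is 2 everywhere:

```python
import numpy as np

# Synthetic 2-D velocity field: solid-body rotation about the origin,
# (u, v) = (-y, x), with vorticity dv/dx - du/dy = 2 everywhere.
n = 64
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
u, v = -y, x

dy = dx = 2.0 / (n - 1)
dudy = np.gradient(u, dy, axis=0)
dvdx = np.gradient(v, dx, axis=1)
vorticity = dvdx - dudy

# Flag cells whose |vorticity| exceeds a threshold as vortex candidates
candidates = np.abs(vorticity) > 1.0
print(vorticity.mean())   # ≈ 2.0 for this analytic field
```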

  3. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts, and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  4. Automated DNA extraction from pollen in honey.

    PubMed

    Guertler, Patrick; Eicheldinger, Adelina; Muschler, Paul; Goerlich, Ottmar; Busch, Ulrich

    2014-04-15

    In recent years, honey has become a subject of DNA analysis due to potential risks evoked by microorganisms, allergens or genetically modified organisms. However, so far, only a few DNA extraction procedures are available, most of them time-consuming and laborious. Therefore, we developed an automated DNA extraction method from pollen in honey based on a CTAB buffer-based DNA extraction using the Maxwell 16 instrument and the Maxwell 16 FFS Nucleic Acid Extraction System, Custom-Kit. We altered several components and extraction parameters and compared the optimised method with a manual CTAB buffer-based DNA isolation method. The automated DNA extraction was faster and resulted in higher DNA yield and sufficient DNA purity. Real-time PCR results obtained after automated DNA extraction are comparable to results after manual DNA extraction. No PCR inhibition was observed. The applicability of this method was further successfully confirmed by analysis of different routine honey samples. PMID:24295710

  5. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide simultaneously information about the accurate wine content in 20 relevant aroma compounds and about their relative transfer rates to the headspace and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced in a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of Multiple Extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but that seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even be the cause explaining why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
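    In the classical theory of Multiple Headspace Extraction referenced above, peak areas decay geometrically over cycles, A_i = A_1·β^(i-1), so the total amount is proportional to the geometric-series sum A_1/(1-β). A sketch with synthetic peak areas (not the paper's data) showing how β and the total are recovered from four cycles:

```python
import numpy as np

# Peak areas of one compound over four consecutive extraction/analysis
# cycles; synthetic numbers chosen to follow A_i = A_1 * beta**(i-1).
areas = np.array([1000.0, 600.0, 360.0, 216.0])

# ln(A_i) is linear in the cycle index, so a degree-1 fit recovers beta
idx = np.arange(len(areas))
slope, intercept = np.polyfit(idx, np.log(areas), 1)
beta = np.exp(slope)                      # proportion NOT transferred per cycle
total = np.exp(intercept) / (1.0 - beta)  # sum of the geometric series

print(beta, total)   # β = 0.6, total = 2500 for these synthetic areas
```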

  6. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.

  7. Automated building extraction using dense elevation matrices

    NASA Astrophysics Data System (ADS)

    Bendett, A. A.; Rauhala, Urho A.; Pearson, James J.

    1997-02-01

    The identification and measurement of buildings in imagery is important to a number of applications including cartography, modeling and simulation, and weapon targeting. Extracting large numbers of buildings manually can be time-consuming and expensive, so the automation of the process is highly desirable. This paper describes and demonstrates such an automated process for extracting rectilinear buildings from stereo imagery. The first step is the generation of a dense elevation matrix registered to the imagery. In the examples shown, this was accomplished using global minimum residual matching (GMRM). GMRM automatically removes y-parallax from the stereo imagery and produces a dense matrix of x-parallax values which are proportional to the local elevation, and, of course, registered to the imagery. The second step is to form a joint probability distribution of the image gray levels and the corresponding height values from the elevation matrix. Based on the peaks of that distribution, the area of interest is segmented into feature and non-feature areas. The feature areas are further refined using length, width and height constraints to yield promising building hypotheses with their corresponding vertices. The gray shade image is used in the third step to verify the hypotheses and to determine precise edge locations corresponding to the approximate vertices and satisfying appropriate orthogonality constraints. Examples of successful application of this process to imagery are presented, and extensions involving the use of dense elevation matrices from other sources are possible.
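    The second step above, a joint distribution of gray levels and heights whose peaks separate feature from non-feature areas, can be sketched with a 2-D histogram. Everything here (data, thresholds) is synthetic and illustrative, not the paper's actual segmentation:

```python
import numpy as np

# Synthetic co-registered data: image gray levels and elevation values,
# where bright pixels sit on a raised "roof" about 10 units high.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(100, 100))
height = np.where(gray > 128, 10.0, 0.0) + rng.normal(0, 0.5, (100, 100))

# Joint (gray level, height) histogram; in practice the thresholds below
# would be placed between the peaks of this distribution.
hist, gray_edges, h_edges = np.histogram2d(
    gray.ravel(), height.ravel(), bins=[32, 32])

# A pixel is a building candidate when both its gray level and its height
# fall on the "feature" side of the joint distribution.
feature = (gray > 128) & (height > 5.0)
print(feature.mean())   # roughly half the pixels in this synthetic scene
```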

  8. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (the Fluid eXtraction tool kit) Programmer's guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.
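    Vortex-core detection from the velocity gradient tensor, the subject of the third paper listed, is commonly done with criteria such as λ2 (Jeong and Hussain); a point is inside a vortex core when the second eigenvalue of S² + Ω² is negative. This is a generic sketch of that criterion, not necessarily the formulation used in the paper:

```python
import numpy as np

def lambda2(J):
    """λ2 criterion: for velocity gradient tensor J, a point lies in a
    vortex core when the second eigenvalue of S^2 + Omega^2 is negative,
    where S and Omega are the symmetric and antisymmetric parts of J."""
    S = 0.5 * (J + J.T)
    Omega = 0.5 * (J - J.T)
    eigvals = np.sort(np.linalg.eigvalsh(S @ S + Omega @ Omega))
    return eigvals[1]

# Velocity gradient of solid-body rotation about the z-axis: pure rotation
J_rot = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])
print(lambda2(J_rot) < 0)   # True: inside a vortex core
```

    For a pure-strain field (a diagonal J with no rotation), λ2 is non-negative and no core is flagged, which is the behavior that separates vortices from mere shear.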

  9. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex cores; Regions of Recirculation; Boundary Layers; Wakes.

  11. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  12. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  13. Automated extraction of free-text from pathology reports.

    PubMed

    Currie, Anne-Marie; Fricke, Travis; Gawne, Agnes; Johnston, Ric; Liu, John; Stein, Barbara

    2006-01-01

    Manually populating a cancer registry from free-text pathology reports is labor intensive and costly. This poster describes a method of automated text extraction to improve the efficiency of this process and reduce cost. FineTooth, a software company, provides an automated service to the Fred Hutchinson Cancer Research Center (FHCRC) to help populate their breast and prostate cancer clinical research database by electronically abstracting over 80 data fields from pathology text reports. PMID:17238518

  14. Automated knowledge extraction from MEDLINE citations.

    PubMed

    Mendonça, E A; Cimino, J J

    2000-01-01

    As part of preliminary studies for the development of a digital library, we have studied the possibility of using the co-occurrence of MeSH terms in MEDLINE citations associated with the search strategies optimal for evidence-based medicine to automate construction of a knowledge base. We use the UMLS semantic types in order to analyze search results to determine which semantic types are most relevant for different types of questions (etiology, diagnosis, therapy, and prognosis). The automated process generated a large amount of information. Seven to eight percent of the semantic pairs generated in each clinical task group co-occur significantly more often than can be accounted for by chance. A pilot study showed good specificity and sensitivity for the intended purposes of this project in all groups. PMID:11079949
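    The finding that some semantic pairs "co-occur significantly more often than can be accounted for by chance" is the kind of question a chi-square test on a 2x2 contingency table answers. A self-contained sketch with hypothetical counts (the paper does not state which test it used):

```python
# Does a pair of UMLS semantic types co-occur across citations more often
# than chance predicts? Pearson chi-square on a 2x2 table:
#   a = both types present, b = only the first, c = only the second, d = neither
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts over 1000 citations
chi2 = chi_square_2x2(a=30, b=70, c=60, d=840)
# 3.84 is the 5% critical value for chi-square with one degree of freedom
print(chi2, chi2 > 3.84)
```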

  15. Text Mining approaches for automated literature knowledge extraction and representation.

    PubMed

    Nuzzo, Angelo; Mulas, Francesca; Gabetta, Matteo; Arbustini, Eloisa; Zupan, Blaz; Larizza, Cristiana; Bellazzi, Riccardo

    2010-01-01

    Due to the overwhelming volume of published scientific papers, information tools for automated literature analysis are essential to support current biomedical research. We have developed a knowledge extraction tool to help researchers discover useful information that can support their reasoning process. The tool is composed of a search engine based on Text Mining and Natural Language Processing techniques, and an analysis module which processes the search results in order to build annotation similarity networks. We tested our approach on the available knowledge about the genetic mechanism of cardiac diseases, where the target is to find both known and possible hypothetical relations between specific candidate genes and the trait of interest. We show that the system i) is able to effectively retrieve medical concepts and genes and ii) plays a relevant role in assisting researchers in the formulation and evaluation of novel literature-based hypotheses. PMID:20841825

  16. Automated sea floor extraction from underwater video

    NASA Astrophysics Data System (ADS)

    Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike

    2016-05-01

    Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create image optical and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of location and orientation of the image frame in mosaic. Image registration improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information. A disparity map was created using the parallax from overlapping viewpoints of a given area and the relative height data was utilized to create a three-dimensional, textured elevation map.
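    The OCR step described above, two-dimensional correlation against a known character set, amounts to picking the glyph template with the highest normalized cross-correlation for each character cell. A toy sketch with a hypothetical 3x3 two-glyph "font" (the project's actual character set and cell sizes are not given):

```python
import numpy as np

# Tiny hypothetical binary glyph templates for a known character set
glyphs = {
    "1": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "7": np.array([[1, 1, 1], [0, 0, 1], [0, 1, 0]], float),
}

def recognize(cell):
    """Return the glyph whose normalized 2-D correlation with the cell is highest."""
    def score(t):
        t0, c0 = t - t.mean(), cell - cell.mean()
        return (t0 * c0).sum() / (np.linalg.norm(t0) * np.linalg.norm(c0))
    return max(glyphs, key=lambda k: score(glyphs[k]))

cell = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float)  # a noiseless "1"
print(recognize(cell))   # → 1
```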

  17. Automated Extraction of Family History Information from Clinical Notes

    PubMed Central

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S.; Winden, Tamara J.; Carter, Elizabeth W.; Melton, Genevieve B.

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication (“indicator phrases”), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications. PMID:25954443
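    The F-scores reported above are the standard harmonic mean of precision and recall. A minimal sketch (the precision/recall values below are illustrative, not taken from the paper):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (the F-score used above)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative: a component with precision 0.95 and recall 0.90
print(round(100 * f1(0.95, 0.90), 1))   # → 92.4
```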

  18. Docking automation related technology, Phase 2 report

    SciTech Connect

    Jatko, W.B.; Goddard, J.S.; Gleason, S.S.; Ferrell, R.K.

    1995-04-01

    This report summarizes the progress of Phase II of the Docking Automated Related Technologies task component within the Modular Artillery Ammunition Delivery System (MAADS) technology demonstrator of the Future Armored Resupply Vehicle (FARV) project. This report also covers development activity at Oak Ridge National Laboratory (ORNL) during the period from January to July 1994.

  19. Automated vasculature extraction from placenta images

    NASA Astrophysics Data System (ADS)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  20. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time-consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  1. Automated Boundary-Extraction and Region-Growing Techniques Applied to Solar Magnetograms

    NASA Technical Reports Server (NTRS)

    McAteer, R. T. James; Gallagher, Peter; Ireland, Jack; Young, C Alex

    2005-01-01

    We present an automated approach to active region extraction from full disc MDI longitudinal magnetograms. This uses a region-growing technique in conjunction with boundary-extraction to define a number of enclosed contours as belonging to separate regions of magnetic significance on the solar disc. This provides an objective definition of active regions and areas of plage on the Sun. A number of parameters relating to the flare-potential of each region are discussed.
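    The region-growing step above can be sketched as a flood fill that starts from a seed pixel and absorbs connected neighbours whose field strength exceeds a threshold. The field values, seed, and threshold below are synthetic placeholders, not MDI parameters:

```python
import numpy as np
from collections import deque

def region_grow(field, seed, threshold):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    absolute field value exceeds the threshold (e.g. |B| on a magnetogram)."""
    mask = np.zeros(field.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if (0 <= r < field.shape[0] and 0 <= c < field.shape[1]
                and not mask[r, c] and abs(field[r, c]) > threshold):
            mask[r, c] = True
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Synthetic "magnetogram": one square patch of strong field on a quiet disc
field = np.zeros((10, 10))
field[2:5, 2:5] = 500.0              # hypothetical strong-field patch
mask = region_grow(field, seed=(3, 3), threshold=100.0)
print(mask.sum())   # → 9 pixels in the extracted region
```

    Boundary extraction would then trace the contour of each connected mask to delimit the region of magnetic significance.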

  2. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management. PMID:21040869
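    After OCR converts a dose sheet to ASCII text, the parsing stage reduces to pattern matching on acquisition lines. A sketch with a hypothetical dose-sheet layout (real sheets vary by scanner vendor, and this is not the authors' parser):

```python
import re

# Hypothetical OCR output of a CT dose sheet
ocr_text = """\
Series  Type      CTDIvol(mGy)  DLP(mGy-cm)
1       Scout     -             -
2       Helical   12.45         498.20
3       Helical   8.10          240.35
"""

# Pull (series, CTDIvol, DLP) triples from each acquisition line;
# lines without numeric dose values (e.g. the scout) are skipped.
pattern = re.compile(r"^(\d+)\s+\S+\s+([\d.]+)\s+([\d.]+)\s*$", re.MULTILINE)
doses = [(int(s), float(ctdi), float(dlp))
         for s, ctdi, dlp in pattern.findall(ocr_text)]

total_dlp = sum(dlp for _, _, dlp in doses)
print(doses, total_dlp)   # two acquisitions, total DLP 738.55
```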

  3. Improved Automated Seismic Event Extraction Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Mackey, L.; Kleiner, A.; Jordan, M. I.

    2009-12-01

    Like many organizations engaged in seismic monitoring, the Preparatory Commission for the Comprehensive Test Ban Treaty Organization collects and processes seismic data from a large network of sensors. This data is continuously transmitted to a central data center, and bulletins of seismic events are automatically extracted. However, as for many such automated systems at present, the inaccuracy of this extraction necessitates substantial human analyst review effort. A significant opportunity for improvement thus lies in the fact that these systems currently fail to fully utilize the valuable repository of historical data provided by prior analyst reviews. In this work, we present the results of the application of machine learning approaches to several fundamental sub-tasks in seismic event extraction. These methods share as a common theme the use of historical analyst-reviewed bulletins as ground truth from which they extract relevant patterns to accomplish the desired goals. For instance, we demonstrate the effectiveness of classification and ranking methods for the identification of false events -- that is, those which will be invalidated and discarded by analysts -- in automated bulletins. We also show gains in the accuracy of seismic phase identification via the use of classification techniques to automatically assign seismic phase labels to station detections. Furthermore, we examine the potential of historical association data to inform the direct association of new signal detections with their corresponding seismic events. Empirical results are based upon parametric historical seismic detection and event data received from the Preparatory Commission for the Comprehensive Test Ban Treaty Organization.

  4. Automated RNA Extraction and Purification for Multiplexed Pathogen Detection

    SciTech Connect

    Bruzek, Amy K.; Bruckner-Lea, Cindy J.

    2005-01-01

Pathogen detection has become an extremely important part of our nation's defense in this post-9/11 world, where the threat of bioterrorist attacks is a grim reality. When a biological attack takes place, response time is critical. The faster the biothreat is assessed, the faster countermeasures can be put in place to protect the health of the general public. Today some of the most widely used methods for detecting pathogens are either time consuming or not reliable [1]. Therefore, a method for detecting multiple pathogens that is inherently reliable, rapid, automated, and field-portable is needed. To that end, we are developing automated fluidics systems for the recovery, cleanup, and direct labeling of community RNA from suspect environmental samples. The advantage of using RNA for detection is that there are multiple copies of mRNA in a cell, whereas there are normally only one or two copies of DNA [2]. Because there are multiple copies of mRNA in a cell for highly expressed genes, no amplification of the genetic material may be necessary, and thus rapid and direct detection of only a few cells may be possible [3]. This report outlines the development of both manual and automated methods for the extraction and purification of mRNA. The methods were evaluated using cell lysates from Escherichia coli 25922 (nonpathogenic), Salmonella typhimurium (pathogenic), and Shigella spp. (pathogenic). Automated RNA purification was achieved using a custom sequential injection fluidics system consisting of a syringe pump, a multi-port valve, and a magnetic capture cell. mRNA was captured using silica-coated superparamagnetic beads that were trapped in the tubing by a rare earth magnet. RNA was detected by gel electrophoresis and/or by hybridization of the RNA to microarrays. The versatility of the fluidics systems and the ability to automate them allow for quick and easy processing of samples and eliminate the need for an experienced operator.

  5. Arduino-based automation of a DNA extraction system.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

There have been many studies on detecting infectious diseases with molecular genetic methods. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic beads, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators carry out the sequence of steps in the DNA extraction process, such as transporting, mixing, and washing the gene specimen, magnetic beads, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G-code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delays, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travel. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile. PMID:26409535
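The host-side sequence execution described above can be sketched as a tiny G-code interpreter that tracks axis positions and dwell time. The command subset and axis names below are assumptions for illustration; the paper does not specify the controller's exact command set.

```python
# Minimal host-side sketch: execute a G-code-like sequence, tracking axis
# positions and accumulated dwell time. G1/G4/G28 semantics follow common
# G-code conventions; the real controller protocol is not specified here.
def run_sequence(lines):
    pos = {"X": 0.0, "Y": 0.0}   # linear axis positions
    dwell = 0.0                  # total seconds spent in dwell commands
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        cmd, args = parts[0], parts[1:]
        if cmd == "G1":          # linear move, e.g. "G1 X10.5"
            for a in args:
                pos[a[0]] = float(a[1:])
        elif cmd == "G4":        # dwell, e.g. "G4 P2.0" (seconds)
            dwell += float(args[0][1:])
        elif cmd == "G28":       # homing: return all axes to reference
            pos = {k: 0.0 for k in pos}
    return pos, dwell

# Example: home, move to the mixing chamber, wait, shift, and return.
sequence = ["G28", "G1 X10.5", "G4 P2.0", "G1 Y4.0", "G1 X0"]
final_pos, total_dwell = run_sequence(sequence)
```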

  6. Automated tools for phenotype extraction from medical records.

    PubMed

    Yetisgen-Yildiz, Meliha; Bejan, Cosmin A; Vanderwende, Lucy; Xia, Fei; Evans, Heather L; Wurfel, Mark M

    2013-01-01

Clinical research studying critical illness phenotypes relies on the identification of clinical syndromes defined by consensus definitions. Historically, identifying phenotypes has required manual chart review, a time- and resource-intensive process. The overall research goal of the Critical Illness PHenotype ExtRaction (deCIPHER) project is to develop automated approaches based on natural language processing and machine learning that accurately identify phenotypes from EMRs. We chose pneumonia as our first critical illness phenotype and conducted preliminary experiments to explore the problem space. In this abstract, we outline the tools we built for processing clinical records, present our preliminary findings for pneumonia extraction, and describe future steps. PMID:24303281

  7. Automated labeling of bibliographic data extracted from biomedical online journals

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2003-01-01

A prototype system has been designed to automate the extraction of bibliographic data (e.g., article title, authors, abstract, affiliation and others) from online biomedical journals to populate the National Library of Medicine's MEDLINE database. This paper describes a key module in this system: the labeling module that employs statistics and fuzzy rule-based algorithms to identify segmented zones in an article's HTML pages as specific bibliographic data. Results from experiments conducted with 1,149 medical articles from forty-seven journal issues are presented.

  8. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

This paper presents an assessment of feature extraction methods used in the automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features from those available for the classification of Doppler signals was addressed. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for the diagnosis of ophthalmic and internal carotid arterial diseases. The feature extraction methods were assessed by considering the performance of the MLPNNs, evaluated in terms of convergence rates (number of training epochs) and total classification accuracies. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106

  9. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
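The core RR computation can be sketched on a single cross-shore profile, assuming relative relief is defined as (z − z_min)/(z_max − z_min) within a moving window and averaged over several window sizes. The window sizes and the profile below are illustrative assumptions.

```python
# Sketch of multi-scale relative relief (RR) along one elevation profile:
# RR at each point is (z - min)/(max - min) within a moving window,
# averaged over several window sizes (here chosen arbitrarily).
def relative_relief(profile, window_sizes):
    n = len(profile)
    rr = [0.0] * n
    for w in window_sizes:
        half = w // 2
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            seg = profile[lo:hi]
            zmin, zmax = min(seg), max(seg)
            # Flat windows contribute zero relief.
            rr[i] += 0.0 if zmax == zmin else (profile[i] - zmin) / (zmax - zmin)
    return [v / len(window_sizes) for v in rr]

# Idealized beach-to-dune profile: low flat beach, steep face, high crest.
profile = [0.0, 0.1, 0.2, 1.0, 2.5, 4.0, 4.1, 4.2]
rr = relative_relief(profile, window_sizes=[3, 5])
```

Locally low RR marks flats (beach, crest plateau) and high RR marks the dune face, which is what lets toe, crest, and heel be picked without hand-drawn regions.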

  10. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
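The curvature-based dune toe pick described above can be sketched on one synthetic cross-shore transect, with curvature approximated by a centered second difference. The transect values and grid spacing are illustrative assumptions.

```python
# Sketch of the curvature idea on a single cross-shore transect: the dune
# toe is picked as the point of maximum curvature, approximated with a
# centered second difference on a regular grid (dx in metres).
def dune_toe_index(elev, dx=0.5):
    """Return the index of maximum curvature along an elevation transect."""
    curv = [0.0] * len(elev)
    for i in range(1, len(elev) - 1):
        curv[i] = (elev[i - 1] - 2 * elev[i] + elev[i + 1]) / dx ** 2
    return max(range(len(elev)), key=lambda i: curv[i])

# Synthetic profile: flat beach (indices 0-4) meeting a planar dune face.
transect = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
toe = dune_toe_index(transect)
```

With the toe located, the transect splits naturally into the beach/berm portion shoreward of it and the foredune portion landward of it, as in the workflow above.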

  11. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Our results indicate that ChemReader outperforms existing software programs such as OSRA, Kekule, and CLiDE on several sets of sample images from diverse sources, in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research

  12. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data for the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.

  13. Automated Extraction of Substance Use Information from Clinical Texts

    PubMed Central

    Wang, Yan; Chen, Elizabeth S.; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W.; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B.

    2015-01-01

Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, PropBank, and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6, and 89.4, respectively, for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5, respectively, for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge and applied to a wide breadth of substance-use free-text clinical notes. PMID:26958312

  14. Automated Dsm Extraction from Uav Images and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2015-08-01

As technology evolves, unmanned aerial vehicle (UAV) imagery is being used for everything from simple applications, such as image acquisition, to complicated applications, such as 3D spatial information extraction. Spatial information is usually provided in the form of a DSM or point cloud, and it is important to generate very dense tie points automatically from stereo images. In this paper, we apply a stereo image-matching technique developed for satellite/aerial images to UAV images, propose processing steps for automated DSM generation, and analyse the feasibility of DSM generation. For DSM generation from UAV images, firstly, exterior orientation parameters (EOPs) for each dataset were adjusted. Secondly, optimum matching pairs were determined. Thirdly, stereo image matching was performed for each pair; the matching algorithm is based on grey-level correlation of pixels applied along epipolar lines. Finally, the extracted matching results were merged into one result and the final DSM was generated. The generated DSM was compared with a reference DSM from lidar; overall accuracy was 1.5 m in NMAD. However, several problems remain to be solved in future work, including obtaining precise EOPs and handling occlusion and image blurring. A more effective interpolation technique also needs to be developed.

  15. Automated Tract Extraction via Atlas Based Adaptive Clustering

    PubMed Central

    Tunç, Birkan; Parker, William A.; Ingalhalikar, Madhura; Verma, Ragini

    2014-01-01

    Advancements in imaging protocols such as the high angular resolution diffusion-weighted imaging (HARDI) and in tractography techniques are expected to cause an increase in the tract-based analyses. Statistical analyses over white matter tracts can contribute greatly towards understanding structural mechanisms of the brain since tracts are representative of the connectivity pathways. The main challenge with tract-based studies is the extraction of the tracts of interest in a consistent and comparable manner over a large group of individuals without drawing the inclusion and exclusion regions of interest. In this work, we design a framework for automated extraction of white matter tracts. The framework introduces three main components, namely a connectivity based fiber representation, a fiber clustering atlas, and a clustering approach called Adaptive Clustering. The fiber representation relies on the connectivity signatures of fibers to establish an easy correspondence between different subjects. A group-wise clustering of these fibers that are represented by the connectivity signatures is then used to generate a fiber bundle atlas. Finally, Adaptive Clustering incorporates the previously generated clustering atlas as a prior, to cluster the fibers of a new subject automatically. Experiments on the HARDI scans of healthy individuals acquired repeatedly, demonstrate the applicability, the reliability and the repeatability of our approach in extracting white matter tracts. By alleviating the seed region selection or the inclusion/exclusion ROI drawing requirements that are usually handled by trained radiologists, the proposed framework expands the range of possible clinical applications and establishes the ability to perform tract-based analyses with large samples. PMID:25134977

  16. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library

    PubMed Central

    Leung, Kelvin K.; Barnes, Josephine; Modat, Marc; Ridgway, Gerard R.; Bartlett, Jonathan W.; Fox, Nick C.; Ourselin, Sébastien

    2011-01-01

Whole brain extraction is an important pre-processing step in neuro-image analysis. Manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies, meaning that automated techniques are preferable. The accuracy and robustness of automated methods are crucial because human expertise may be required to correct any sub-optimal results, which can be very time consuming. We compared the accuracy of four automated brain extraction methods: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), Hybrid Watershed Algorithm (HWA) and a Multi-Atlas Propagation and Segmentation (MAPS) technique we have previously developed for hippocampal segmentation. The four methods were applied to extract whole brains from 682 1.5T and 157 3T T1-weighted MR baseline images from the Alzheimer’s Disease Neuroimaging Initiative database. Semi-automated brain segmentations with manual editing and checking were used as the gold standard against which the results were compared. The median Jaccard index of MAPS was higher than HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests), and the 1st-99th centile range of the Jaccard index of MAPS was smaller than HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests). HWA and MAPS were found to be best at including all brain tissues (median false negative rate ≤ 0.010% for 1.5T scans and ≤ 0.019% for 3T scans, both methods). The median Jaccard index of MAPS was similar in both 1.5T and 3T scans, whereas those of BET, BSE and HWA were higher in 1.5T scans than 3T scans (p < 0.05, all tests). We found that the diagnostic group had a small effect on the median Jaccard index of all four methods. In conclusion, MAPS had relatively high accuracy and low variability compared to HWA, BET and BSE in MR scans with and without atrophy. PMID:21195780
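The evaluation metric used in the comparison above, the Jaccard index, can be sketched directly on voxel sets. The masks below are toy data; in the study they come from automated and gold-standard MR segmentations.

```python
# Jaccard index |A ∩ B| / |A ∪ B| between an automated brain mask and a
# gold-standard mask, each represented as a set of voxel coordinates.
def jaccard(mask_a, mask_b):
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0

# Toy 4-voxel masks: three voxels agree, one disagrees on each side.
gold = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
auto = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (2, 2, 2)}
score = jaccard(gold, auto)
```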

  17. ACIS Extract: A Chandra/ACIS Tool for Automated Point Source Extraction and Spectral Fitting

    NASA Astrophysics Data System (ADS)

    Townsley, L.; Broos, P.; Bauer, F.; Getman, K.

    2003-03-01

    ACIS Extract (AE) is an IDL program that assists the observer in performing the many tasks involved in analyzing the spectra of large numbers of point sources observed with the ACIS instrument on Chandra. Notably, all tasks are performed in a context that may include multiple observations of the field. Features of AE and its several accessory tools include refining the accuracy of source positions, defining extraction regions based on the PSF of each source in each observation, generating single-observation and composite ARFs and RMFs, applying energy-dependent aperture corrections to the ARFs, computing light curves and K-S tests for source variability, automated broad-band photometry, automated spectral fitting and review of fitting results, and compilation of results into LaTeX tables. A variety of interactive plots are produced showing various source properties across the catalog. This poster details the capabilities of the package and shows example output. The code and a detailed users' manual are available to the community at http://www.astro.psu.edu/xray/docs/TARA/ae_users_guide.html. Support for this effort was provided by NASA contract NAS8-38252 to Gordon Garmire, the ACIS Principal Investigator.

  18. A COMPARISON OF AUTOMATED AND TRADITIONAL METHODS FOR THE EXTRACTION OF ARSENICALS FROM FISH

    EPA Science Inventory

    An automated extractor employing accelerated solvent extraction (ASE) has been compared with a traditional sonication method of extraction for the extraction of arsenicals from fish tissue. Four different species of fish and a standard reference material, DORM-2, were subjected t...

  19. AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...

  20. The Overview of Entity Relation Extraction Methods

    NASA Astrophysics Data System (ADS)

    Cheng, Xian-Yi; Chen, Xiao-Hong; Hua, Jin

Information extraction can be defined as the task of extracting information about specified events or facts, which is then stored in a database for users to query. Only when the relationships between the various entities are correct can the database be populated correctly, so entity relation extraction has become a key technology in information extraction systems. In this paper, we survey the state of entity relation extraction methods and identify several open problems in this field.

  1. Towards automated support for extraction of reusable components

    NASA Technical Reports Server (NTRS)

    Abd-El-hafiz, S. K.; Basili, Victor R.; Caldiera, Gianluigi

    1992-01-01

    A cost effective introduction of software reuse techniques requires the reuse of existing software developed in many cases without aiming at reusability. This paper discusses the problems related to the analysis and reengineering of existing software in order to reuse it. We introduce a process model for component extraction and focus on the problem of analyzing and qualifying software components which are candidates for reuse. A prototype tool for supporting the extraction of reusable components is presented. One of the components of this tool aids in understanding programs and is based on the functional model of correctness. It can assist software engineers in the process of finding correct formal specifications for programs. A detailed description of this component and an example to demonstrate a possible operational scenario are given.

  2. Towards automated support for extraction of reusable components

    NASA Technical Reports Server (NTRS)

    Abd-El-hafiz, S. K.; Basili, V. R.; Caldier, G.

    1991-01-01

    A cost effective introduction of software reuse techniques requires the reuse of existing software developed in many cases without aiming at reusability. This paper discusses the problems related to the analysis and reengineering of existing software in order to reuse it. We introduce a process model for component extraction and focus on the problem of analyzing and qualifying software components which are candidates for reuse. A prototype tool for supporting the extraction of reusable components is presented. One of the components of this tool aids in understanding programs and is based on the functional model of correctness. It can assist software engineers in the process of finding correct formal specifications for programs. A detailed description of this component and an example to demonstrate a possible operational scenario are given.

  3. Application and evaluation of automated methods to extract neuroanatomical connectivity statements from free text

    PubMed Central

    Pavlidis, Paul

    2012-01-01

    Motivation: Automated annotation of neuroanatomical connectivity statements from the neuroscience literature would enable accessible and large-scale connectivity resources. Unfortunately, the connectivity findings are not formally encoded and occur as natural language text. This hinders aggregation, indexing, searching and integration of the reports. We annotated a set of 1377 abstracts for connectivity relations to facilitate automated extraction of connectivity relationships from neuroscience literature. We tested several baseline measures based on co-occurrence and lexical rules. We compare results from seven machine learning methods adapted from the protein interaction extraction domain that employ part-of-speech, dependency and syntax features. Results: Co-occurrence based methods provided high recall with weak precision. The shallow linguistic kernel recalled 70.1% of the sentence-level connectivity statements at 50.3% precision. Owing to its speed and simplicity, we applied the shallow linguistic kernel to a large set of new abstracts. To evaluate the results, we compared 2688 extracted connections with the Brain Architecture Management System (an existing database of rat connectivity). The extracted connections were connected in the Brain Architecture Management System at a rate of 63.5%, compared with 51.1% for co-occurring brain region pairs. We found that precision increases with the recency and frequency of the extracted relationships. Availability and implementation: The source code, evaluations, documentation and other supplementary materials are available at http://www.chibi.ubc.ca/WhiteText. Contact: paul@chibi.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22954628
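The co-occurrence baseline mentioned above can be sketched in a few lines: any sentence mentioning two distinct brain regions is taken to assert a connection between them, which explains its high recall and weak precision. The region lexicon and sentence below are toy assumptions.

```python
# Co-occurrence baseline for connectivity extraction: emit a candidate
# connection for every pair of distinct brain regions in a sentence.
# The region lexicon here is a toy stand-in for a real neuroanatomy lexicon.
REGIONS = {"thalamus", "hippocampus", "amygdala", "cortex"}

def cooccurrence_pairs(sentence):
    """Return sorted region pairs co-occurring in one sentence."""
    words = {w.strip(".,;").lower() for w in sentence.split()}
    found = sorted(words & REGIONS)
    return [(a, b) for i, a in enumerate(found) for b in found[i + 1:]]

sent = "Projections from the thalamus terminate in the cortex."
pairs = cooccurrence_pairs(sent)
```

The kernel methods compared in the study add part-of-speech, dependency, and syntax features on top of candidates like these to recover precision.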

  4. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  5. Model-based automated extraction of microtubules from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

We propose a model-based automated approach to extracting microtubules from a noisy electron tomography volume. Our approach consists of volume enhancement, microtubule localization, and boundary segmentation to exploit the unique geometric and photometric properties of microtubules. The enhancement starts with an anisotropic invariant wavelet transform to enhance the microtubules globally, followed by a three-dimensional (3-D) tube-enhancing filter based on the Weingarten matrix to further accentuate the tubular structures locally. The enhancement ends with a modified coherence-enhancing diffusion to complete the interruptions along the microtubules. The microtubules are then localized with a centerline extraction algorithm adapted for tubular objects. To perform segmentation, we modify and extend the active shape model method in a novel way. We first use 3-D local surface enhancement to characterize the microtubule boundary and improve shape searching by relating the boundary strength to the weight matrix of the searching error. We then integrate the active shape model with Kalman filtering to exploit the longitudinal smoothness along the microtubules. The segmentation improved in this way is robust against missing boundaries and outliers, which are often present in the tomography volume. Experimental results demonstrate that our automated method produces results close to those of the manual process in only a fraction of the time. PMID:16871731

  6. Automating Nuclear-Safety-Related SQA Procedures with Custom Applications

    SciTech Connect

    Freels, James D.

    2016-01-01

    Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.

  7. Biomedical Relation Extraction: From Binary to Complex

    PubMed Central

    Zhong, Dayou

    2014-01-01

Biomedical relation extraction aims to uncover high-quality relations from life science literature with high accuracy and efficiency. Early biomedical relation extraction tasks focused on capturing binary relations, such as protein-protein interactions, which are crucial for virtually every process in a living cell. Information about these interactions provides the foundations for new therapeutic approaches. In recent years, more interest has shifted to the extraction of complex relations such as biomolecular events. While complex relations go beyond binary relations and involve more than two arguments, they might also take another relation as an argument. In this paper, we conduct a thorough survey of the research in biomedical relation extraction. We first present a general framework for biomedical relation extraction and then discuss the approaches proposed for binary and complex relation extraction, with a focus on the latter since it is a much more difficult task than binary relation extraction. Finally, we discuss the challenges we face with complex relation extraction and outline possible solutions and future directions. PMID:25214883

  8. Disposable and removable nucleic acid extraction and purification cartridges for automated flow-through systems

    DOEpatents

    Regan, John Frederick

    2014-09-09

Removable cartridges are used on automated flow-through systems for the purpose of extracting and purifying genetic material from complex matrices. Different types of cartridges are paired with specific automated protocols to concentrate, extract, and purify pathogenic or human genetic material. Their flow-through nature allows large quantities of sample to be processed. Matrices may be filtered using size-exclusion and/or affinity filters to concentrate the pathogen of interest. Lysed material is ultimately passed through a filter to remove the insoluble material before the soluble genetic material is delivered past a silica-like membrane that binds the genetic material, where it is washed, dried, and eluted. Cartridges are inserted into the housing areas of flow-through automated instruments, which are equipped with sensors to ensure proper placement and usage of the cartridges. Properly inserted cartridges create fluid- and air-tight seals with the flow lines of an automated instrument.

  9. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method.
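The omission-error comparison behind this kind of study can be sketched numerically: degrade a fine-resolution binary road mask to coarser pixels, then count the reference road pixels the coarse mask misses. The synthetic mask and the majority-vote degradation below are assumptions for illustration, not the authors' imagery or exact procedure.

```python
import numpy as np

# Degrade a fine-resolution binary road mask to coarser pixels by block
# majority vote, then compute the fraction of reference road pixels
# missed at the coarse scale (the omission error).
def degrade(mask, factor):
    h, w = mask.shape
    blocks = mask[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)) >= 0.5  # majority vote per block

def omission_error(reference, extracted_coarse, factor):
    # upsample the coarse mask back to the fine grid for comparison
    up = np.kron(extracted_coarse, np.ones((factor, factor), dtype=bool))
    missed = reference & ~up[:reference.shape[0], :reference.shape[1]]
    return missed.sum() / reference.sum()

ref = np.zeros((12, 12), dtype=bool)
ref[5:7, :] = True                 # a horizontal road two pixels wide
coarse = degrade(ref, 3)
err = omission_error(ref, coarse, 3)
```

In this particular alignment the two-pixel-wide road straddles a block boundary and vanishes entirely under 3× coarsening (err is 1.0); with a different alignment it can survive, which is why omission error must be averaged over a whole scene before choosing a pixel size.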

  10. Automated serial extraction of DNA and RNA from biobanked tissue specimens

    PubMed Central

    2013-01-01

    Background With increasing biobanking of biological samples, methods for large-scale extraction of nucleic acids are in demand. The lack of such techniques designed for extraction from tissues results in a bottleneck in downstream genetic analyses, particularly in the field of cancer research. We have developed an automated procedure for tissue homogenization and extraction of DNA and RNA into separate fractions from the same frozen tissue specimen. A purpose-developed, magnetic-bead-based technology for the serial extraction of both DNA and RNA from tissues was automated on a Tecan Freedom Evo robotic workstation. Results 864 fresh-frozen human normal and tumor tissue samples from breast and colon were serially extracted in batches of 96 samples. Yields and quality of DNA and RNA were determined. The DNA was evaluated in several downstream analyses, and the stability of RNA was determined after 9 months of storage. The extracted DNA performed consistently well in processes including PCR-based STR analysis, HaloPlex selection and deep sequencing on an Illumina platform, and gene copy number analysis using microarrays. The RNA has performed well in RT-PCR analyses and maintains integrity upon storage. Conclusions The technology described here enables the processing of many tissue samples simultaneously with a high quality product and a time and cost reduction for the user. This reduces the sample preparation bottleneck in cancer research. The open automation format also enables integration with upstream and downstream devices for automated sample quantitation or storage. PMID:23957867

  11. Automated Algorithm for Extraction of Wetlands from IRS Resourcesat Liss III Data

    NASA Astrophysics Data System (ADS)

    Subramaniam, S.; Saxena, M.

    2011-09-01

    Wetlands play a significant role in maintaining the ecological balance of both biotic and abiotic life in coastal and inland environments. Hence, understanding their occurrence and the spatial extent of change in the wetland environment is very important, and both can be monitored using satellite remote sensing techniques. The extraction of wetland features using remote sensing has so far been carried out using visual/hybrid digital analysis techniques, which are time-consuming. To monitor wetlands and their features at the national or state level, there is a need for an automated technique for the extraction of wetland features. A knowledge-based algorithm has been developed using a hierarchical decision tree approach for automated extraction of wetland features such as surface water spread, wet area, turbidity, and wet vegetation (including aquatic vegetation) for the pre- and post-monsoon periods. The results obtained for Chhattisgarh, India using the automated technique have been found to be satisfactory when compared with the hybrid digital/visual analysis technique.
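A hierarchical decision-tree rule base of this kind can be sketched as a per-pixel cascade of spectral-index tests. The NDVI and NDWI formulas below are standard; the thresholds, band values, and class names are illustrative assumptions, not the rules or values used in the paper.

```python
# Per-pixel hierarchical decision-tree sketch for wetland labeling.
# Index formulas are the standard NDVI and (green/NIR) NDWI; the
# thresholds are illustrative placeholders only.
def classify_pixel(green, red, nir):
    ndvi = (nir - red) / (nir + red)
    ndwi = (green - nir) / (green + nir)
    if ndwi > 0.3:          # strongly water-dominated reflectance
        return "surface water"
    if ndwi > 0.0:          # moist surface, but not open water
        return "wet area"
    if ndvi > 0.4:          # vigorous vegetation cover
        return "wet vegetation"
    return "other"

label = classify_pixel(green=0.30, red=0.05, nir=0.08)
```

The hierarchy matters: each rule only fires on pixels the earlier, more specific rules have rejected, which is what makes the tree "knowledge-based" rather than a flat set of thresholds.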

  12. The evaluation of a concept for a Canadian-made automated multipurpose materials extraction facility

    NASA Astrophysics Data System (ADS)

    Kleinberg, H.

    Long-term habitation of space will eventually require use of off-Earth resources to reduce long-term program costs and risks to personnel and equipment due to launch from Earth. Extraction of oxygen from lunar soil is a prime example. Processes currently under study for such activities focus on the extraction of only one element / chemical from one type of soil on one world, and they produce large amounts of waste material. This paper presents the results of an examination by Spar Aerospace of a plasma separation concept as part of a materials extraction facility that might be used in space. Such a process has the far-reaching potential for extracting any or all of the elements available in soil samples, extraction of oxygen from lunar soil being the near-term application. Plasma separation has the potential for a 100 percent yield of extracted elements from input samples, and the versatility to be used on many non-terrestrial sites for the extraction of available elemental resources. The development of new materials extraction processes for each world would thus be eliminated. Such a facility could also reduce the generation of waste products by decomposing soil samples into pure, stable elements. Robotics, automation, and a plasma separation facility could be used to gather, prepare, process, separate, collect and ship the available chemical elements. The following topics are discussed: automated soil-gathering using robotics; automated soil pre-processing; plasma dissociation and separation of soil, and collection of sorted elements in an automated process; containment of gases, storage of pure elements, metals; and automated shipment of materials to a manned base, or pick-up site.

  13. Automated multisyringe stir bar sorptive extraction using robust montmorillonite/epoxy-coated stir bars.

    PubMed

    Ghani, Milad; Saraji, Mohammad; Maya, Fernando; Cerdà, Víctor

    2016-05-01

    Herein we present a simple, rapid, and low-cost strategy for the preparation of robust stir bar coatings based on the combination of montmorillonite with epoxy resin. The composite stir bar was implemented in a novel automated multisyringe stir bar sorptive extraction system (MS-SBSE) and applied to the extraction of four chlorophenols (4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol, and pentachlorophenol) as model compounds, followed by high-performance liquid chromatography with diode array detection. The different experimental parameters of the MS-SBSE, such as sample volume, selection of the desorption solvent, desorption volume, desorption time, sample solution pH, salt effect, and extraction time, were studied. Under the optimum conditions, the detection limits were between 0.02 and 0.34 μg/L. Relative standard deviations (RSD) of the method for the analytes ranged from 3.5% to 4.1% (intra-day RSD, at the 10 μg/L concentration level) and from 3.9% to 4.3% (inter-day RSD, at the 50 μg/L concentration level). Batch-to-batch reproducibility for three different stir bars was 4.6-5.1%. The enrichment factors were between 30 and 49. To investigate the capability of the developed technique for real sample analysis, well water, wastewater, and leachates from a solid waste treatment plant were satisfactorily analyzed. PMID:27062720

  14. Comparison of an automated nucleic acid extraction system with the column-based procedure

    PubMed Central

    Hinz, Rebecca; Hagen, Ralf Matthias

    2015-01-01

    Here, we assessed the extraction efficiency of a deployable bench-top nucleic acid extractor EZ1 in comparison to the column-based approach with complex sample matrices. A total of 48 EDTA blood samples and 81 stool samples were extracted by EZ1 automated extraction and the column-based QIAamp DNA Mini Kit. Blood sample extractions were assessed by two real-time malaria PCRs, while stool samples were analyzed by six multiplex real-time PCR assays targeting bacterial, viral, and parasitic stool pathogens. Inhibition control PCR testing was performed as well. In total, 147 concordant and 13 discordant pathogen-specific PCR results were obtained. The latter comprised 11 positive results after column-based extraction only and two positive results after EZ1 extraction only. EZ1 extraction showed a higher frequency of inhibition. This phenomenon was, however, inconsistent for the different PCR schemes. In case of concordant PCR results, relevant differences of cycle threshold numbers for the compared extraction schemes were not observed. Switches from well-established column-based extraction to extraction with the automated EZ1 system do not lead to a relevantly reduced yield of target DNA when complex sample matrices are used. If sample inhibition is observed, column-based extraction from another sample aliquot may be considered. PMID:25883797

  15. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  16. Automated microfluidic DNA/RNA extraction with both disposable and reusable components

    NASA Astrophysics Data System (ADS)

    Kim, Jungkyu; Johnson, Michael; Hill, Parker; Sonkul, Rahul S.; Kim, Jongwon; Gale, Bruce K.

    2012-01-01

    An automated microfluidic nucleic acid extraction system was fabricated with a multilayer polydimethylsiloxane (PDMS) structure that consists of sample wells, microvalves, a micropump and a disposable microfluidic silica cartridge. Both the microvalves and micropump structures were fabricated in a single layer and are operated pneumatically using a 100 µm PDMS membrane. To fabricate the disposable microfluidic silica cartridge, two-cavity structures were made in a PDMS replica to fit the stacked silica membranes. A handheld controller for the microvalves and pumps was developed to enable system automation. With purified ribonucleic acid (RNA), whole blood and E. coli samples, the automated microfluidic nucleic acid extraction system was validated with a guanidine-based solid phase extraction procedure. An extraction efficiency of ~90% for deoxyribonucleic acid (DNA) and ~54% for RNA was obtained in 12 min from whole blood and E. coli samples, respectively. In addition, the same quantity and quality of extracted DNA was confirmed by polymerase chain reaction (PCR) amplification. The PCR also presented the appropriate amplification and melting profiles. Automated, programmable fluid control and physical separation of the reusable components and the disposable components significantly decrease the assay time and manufacturing cost and increase the flexibility and compatibility of the system with downstream components.

  17. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
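The PCA step in such a pipeline reduces to an eigendecomposition of the feature covariance matrix. Below is a NumPy-only sketch on synthetic data, assuming nothing about the ATR system's GOC/OT-MACH front end or its actual feature dimensions: ROI feature vectors are centered and projected onto the top principal components before being handed to a classifier.

```python
import numpy as np

# NumPy-only PCA sketch: project ROI feature vectors onto the top
# principal components. Data and dimensions are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))          # 100 ROIs, 16 raw features
X = X - X.mean(axis=0)                  # center before PCA

# Principal axes are eigenvectors of the sample covariance matrix.
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
top3 = eigvecs[:, ::-1][:, :3]          # keep the three largest components
Z = X @ top3                            # reduced feature matrix for the classifier
```

ICA replaces the eigendecomposition with a search for statistically independent (rather than merely uncorrelated) components, which is the contrast the abstract's two strategies explore.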

  18. Chemical documents: machine understanding and automated information extraction.

    PubMed

    Townsend, Joe A; Adams, Sam E; Waudby, Christopher A; de Souza, Vanessa K; Goodman, Jonathan M; Murray-Rust, Peter

    2004-11-21

    Automatically extracting chemical information from documents is a challenging task, but an essential one for dealing with the vast quantity of data that is available. The task is least difficult for structured documents, such as chemistry department web pages or the output of computational chemistry programs, but requires increasingly sophisticated approaches for less structured documents, such as chemical papers. The identification of key units of information, such as chemical names, makes the extraction of useful information from unstructured documents possible. PMID:15534707

  19. Dynamic electromembrane extraction: Automated movement of donor and acceptor phases to improve extraction efficiency.

    PubMed

    Asl, Yousef Abdossalami; Yamini, Yadollah; Seidi, Shahram; Amanzadeh, Hatam

    2015-11-01

    In the present research, dynamic electromembrane extraction (DEME) was introduced for the first time for the extraction and determination of ionizable species from different biological matrices. The setup proposed for DEME provides an efficient, stable, and reproducible way to increase extraction efficiency. It consists of a piece of hollow fiber mounted inside a glass flow cell by means of two plastic connector tubes. In this dynamic system, an organic solvent is impregnated into the pores of the hollow fiber as a supported liquid membrane (SLM); an aqueous acceptor solution is repeatedly pumped into the lumen of the hollow fiber by a syringe pump, whereas a peristaltic pump moves the sample solution around the mounted hollow fiber inside the flow cell. Two platinum electrodes, located in the lumen of the hollow fiber and in the glass flow cell, respectively, are connected to a power supply during extractions. The method was applied to the extraction of amitriptyline (AMI) and nortriptyline (NOR) as model analytes from biological fluids. Effective parameters on DEME of the model analytes were investigated and optimized. Under optimized conditions, the calibration curves were linear in the range of 2.0-100 μg/L with coefficients of determination (r²) greater than 0.9902 for both analytes. The relative standard deviations (RSD) were less than 8.4% based on four replicate measurements. LODs of less than 1.0 μg/L were obtained for both AMI and NOR. Preconcentration factors higher than 83-fold were obtained for the extraction of AMI and NOR in various biological samples. PMID:26455283

  20. Visual Routines for Extracting Magnitude Relations

    ERIC Educational Resources Information Center

    Michal, Audrey L.; Uttal, David; Shah, Priti; Franconeri, Steven L.

    2016-01-01

    Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations from graphs among college students and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple…

  1. Prescription Extraction from Clinical Notes: Towards Automating EMR Medication Reconciliation

    PubMed Central

    Wang, Yajuan; Steinhubl, Steven R.; Defilippi, Chrisopher; Ng, Kenney; Ebadollahi, Shahram; Stewart, Walter F.; Byrd, Roy J

    2015-01-01

    Medication information is one of the most important clinical data types in electronic medical records (EMR). This study developed an NLP application (PredMED) to extract full prescriptions and their relevant components from a large corpus of unstructured ambulatory office visit clinical notes and the corresponding structured medication reconciliation (MED_REC) data in the EMR. PredMED achieved an 84.4% F-score on office visit encounter notes and 95.0% on MED_REC data, outperforming two available medication extraction systems. To assess the potential for using automatically extracted prescriptions in the medication reconciliation task, we manually analyzed discrepancies between prescriptions found in clinical encounter notes and in matching MED_REC data for sample patient encounters. PMID:26306266
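The component structure of a full prescription (drug, dose, frequency) can be illustrated with a toy pattern-matching extractor. The regular expression and field names below are hypothetical assumptions for illustration; PredMED itself is a far more sophisticated system and its internals are not described here.

```python
import re

# Toy prescription-component extractor: pull drug name, dose, and
# frequency from one simple free-text line. Pattern and field names
# are hypothetical, not PredMED's.
RX = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+\s?mg)\s+(?P<freq>once|twice)\s+daily",
    re.IGNORECASE,
)

m = RX.search("Continue lisinopril 10 mg once daily for blood pressure.")
fields = m.groupdict() if m else {}
```

Real clinical notes defeat single patterns quickly (abbreviations, ranges, tapers, misspellings), which is why the study compares an NLP system against structured MED_REC data rather than relying on rules alone.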

  2. Discovering Indicators of Successful Collaboration Using Tense: Automated Extraction of Patterns in Discourse

    ERIC Educational Resources Information Center

    Thompson, Kate; Kennedy-Clark, Shannon; Wheeler, Penny; Kelly, Nick

    2014-01-01

    This paper describes a technique for locating indicators of success within the data collected from complex learning environments, proposing an application of e-research to access learner processes and measure and track group progress. The technique combines automated extraction of tense and modality via parts-of-speech tagging with a visualisation…
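The tense-and-modality cue extraction described above can be caricatured in a few lines: count modal verbs and simple past-tense forms per utterance. A real pipeline would use a trained part-of-speech tagger as the paper does; the word list and suffix heuristic below are illustrative assumptions only.

```python
# Crude sketch of tense/modality cue counting per utterance.
# MODALS and the "-ed" past-tense proxy are illustrative stand-ins
# for a real part-of-speech tagger.
MODALS = {"will", "would", "should", "could", "might", "must", "can"}

def discourse_cues(utterance):
    words = utterance.lower().split()
    modality = sum(w in MODALS for w in words)
    past = sum(w.endswith("ed") for w in words)  # very rough proxy
    return {"modality": modality, "past": past}

cues = discourse_cues("We should revise what we tested yesterday")
```

Aggregating such counts over a transcript gives the kind of time series of tense/modality usage that the paper then visualizes to track group progress.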

  3. Automated concept-level information extraction to reduce the need for custom software and rules development

    PubMed Central

    Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D

    2011-01-01

    Objective Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. Materials and methods A ‘learn by example’ approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Results Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. Discussion With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Conclusion Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation is available for download. PMID:21697292
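The F-measure, precision, and recall figures above combine in the standard way over predicted versus gold concept spans. The sketch below uses exact-span matching on made-up tuples (offsets plus concept type), not i2b2/VA data or the challenge's exact scoring script.

```python
# Precision/recall/F1 over concept spans, exact-match scoring.
# Span tuples below are invented examples: (start, end, concept_type).
def prf(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = {(0, 2, "problem"), (5, 6, "test"), (9, 11, "treatment")}
pred = {(0, 2, "problem"), (9, 11, "treatment"), (12, 13, "test")}
p, r, f1 = prf(gold, pred)
```

The abstract's pattern of precision near 0.90 with lagging recall means false positives were rare while many gold concepts went unfound; in this toy example precision and recall are equal by construction.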

  4. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level-based threshold cannot separate pleural effusion from the thoracic wall or mediastinum correctly, because the density of pleural effusion in CT images is similar to that of the thoracic wall and mediastinum. We have therefore developed an automated method that extracts pleural effusion by extracting the lung area together with the effusion. Our method used a template of the lung, obtained from a normal lung, to segment lungs with pleural effusions. The registration process consisted of two steps. The first step was global matching between normal and abnormal lungs of organs such as the bronchi, bones (ribs, sternum, and vertebrae), and the upper surfaces of the livers, which were extracted using a region-growing algorithm. The second step was local matching between normal and abnormal lungs, which were deformed by the parameters obtained from the global matching. Finally, we segmented a lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level-based threshold and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. Automated extraction of pleural effusion by extracting the lung area together with the effusion is promising for the diagnosis of pulmonary diseases by providing a quantitative volume of accumulating pleural effusion.

  5. Data Mining: The Art of Automated Knowledge Extraction

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.

    2012-12-01

    Data mining algorithms are used routinely in a wide variety of fields, and they are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data has flaws, and (b) the models and assumptions that we bring to the data are inevitably flawed, biased, or misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking the consistency of the user's model assumptions, and deciphering complex patterns and relationships that would not be possible otherwise. The common form of data collected from in situ spacecraft measurements is the multivariate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended them to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples, including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans, which include collaborative data mining, expert outsourcing of data mining, and computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities it would enable.
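One of the simplest streaming-anomaly ideas consistent with the talk's theme can be sketched directly: flag any point that sits far from a sliding-window mean, measured in units of the window's standard deviation. The window size, threshold, and data are arbitrary assumptions; the authors' actual event-detection algorithms are not described here.

```python
from collections import deque
import math

# Sliding-window z-score anomaly detector for a univariate stream.
# Window size and threshold are arbitrary illustrative choices.
def streaming_anomalies(stream, window=5, threshold=3.0):
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            if var > 0 and abs(x - mean) / math.sqrt(var) > threshold:
                flagged.append(i)
        buf.append(x)
    return flagged

idx = streaming_anomalies([1, 1, 2, 1, 2, 1, 50, 1, 2], window=5)
```

A weakness visible even here: once the outlier enters the window it inflates the variance and masks later points, so robust variants use median/MAD statistics or exclude flagged points from the window.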

  6. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    NASA Astrophysics Data System (ADS)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated by the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantage of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths, and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  7. Plasmid purification by phenol extraction from guanidinium thiocyanate solution: development of an automated protocol.

    PubMed

    Fisher, J A; Favreau, M B

    1991-05-01

    We have developed a novel plasmid isolation procedure and have adapted it for use on an automated nucleic acid extraction instrument. The protocol is based on the finding that phenol extraction of a 1 M guanidinium thiocyanate solution at pH 4.5 efficiently removes genomic DNA from the aqueous phase, while supercoiled plasmid DNA is retained in the aqueous phase. S1 nuclease digestion of the removed genomic DNA shows that it has been denatured, which presumably confers solubility in the organic phase. The complete automated protocol for plasmid isolation involves pretreatment of bacterial cells successively with lysozyme, RNase A, and proteinase K. Following these digestions, the solution is extracted twice with a phenol/chloroform/water mixture and once with chloroform. Purified plasmid is then collected by isopropanol precipitation. The purified plasmid is essentially free of genomic DNA, RNA, and protein and is a suitable substrate for DNA sequencing and other applications requiring highly pure supercoiled plasmid. PMID:1713749

  8. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications.

    PubMed

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik; Rand, Kasper D; Honoré Hansen, Steen; Petersen, Nickolaj Jacob

    2016-07-01

    The current work describes the implementation of electro membrane extraction (EME) into an autosampler for high-throughput analysis of samples by EME-LC-MS. The extraction probe was built into a luer lock adapter connected to a HTC PAL autosampler syringe. As the autosampler drew sample solution, analytes were extracted into the lumen of the extraction probe and transferred to a LC-MS system for further analysis. Various parameters affecting extraction efficacy were investigated, including syringe fill strokes, syringe pull-up volume, pull-up delay and volume in the sample vial. The system was optimized for soft extraction of analytes and high sample throughput. Further, it was demonstrated that by flushing the EME-syringe with acidic wash buffer and reversing the applied electric potential, carry-over between samples can be reduced to below 1%. Performance of the system was characterized (RSD, <10%; R², 0.994) and finally, the EME-autosampler was used to analyze in vitro conversion of methadone into its main metabolite by rat liver microsomes and for demonstrating the potential of known CYP3A4 inhibitors to prevent metabolism of methadone. By making use of the high extraction speed of EME, a complete analytical workflow of purification, separation, and analysis of sample could be achieved within only 5.5 min. With the developed system large sequences of samples could be analyzed in a completely automated manner. This high degree of automation makes the developed EME-autosampler a powerful tool for a wide range of applications where high-throughput extractions are required before sample analysis. PMID:27237618

  9. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high-quality nucleic acids for molecular analysis faces specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill-equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16, was compared to a widely used manual extraction kit, the MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  11. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  12. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining their tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently understudied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images using local entropy and Otsu thresholding; calculi were then recognized with PCA-SVM using the texture features of the prostatic calculus. The SVM classifier showed an average processing time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visualized features. This method is therefore effective for the automated recognition of prostatic calculi. PMID:21461364

  13. Automated identification of adverse events related to central venous catheters.

    PubMed

    Penz, Janet F E; Wilcox, Adam B; Hurdle, John F

    2007-04-01

    Methods for surveillance of adverse events (AEs) in clinical settings are limited by cost, technology, and appropriate data availability. In this study, two methods for semi-automated review of text records within the Veterans Administration database are utilized to identify AEs related to the placement of central venous catheters (CVCs): a Natural Language Processing program and a phrase-matching algorithm. A sample of manually reviewed records was then compared to the results of both methods to assess sensitivity and specificity. The phrase-matching algorithm was found to be a sensitive but relatively non-specific method, whereas the natural language processing system was significantly more specific but less sensitive. Positive predictive values for each method estimated the CVC-associated AE rate at this institution to be 6.4% and 6.2%, respectively. Using both methods together results in acceptable sensitivity and specificity (72.0% and 80.1%, respectively). All methods, including manual chart review, are limited by incomplete or inaccurate clinician documentation. A secondary finding was related to the completeness of administrative data (ICD-9 and CPT codes) used to identify intensive care unit patients in whom a CVC was placed. Administrative data identified less than 11% of patients who had a CVC placed. This suggests that other methods, including automated methods such as phrase matching, may be more sensitive than administrative data in identifying patients with devices. Considerable potential exists for the use of such methods for the identification of patients at risk, AE surveillance, and prevention of AEs through decision support technologies. PMID:16901760
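The phrase-matching side of such a surveillance pass reduces to a dictionary scan over free text. A minimal sketch follows; the trigger phrases and the clinical note are invented for illustration, not the study's lexicon:

```python
import re

# Hypothetical trigger phrases for CVC-related adverse events (illustrative only).
CVC_AE_PHRASES = [
    "pneumothorax",
    "line infection",
    "catheter-related bloodstream infection",
    "arterial puncture",
]

def flag_note(note: str, phrases=CVC_AE_PHRASES) -> list:
    """Return the trigger phrases found in a free-text clinical note."""
    text = note.lower()
    # Word-boundary matching avoids flagging substrings of longer words.
    return [p for p in phrases if re.search(r"\b" + re.escape(p) + r"\b", text)]

note = "CXR after CVC placement shows small apical pneumothorax; no line infection."
print(flag_note(note))  # → ['pneumothorax', 'line infection']
```

As the abstract notes, such matching is sensitive but non-specific: the negated "no line infection" above is still flagged, which is why the study pairs it with an NLP system.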

  14. Automation of Extraction Chromatographic and Ion Exchange Separations for Radiochemical Analysis and Monitoring

    SciTech Connect

    Grate, Jay W.; O'Hara, Matthew J.; Egorov, Oleg

    2009-08-19

    Radiochemical analysis, complete with the separation of radionuclides of interest from the sample matrix and from other interfering radionuclides, is often an essential step in the determination of the radiochemical composition of a nuclear sample or process stream. Although some radionuclides can be determined nondestructively by gamma spectroscopy, where the gamma rays penetrate significant distances in condensed media and the gamma ray energies are diagnostic for specific radionuclides, other radionuclides that may be of interest emit only alpha or beta particles. For these, samples must be taken for destructive analysis and radiochemical separations are required. For process monitoring purposes, the radiochemical separation and detection methods must be rapid so that the results will be timely. These results could be obtained by laboratory analysis or by radiochemical process analyzers operating on-line or at-site. In either case, there is a need for automated radiochemical analysis methods to provide speed, throughput, safety, and consistent analytical protocols. Classical methods of separation used during the development of nuclear technologies, namely manual precipitations, solvent extractions, and ion exchange, are slow and labor intensive. Fortunately, the convergence of digital instrumentation for preprogrammed fluid manipulation and the development of new separation materials for column-based isolation of radionuclides has enabled the development of automated radiochemical analysis methodology. The primary means for separating radionuclides in solution are liquid-liquid extraction and ion exchange. These processes are well known and have been reviewed in the past [1]. Ion exchange is readily employed in column formats. Liquid-liquid extraction can also be implemented in column formats using solvent-impregnated resins as extraction chromatographic materials. The organic liquid extractant is immobilized in the pores of a microporous polymer material. Under

  15. Evaluation of an automated hydrolysis and extraction method for quantification of total fat and lipid classes in cereal products.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The utility of an automated acid hydrolysis-extraction (AHE) system was evaluated for extraction of fat for the quantification of total, saturated, polyunsaturated, monounsaturated, and trans fat in cereal products. Oil extracted by the AHE system was assessed for total fat gravimetrically and by c...

  16. Automated extraction of acetylgestagens from kidney fat by matrix solid phase dispersion.

    PubMed

    Rosén, J; Hellenäs, K E; Törnqvist, P; Shearan, P

    1994-12-01

    A new extraction method for the acetylgestagens medroxyprogesterone acetate (MPA), chlormadinone acetate and megestrol acetate from kidney fat has been developed. The method combines matrix solid phase dispersion and solid phase extraction and is simpler and safer than previous methods, especially as it can be automated. The recovery was estimated as 59 +/- 5% (mean +/- standard deviation) for MPA. For screening purposes, detection can be achieved using a commercially available enzyme immunoassay kit, giving detection limits in the range of 1.0-2.0 ng g-1. PMID:7533481

  17. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA, which is based on vector representation, is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
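The Laplacian-of-Gaussian step can be illustrated in one dimension: the filter responds most strongly at the centre of a blob whose scale matches σ. The synthetic line-scan, σ, and kernel radius below are illustrative choices, not the paper's parameters:

```python
import math

def log_kernel(sigma: float, radius: int) -> list:
    """Discrete 1-D Laplacian-of-Gaussian kernel (second derivative of a Gaussian)."""
    return [((x * x - sigma * sigma) / sigma ** 4) * math.exp(-x * x / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]

def convolve(signal: list, kernel: list) -> list:
    """Plain O(n*k) convolution with zero padding at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

# Synthetic line-scan: one bright, soma-like blob centred at index 20.
signal = [math.exp(-((i - 20) ** 2) / (2 * 2.0 ** 2)) for i in range(50)]
response = convolve(signal, log_kernel(sigma=2.0, radius=8))
centre = min(range(len(response)), key=lambda i: response[i])
print(centre)  # → 20 (strongest negative LoG response at the blob centre)
```

Because the LoG response peaks only where intensity structure matches the filter scale, it discriminates blob-like structures from isolated bright pixels, which is the failure mode of naive intensity tracing described above.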

  19. Knowledge-based automated road network extraction system using multispectral images

    NASA Astrophysics Data System (ADS)

    Sun, Weihua; Messinger, David W.

    2013-04-01

    A novel approach for automated road network extraction from multispectral WorldView-2 imagery using a knowledge-based system is presented. This approach uses a multispectral flood-fill technique to extract asphalt pixels from satellite images; it then identifies prominent curvilinear structures using template matching. The extracted curvilinear structures provide an initial estimate of the road network, which is refined by the knowledge-based system. This system breaks the curvilinear structures into small segments and then groups them using a set of well-defined rules; a saliency check is then performed to prune the road segments. As a final step, these segments, carrying road width and orientation information, can be reconstructed to generate a proper road map. The approach is shown to perform well on various urban and suburban scenes. It can also be deployed to extract road networks in large-scale scenes.
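The multispectral flood-fill step can be sketched as a breadth-first region grow that admits neighbours within a spectral-distance threshold of the seed. The two-band toy image, seed, and tolerance below are invented, not WorldView-2 data:

```python
from collections import deque

def spectral_dist(a, b):
    """Euclidean distance between two pixel spectra (tuples of band values)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def flood_fill(image, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose spectral distance
    to the seed pixel is below `tol` (toy stand-in for the asphalt criterion)."""
    h, w = len(image), len(image[0])
    ref = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and spectral_dist(image[nr][nc], ref) < tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# 4x4 "two-band" image: a dark asphalt stripe (row 1) in brighter surroundings.
img = [[(90, 80), (92, 81), (91, 79), (90, 82)],
       [(20, 18), (22, 19), (21, 17), (20, 20)],
       [(88, 77), (90, 80), (89, 78), (91, 81)],
       [(90, 79), (92, 82), (88, 80), (90, 78)]]
road = flood_fill(img, seed=(1, 0), tol=10.0)
print(sorted(road))  # → the four stripe pixels in row 1
```

The real system operates on eight bands and feeds the grown region to template matching; the control flow, however, is exactly this seeded region growth.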

  20. Automated Extraction of Absorption Bands from Reflectance Spectra

    NASA Technical Reports Server (NTRS)

    Huguenin, R. L.; Vale, L.; Mcintire, D.; Jones, J.

    1985-01-01

    A multiple high order derivative spectroscopy technique has been developed for deriving wavelength positions, half widths, and heights of absorption bands in reflectance spectra. The technique is applicable to laboratory spectra as well as medium resolution (100-200/cm) telescope or spacecraft spectra with moderate (few percent) noise. The technique permits absorption band positions to be detected with an accuracy of better than 3%, and often better than 1%. The high complexity of radiative transfer processes in diffusely reflected spectra can complicate the determination of absorption band positions. Continuum reflections, random illumination geometries within the material, phase angle effects, composite overlapping bands, and calibration uncertainties can shift apparent band positions by 20% from their actual positions or mask them beyond detection. Using multiple high order derivative analysis, effects of scattering continua, phase angle, and calibration (smooth features) are suppressed. Inflection points that characterize the positions and half widths of constituent bands are enhanced by the process and directly detected with relatively high sensitivity.
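The key property the abstract relies on, that smooth continua are suppressed by differentiation while band inflections survive, can be shown with repeated central differences: a linear continuum vanishes by the second pass, leaving the band's high-order structure. The synthetic spectrum, slope, and band parameters here are invented:

```python
import math

def derivative(y, order):
    """Apply a central difference `order` times; each pass trims one point
    from both ends, so index i of the result maps to original index i + order."""
    for _ in range(order):
        y = [(y[i + 1] - y[i - 1]) / 2.0 for i in range(1, len(y) - 1)]
    return y

# Synthetic reflectance: sloping continuum with a Gaussian absorption band at x = 40.
spectrum = [0.8 + 0.002 * x - 0.3 * math.exp(-((x - 40) ** 2) / (2 * 4.0 ** 2))
            for x in range(80)]
d4 = derivative(spectrum, order=4)
band = min(range(len(d4)), key=lambda i: d4[i]) + 4  # +4 restores the trimmed offset
print(band)  # → 40: the continuum slope is gone, only the band centre remains
```

The fourth derivative of the (negative) Gaussian band is most negative at the band centre, so its minimum recovers the position even though the raw spectrum's minimum is shifted by the sloping continuum.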

  1. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
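A minimal fixed-radius version of the Circular Hough Transform is enough to show the voting idea: each edge (rim) point votes for every centre that could have produced it, and votes pile up at the true centre. The synthetic crater rim, radius, and angular sampling below are invented, not the SERDP/ESTCP data:

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, radius, steps=72):
    """Vote each edge point onto the locus of possible circle centres at a
    fixed radius; the accumulator peak is the best centre estimate."""
    votes = Counter()
    for (r, c) in edge_points:
        for s in range(steps):
            theta = 2 * math.pi * s / steps
            centre = (round(r - radius * math.sin(theta)),
                      round(c - radius * math.cos(theta)))
            votes[centre] += 1
    return votes.most_common(1)[0][0]

# Synthetic "crater rim": points on a circle of radius 5 centred at (12, 12).
rim = [(round(12 + 5 * math.sin(t * math.pi / 18)),
        round(12 + 5 * math.cos(t * math.pi / 18))) for t in range(36)]
print(hough_circle_centers(rim, radius=5))  # accumulator peak on or next to (12, 12)
```

Scanning the radius parameter, as the abstract describes, is a loop over this fixed-radius vote, which is what lets the method filter crater candidates by expected scale.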

  2. Automated diagnosis of Age-related Macular Degeneration using greyscale features from digital fundus images.

    PubMed

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Koh, Joel E W; Chandran, Vinod; Chua, Chua Kuang; Tan, Jen Hong; Lim, Choo Min; Ng, E Y K; Noronha, Kevin; Tong, Louis; Laude, Augustinus

    2014-10-01

    Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet. Most people with macular degeneration have the dry form. Early symptoms of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists. This is a time-consuming, tiresome process, and hence an automated AMD screening tool can aid clinicians in their diagnosis significantly. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractal Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback-Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristic (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimal ranked features using the SVM classifier for the private, ARIA and STARE datasets respectively. This automated AMD detection system can be used for mass fundus image screening and aid clinicians by making better use of their expertise on selected images that
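The simplest of the listed features, Shannon entropy, reduces to a histogram computation over grey levels; a textured retinal patch yields high entropy, a flat one yields zero. The toy patches below are invented data, not fundus images:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a greyscale histogram - one of the texture
    features used to separate normal from AMD fundus images."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [128] * 64           # uniform patch: no texture, entropy 0
textured = list(range(64))  # 64 distinct grey levels, maximal entropy
print(shannon_entropy(textured))  # → 6.0 (log2 of 64 levels)
```

The other entropies (Kapur, Renyi, Yager) generalize this same histogram-probability form with different weighting functions.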

  3. Analysis of Selected Factors Relative to Automated School Scheduling Processes.

    ERIC Educational Resources Information Center

    Chaffee, Leonard M.; Heller, Robert W.

    Project PASS (Project in Automated School Scheduling) was sponsored in 1965 by the Western New York School Study Council to provide in-service education for school personnel contemplating the use of automated approaches to school scheduling. Two techniques were utilized--Class Loading and Student Selection (CLASS), and General Academic Simulation…

  4. Analyzing Automated Instructional Systems: Metaphors from Related Design Professions.

    ERIC Educational Resources Information Center

    Jonassen, David H.; Wilson, Brent G.

    Noting that automation has had an impact on virtually every manufacturing and information operation in the world, including instructional design (ID), this paper suggests three basic metaphors for automating instructional design activities: (1) computer-aided design and manufacturing (CAD/CAM) systems; (2) expert system advisor systems; and (3)…

  5. BRONCO: Biomedical entity Relation ONcology COrpus for extracting gene-variant-disease-drug relations

    PubMed Central

    Lee, Kyubum; Lee, Sunwon; Park, Sungjoon; Kim, Sunkyu; Kim, Suhkyung; Choi, Kwanghun; Tan, Aik Choon; Kang, Jaewoo

    2016-01-01

    Comprehensive knowledge of genomic variants in a biological context is key for precision medicine. As next-generation sequencing technologies improve, the amount of literature containing genomic variant data, such as new functions or related phenotypes, rapidly increases. Because numerous articles are published every day, it is almost impossible to manually curate all the variant information from the literature. Many researchers focus on creating an improved automated biomedical natural language processing (BioNLP) method that extracts useful variants and their functional information from the literature. However, there is no gold-standard data set that contains texts annotated with variants and their related functions. To overcome these limitations, we introduce a Biomedical entity Relation ONcology COrpus (BRONCO) that contains more than 400 variants and their relations with genes, diseases, drugs and cell lines in the context of cancer and anti-tumor drug screening research. The variants and their relations were manually extracted from 108 full-text articles. BRONCO can be utilized to evaluate and train new methods used for extracting biomedical entity relations from full-text publications, and thus be a valuable resource to the biomedical text mining research community. Using BRONCO, we quantitatively and qualitatively evaluated the performance of three state-of-the-art BioNLP methods. We also identified their shortcomings, and suggested remedies for each method. We implemented post-processing modules for the three BioNLP methods, which improved their performance. Database URL: http://infos.korea.ac.kr/bronco PMID:27074804

  7. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. Concurrent processing through the high performance computing environment is shown to facilitate and refine
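The flow-direction and flow-accumulation stages of such an extraction can be illustrated with a toy D8 accumulation step (sink filling omitted; the 3x3 DEM below is invented):

```python
def d8_flow_accumulation(dem):
    """D8 flow accumulation on a small DEM: each cell drains to its steepest
    downslope 8-neighbour; cells are visited from high to low elevation so
    every cell passes its accumulated area downstream exactly once."""
    h, w = len(dem), len(dem[0])
    acc = [[1] * w for _ in range(h)]  # every cell contributes itself
    order = sorted(((dem[r][c], r, c) for r in range(h) for c in range(w)),
                   reverse=True)
    for z, r, c in order:
        best, drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    dist = (dr * dr + dc * dc) ** 0.5
                    slope = (z - dem[nr][nc]) / dist
                    if slope > drop:
                        best, drop = (nr, nc), slope
        if best:  # pass accumulated area to the steepest downslope neighbour
            acc[best[0]][best[1]] += acc[r][c]
    return acc

# Tiny valley sloping toward the bottom-right outlet.
dem = [[9, 8, 7],
       [8, 6, 5],
       [7, 5, 3]]
acc = d8_flow_accumulation(dem)
print(acc[2][2])  # → 9: the outlet drains all nine cells
```

Channels are then declared wherever accumulation exceeds a threshold, which is where the Strahler ordering and density estimation described above take over.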

  8. Extraction, identification, and functional characterization of a bioactive substance from automated compound-handling plastic tips.

    PubMed

    Watson, John; Greenough, Emily B; Leet, John E; Ford, Michael J; Drexler, Dieter M; Belcastro, James V; Herbst, John J; Chatterjee, Moneesh; Banks, Martyn

    2009-06-01

    Disposable plastic labware is ubiquitous in contemporary pharmaceutical research laboratories. Plastic labware is routinely used for chemical compound storage and during automated liquid-handling processes that support assay development, high-throughput screening, structure-activity determinations, and liability profiling. However, there is little information available in the literature on the contaminants released from plastic labware upon DMSO exposure and their resultant effects on specific biological assays. The authors report here the extraction, by simple DMSO washing, of a biologically active substance from one particular size of disposable plastic tips used in automated compound handling. The active contaminant was identified as erucamide ((Z)-docos-13-enamide), a long-chain mono-unsaturated fatty acid amide commonly used in plastics manufacturing, by gas chromatography/mass spectrometry analysis of the DMSO-extracted material. Tip extracts prepared in DMSO, as well as a commercially obtained sample of erucamide, were active in a functional bioassay of a known G-protein-coupled fatty acid receptor. A sample of a different disposable tip product from the same vendor did not release detectable erucamide following solvent extraction, and DMSO extracts prepared from this product were inactive in the receptor functional assay. These results demonstrate that solvent-extractable contaminants from some plastic labware used in the contemporary pharmaceutical research and development (R&D) environment can be introduced into physical and biological assays during routine compound management liquid-handling processes. These contaminants may further possess biological activity and are therefore a potential source of assay-specific confounding artifacts. PMID:19470712

  9. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites, from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing flow-rate decreases their retention. Standard curve r² values of 0.998-1.000 for each compound were consistently obtained and a quantitation level of 0.05 µg/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.
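The standard-curve r² values quoted above come from an ordinary least-squares fit of detector response against known concentration. A sketch with invented concentration/response pairs (not the study's calibration data):

```python
def linear_fit_r2(x, y):
    """Least-squares line and coefficient of determination r² for a
    calibration (standard) curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical standards: concentration (µg/l) vs. detector response.
conc = [0.05, 0.1, 0.5, 1.0, 2.0]
resp = [0.011, 0.021, 0.103, 0.199, 0.402]
slope, intercept, r2 = linear_fit_r2(conc, resp)
print(round(r2, 4))  # r² ≈ 0.9999, i.e. the near-perfect linearity reported
```

Sample concentrations are then read off the fitted line as (response - intercept) / slope, valid down to the quantitation level.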

  10. Automated CO2 extraction from air for clumped isotope analysis in the atmo- and biosphere

    NASA Astrophysics Data System (ADS)

    Hofmann, Magdalena; Ziegler, Martin; Pons, Thijs; Lourens, Lucas; Röckmann, Thomas

    2015-04-01

    The conventional stable isotope ratios 13C/12C and 18O/16O in atmospheric CO2 are a powerful tool for unraveling the global carbon cycle. In recent years, it has been suggested that the abundance of the very rare isotopologue 13C18O16O on m/z 47 might be a promising tracer to complement conventional stable isotope analysis of atmospheric CO2 [Affek and Eiler, 2006; Affek et al. 2007; Eiler and Schauble, 2004; Yeung et al., 2009]. Here we present an automated analytical system that is designed for clumped isotope analysis of atmo- and biospheric CO2. The carbon dioxide gas is quantitatively extracted from about 1.5 L of air (ATP). The automated stainless steel extraction and purification line consists of three main components: (i) a drying unit (a magnesium perchlorate unit and a cryogenic water trap), (ii) two CO2 traps cooled with liquid nitrogen [Werner et al., 2001] and (iii) a GC column packed with Porapak Q that can be cooled with liquid nitrogen to -30°C during purification and heated up to 230°C between two extraction runs. After CO2 extraction and purification, the CO2 is automatically transferred to the mass spectrometer. Mass spectrometric analysis of the 13C18O16O abundance is carried out in dual inlet mode on a MAT 253 mass spectrometer. Each analysis generally consists of 80 change-over-cycles. Three additional Faraday cups were added to the mass spectrometer for simultaneous analysis of the mass-to-charge ratios 44, 45, 46, 47, 48 and 49. The reproducibility for δ13C, δ18O and Δ47 for repeated CO2 extractions from air is 0.11‰ (SD), 0.18‰ (SD) and 0.02‰ (SD), respectively. This automated CO2 extraction and purification system will be used to analyse the clumped isotopic signature in atmospheric CO2 (tall tower, Cabauw, Netherlands) and to study the clumped isotopic fractionation during photosynthesis (leaf chamber experiments) and soil respiration. References Affek, H. P., Xu, X. & Eiler, J. M., Geochim. Cosmochim. Acta 71, 5033

  11. Strategies for Medical Data Extraction and Presentation Part 3: Automated Context- and User-Specific Data Extraction.

    PubMed

    Reiner, Bruce

    2015-08-01

    In current medical practice, data extraction is limited by a number of factors including lack of information system integration, manual workflow, excessive workloads, and lack of standardized databases. The combined limitations result in clinically important data often being overlooked, which can adversely affect clinical outcomes through the introduction of medical error, diminished diagnostic confidence, excessive utilization of medical services, and delays in diagnosis and treatment planning. Current technology development is largely inflexible and static in nature, which adversely affects functionality and usage among the diverse and heterogeneous population of end users. In order to address existing limitations in medical data extraction, alternative technology development strategies need to be considered which incorporate the creation of end user profile groups (to account for occupational differences among end users), customization options (accounting for individual end user needs and preferences), and context specificity of data (taking into account both the task being performed and data subject matter). Creation of the proposed context- and user-specific data extraction and presentation templates offers a number of theoretical benefits including automation and improved workflow, completeness in data search, ability to track and verify data sources, creation of computerized decision support and learning tools, and establishment of data-driven best practice guidelines. PMID:25833768

  12. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in the ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with rapid cross-section generation, the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels, using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
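The CMT itself is an ArcGIS tool, but its core sampling step — placing station points along a cross-section and reading DEM elevations at each — can be sketched in a few lines of numpy. This is an illustrative sketch, not CMT code; the DEM, cell size, and endpoints are invented assumptions.

```python
import numpy as np

def cross_section_elevations(dem, cell, p0, p1, n_stations):
    """Sample DEM elevations at n_stations points along the segment p0 -> p1.

    dem: 2D elevation array (row/col grid, illustrative convention)
    cell: grid cell size in map units
    p0, p1: (x, y) cross-section endpoints in map units
    """
    xs = np.linspace(p0[0], p1[0], n_stations)
    ys = np.linspace(p0[1], p1[1], n_stations)
    cols = np.clip((xs / cell).astype(int), 0, dem.shape[1] - 1)
    rows = np.clip((ys / cell).astype(int), 0, dem.shape[0] - 1)
    return dem[rows, cols]

# Tiny synthetic DEM: a valley whose floor runs along column 5
dem = np.abs(np.arange(10) - 5)[None, :].repeat(10, axis=0).astype(float)
profile = cross_section_elevations(dem, cell=1.0, p0=(0, 4), p1=(9, 4), n_stations=10)
print(profile.argmin())  # → 5, the station over the valley floor
```

A hydraulic model would consume many such station/elevation profiles, one per cross-section along the centerline.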

  13. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  14. Automated extraction of fine features of kinetochore microtubules and plus-ends from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

    Kinetochore microtubules (KMTs) and the associated plus-ends have been areas of intense investigation in both cell biology and molecular medicine. Though electron tomography opens up new possibilities in understanding their function by imaging their high-resolution structures, the interpretation of the acquired data remains an obstacle because of the complex and cluttered cellular environment. As a result, practical segmentation of the electron tomography data has been dominated by manual operation, which is time consuming and subjective. In this paper, we propose a model-based automated approach to extracting KMTs and the associated plus-ends with a coarse-to-fine scale scheme consisting of volume preprocessing, microtubule segmentation and plus-end tracing. In volume preprocessing, we first apply an anisotropic invariant wavelet transform and a tube-enhancing filter to enhance the microtubules at coarse level for localization. This is followed with a surface-enhancing filter to accentuate the fine microtubule boundary features. The microtubule body is then segmented using a modified active shape model method. Starting from the segmented microtubule body, the plus-ends are extracted with a probabilistic tracing method improved with rectangular window based feature detection and the integration of multiple cues. Experimental results demonstrate that our automated method produces results comparable to manual segmentation but using only a fraction of the manual segmentation time. PMID:16830922
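The tube-enhancing step above is in the spirit of Hessian-based "vesselness" filters (e.g. Frangi): on a bright tubular structure, one Hessian eigenvalue is strongly negative across the tube while the other is near zero along it. A minimal 2D sketch of that idea — not the authors' implementation, and with illustrative sigma and test image — is:

```python
import numpy as np
from scipy import ndimage

def tube_response(img, sigma=2.0):
    """Toy 2D tube-enhancing measure from Hessian eigenvalues."""
    # Second Gaussian derivatives: axis 0 = rows (y), axis 1 = cols (x)
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian at every pixel
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    lam1 = (Hxx + Hyy) / 2.0 + tmp   # larger eigenvalue
    lam2 = (Hxx + Hyy) / 2.0 - tmp   # smaller (most negative across a bright tube)
    # Respond where one eigenvalue dominates and is negative (ridge-like)
    return np.clip(-lam2, 0, None) * (np.abs(lam2) > np.abs(lam1))

# Synthetic image with one bright horizontal "microtubule"
img = np.zeros((40, 40))
img[20, 5:35] = 1.0
resp = tube_response(ndimage.gaussian_filter(img, 1.0))
print(resp[20, 20] > resp[10, 20])  # strongest response on the tube axis
```

Real pipelines (including the one in this paper) add multi-scale analysis and careful normalization on top of this basic eigenvalue test.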

  15. Automated Detection and Extraction of Coronal Dimmings from SDO/AIA Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair R.; Attrill, G. D. R.; Wills-Davey, M. J.

    2010-05-01

    The sheer volume of data anticipated from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) highlights the necessity of developing automatic detection methods for various types of solar activity. Coronal dimmings, initially recognised in the 1970s, are now well established to be closely associated with coronal mass ejections (CMEs), and are particularly recognised as an indicator of front-side (halo) CMEs, which can be difficult to detect in white-light coronagraph data. An automated coronal dimming region detection and extraction algorithm removes visual observer bias from the determination of physical quantities such as spatial location, area and volume. This allows reproducible, quantifiable results to be mined from very large datasets. The information derived may facilitate more reliable early space weather detection, as well as offering the potential for conducting large-sample studies focused on determining the geoeffectiveness of CMEs, coupled with analysis of their associated coronal dimmings. We present examples of dimming events extracted using our algorithm from existing EUV data, demonstrating the potential for the anticipated application to SDO/AIA data. Metadata returned by our algorithm include: location, area, volume, mass and dynamics of coronal dimmings. As well as running on historic datasets, this algorithm is capable of detecting and extracting coronal dimmings in near real-time. The coronal dimming detection and extraction algorithm described in this poster is part of the SDO/Computer Vision Center effort hosted at SAO (Martens et al., 2009). We acknowledge NASA grant NNH07AB97C.

  16. A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters

    PubMed Central

    Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high

  17. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    PubMed

    Rapoport, Daniel H; Becker, Tim; Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high

  18. Automated Large Scale Parameter Extraction of Road-Side Trees Sampled by a Laser Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Lindenbergh, R. C.; Berthold, D.; Sirmacek, B.; Herrero-Huerta, M.; Wang, J.; Ebersbach, D.

    2015-08-01

    In urbanized Western Europe trees are considered an important component of the built-up environment. This also means that there is an increasing demand for tree inventories. Laser mobile mapping systems provide an efficient and accurate way to sample the 3D road surroundings, including notable roadside trees. Indeed, at, say, 50 km/h such systems collect point clouds consisting of half a million points per 100 m. Methods exist that extract tree parameters from relatively small patches of such data, but a remaining challenge is to operationally extract roadside tree parameters at the regional level. For this purpose a workflow is presented as follows: the input point clouds are consecutively downsampled, retiled, classified, segmented into individual trees and upsampled to enable automated extraction of tree location, tree height, canopy diameter and trunk diameter at breast height (DBH). The workflow is implemented to work on a laser mobile mapping data set sampling 100 km of road in Sachsen, Germany and is tested on a 7 km long stretch of road. Along this road, the method detected 315 trees that were considered well detected, plus 56 clusters of tree points where no individual trees could be identified. Using voxels, the data volume could be reduced by about 97 % in a default scenario. Processing the results of this scenario took ~2500 seconds, corresponding to about 10 km/h, which is getting close to, but is still below, the acquisition rate, which is estimated at 50 km/h.
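The voxel-based data reduction mentioned above rests on keeping one representative point per occupied voxel. A minimal numpy sketch of that downsampling step (synthetic point cloud, illustrative voxel size — not the authors' workflow code) is:

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one representative (centroid) point per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    # One integer voxel id per point; inv maps each point to its voxel
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.reshape(-1)  # guard against numpy-version shape differences
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inv, points)   # sum the points in each voxel
    return centroids / counts[:, None]  # centroid = sum / count

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(100_000, 3))  # dense synthetic point cloud
small = voxel_downsample(pts, voxel=1.0)
print(len(small), "voxels from", len(pts), "points")  # ~99 % fewer points here
```

The achievable reduction depends on voxel size relative to point density, which is why the paper reports ~97 % only for its default scenario.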

  19. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales, have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) has emerged as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object

  20. Automated extraction and classification of time-frequency contours in humpback vocalizations.

    PubMed

    Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O

    2013-01-01

    A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and the evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed a low probability of false alarm (0%-4%) in noisy environments with small boats and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Channel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types. PMID:23297903
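Quantifying unit-to-unit similarity by cross-correlating contour lines can be sketched as follows. This is a hedged illustration, not the authors' code: the contours are synthetic frequency-vs-time traces, and the similarity measure is the peak of a normalized cross-correlation.

```python
import numpy as np

def contour_similarity(c1, c2):
    """Peak of the normalized cross-correlation between two
    1-D time-frequency contours (frequency-vs-time traces)."""
    a = (c1 - c1.mean()) / (c1.std() * len(c1))
    b = (c2 - c2.mean()) / c2.std()
    return np.correlate(a, b, mode="full").max()

t = np.linspace(0, 1, 200)
up = 400 + 300 * t        # upsweep contour, Hz
up_shift = 450 + 300 * t  # same shape, offset in frequency
down = 700 - 300 * t      # downsweep

print(round(contour_similarity(up, up_shift), 2))  # → 1.0 (same shape)
print(contour_similarity(up, down) < 0.9)          # dissimilar shapes score low
```

Mean-centering makes the measure insensitive to an overall frequency offset, so units of the same type recorded at slightly different pitches still match — one plausible reason to correlate contour lines rather than raw spectrograms.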

  1. Extraction of words from the national ID cards for automated recognition

    NASA Astrophysics Data System (ADS)

    Akhter, Md. Rezwan; Bhuiyan, Md. Hasanuzzaman; Uddin, Mohammad Shorif

    2011-10-01

    The government of Bangladesh introduced national ID cards in 2008 for all people aged 18 years and above. This card is now a de-facto identity document and finds diverse applications in vote casting, bank account opening, and telephone subscription, as well as in many real-life transactions and security checks. To fully exploit this versatile ID card, automated retrieval and recognition of an individual from this extremely large national database is a necessity. This work is a first step toward performing that recognition in an automated fashion. Here we have investigated an image analysis technique to extract the words that will be used in subsequent recognition steps. First, the scanned ID card image is input into the computer system and the target text region is separated from the picture region. The text region is then used for separation of lines and words on the basis of the vertical and horizontal projections of image intensity, respectively. Experimentation using real national ID cards confirms the effectiveness of our technique.
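Separating text lines via projection of image intensity, as described above, can be sketched like this (a synthetic binary image stands in for the scanned card; word separation works the same way on the orthogonal projection within each line):

```python
import numpy as np

def segment_lines(binary):
    """Find text-line row ranges from the projection profile of a
    binary image (1 = ink). Returns (start_row, end_row) pairs."""
    profile = binary.sum(axis=1)  # ink pixels per row
    ink_rows = profile > 0
    lines, start = [], None
    for r, on in enumerate(ink_rows):
        if on and start is None:
            start = r                      # line begins
        elif not on and start is not None:
            lines.append((start, r - 1))   # line ends at previous row
            start = None
    if start is not None:
        lines.append((start, len(ink_rows) - 1))
    return lines

# Synthetic "document": two bands of ink standing in for text lines
img = np.zeros((20, 50), dtype=int)
img[3:6, 5:45] = 1    # line 1
img[12:15, 5:40] = 1  # line 2
print(segment_lines(img))  # → [(3, 5), (12, 14)]
```

Real scans need a threshold above zero in the profile to tolerate noise, but the transition-finding logic is the same.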

  2. Reference line extraction for automated data-entry system using wavelet transform

    NASA Astrophysics Data System (ADS)

    Chitwong, Sakreya; Phonsri, Seksan; Thitimajshima, Punya

    1999-12-01

    Most document forms use straight lines as reference positions for filled-in information. Automated data-entry systems for such documents therefore require the ability to locate these reference lines so that the position of information in the forms can be known. This paper proposes a wavelet-based algorithm for extracting these reference lines in business forms. A stationary wavelet transform is used to transform a gray-level document image into different frequency-band images. The horizontal detail subband is then selected and passed through post-processing to produce a binary bitmap of reference lines. Experimental results on synthetic and real document images are given to illustrate the usefulness of the algorithm.
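The paper keeps the horizontal-detail subband of a stationary wavelet transform; a single undecimated Haar level already shows why that subband highlights horizontal reference lines. This stand-in sketch is not the authors' pipeline (they use a full SWT and dedicated post-processing), and the form image and threshold are invented:

```python
import numpy as np

def haar_horizontal_detail(img):
    """One undecimated Haar level: low-pass along columns (x),
    high-pass along rows (y) -> horizontal-detail subband that
    responds strongly to horizontal lines."""
    low_x = (img + np.roll(img, -1, axis=1)) / 2.0   # smooth along x
    cH = (low_x - np.roll(low_x, -1, axis=0)) / 2.0  # difference along y
    return cH

form = np.ones((32, 64))
form[16, 8:56] = 0.0           # a dark reference line on a white form
cH = np.abs(haar_horizontal_detail(form))
mask = cH > 0.25               # binary bitmap of candidate line pixels
rows = np.unique(np.nonzero(mask)[0])
print(rows)                    # the two rows bracketing the line
```

Vertical strokes of characters produce little energy in this subband, which is what lets a simple threshold isolate the form's ruling lines.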

  3. Automated Extraction of Dose/Volume Statistics for Radiotherapy-Treatment-Plan Evaluation in Clinical-Trial Quality Assurance

    PubMed Central

    Gong, Yutao U. T.; Yu, Jialu; Pang, Dalong; Zhen, Heming; Galvin, James; Xiao, Ying

    2016-01-01

    Radiotherapy clinical-trial quality assurance is a crucial yet challenging process. A major objective of this study was to develop an automated solution for clinical-trial radiotherapy dosimetry review. This note presents a tool that automatically extracts dose/volume statistics for dosimetry compliance review with improved efficiency and accuracy. PMID:26973814

  4. FBI DRUGFIRE program: the development and deployment of an automated firearms identification system to support serial, gang, and drug-related shooting investigations

    NASA Astrophysics Data System (ADS)

    Sibert, Robert W.

    1994-03-01

    The FBI DRUGFIRE Program entails the continuing phased development and deployment of a scalable automated firearms identification system. The first phase of this system, a networked, database-driven firearms evidence imaging system, has been operational for approximately one year and has demonstrated its effectiveness in facilitating the sharing and linking of firearms evidence collected in serial, gang, and drug-related shooting investigations. However, there is a pressing need for the development of enhancements that will more fully automate the system so that it is capable of processing very large volumes of firearms evidence. These enhancements would provide automated image analysis and pattern matching functionality. Existing 'spin-off' technologies need to be integrated into the present DRUGFIRE system to automate the 3-D mensuration, registration, feature extraction, and matching of the microtopographical surface features imprinted on the primers of fired casings during firing.

  5. Support Vector Machine with Ensemble Tree Kernel for Relation Extraction.

    PubMed

    Liu, Xiaoyong; Fu, Hui; Du, Zhiguo

    2016-01-01

    Relation extraction is one of the important research topics in the field of information extraction. To address the problem of semantic variation in traditional semisupervised relation extraction algorithms, this paper proposes a novel semisupervised relation extraction algorithm based on ensemble learning (LXRE). The new algorithm integrates two kinds of tree-kernel-based support vector machine classifiers and adopts a constrained seed-set extension strategy. In this way it mitigates the inaccuracy in relation extraction caused by the phenomenon of semantic variation. Numerical experiments on two benchmark data sets (PropBank and AIMed) show that the proposed LXRE algorithm is superior to two other common relation extraction methods on four evaluation indexes (Precision, Recall, F-measure, and Accuracy), indicating that the new algorithm has good relation extraction ability compared with others. PMID:27118966

  6. Support Vector Machine with Ensemble Tree Kernel for Relation Extraction

    PubMed Central

    Fu, Hui; Du, Zhiguo

    2016-01-01

    Relation extraction is one of the important research topics in the field of information extraction. To address the problem of semantic variation in traditional semisupervised relation extraction algorithms, this paper proposes a novel semisupervised relation extraction algorithm based on ensemble learning (LXRE). The new algorithm integrates two kinds of tree-kernel-based support vector machine classifiers and adopts a constrained seed-set extension strategy. In this way it mitigates the inaccuracy in relation extraction caused by the phenomenon of semantic variation. Numerical experiments on two benchmark data sets (PropBank and AIMed) show that the proposed LXRE algorithm is superior to two other common relation extraction methods on four evaluation indexes (Precision, Recall, F-measure, and Accuracy), indicating that the new algorithm has good relation extraction ability compared with others. PMID:27118966
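The abstract does not spell out LXRE's internals, but the ensemble-kernel idea it relies on — combining two kernel "views" into one classifier kernel — can be illustrated generically. In this hedged sketch, two RBF widths stand in for the paper's two tree kernels; the point is that a convex combination of Gram matrices is itself a valid (symmetric, positive semidefinite) kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))  # toy feature vectors standing in for
                              # parsed sentence representations

def rbf_gram(X, gamma):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two kernel "views" combined with equal weights into one ensemble kernel
K = 0.5 * rbf_gram(X, 0.1) + 0.5 * rbf_gram(X, 1.0)

print(np.allclose(K, K.T))                  # symmetric
print(np.linalg.eigvalsh(K).min() > -1e-9)  # positive semidefinite
```

Any kernel-based SVM accepting a precomputed Gram matrix can then be trained on K; the combination weights are a design choice of the ensemble.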

  7. Rapid and automated sample preparation for nucleic acid extraction on a microfluidic CD (compact disk)

    NASA Astrophysics Data System (ADS)

    Kim, Jitae; Kido, Horacio; Zoval, Jim V.; Gagné, Dominic; Peytavi, Régis; Picard, François J.; Bastien, Martine; Boissinot, Maurice; Bergeron, Michel G.; Madou, Marc J.

    2006-01-01

    Rapid and automated preparation of PCR (polymerase chain reaction)-ready genomic DNA was demonstrated on a multiplexed CD (compact disk) platform by using hard-to-lyse bacterial spores. Cell disruption is carried out while bead-cell suspensions are pushed back and forth in center-tapered lysing chambers by angular oscillation of the disk (keystone effect). During this lysis period, the cell suspensions are securely held within the lysing chambers by heat-activated wax valves. Upon application of remote heat to the disk in motion, the wax valves release lysate solutions into centrifuge chambers where cell debris is separated by an elevated rotation of the disk. Only debris-free DNA extract is then transferred to collection chambers by capillary-assisted siphon and collected for heating that inactivates PCR inhibitors. Lysing capacity was evaluated using a real-time PCR assay to monitor the efficiency of Bacillus globigii spore lysis. PCR analysis showed that a 5-minute CD lysis run gave spore lysis efficiency similar to that obtained with a popular commercial DNA extraction kit (i.e., the IDI-lysis kit from GeneOhm Sciences Inc.), which is highly efficient for microbial cell and spore lysis. This work will contribute to the development of an integrated CD-based assay for rapid diagnosis of infectious diseases.

  8. Automated data extraction from in situ protein-stable isotope probing studies.

    PubMed

    Slysz, Gordon W; Steinke, Laurey; Ward, David M; Klatt, Christian G; Clauss, Therese R W; Purvine, Samuel O; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D; Lipton, Mary S

    2014-03-01

    Protein-stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism(s), a key application will be in situ studies of microbial communities for short periods of time under natural conditions that result in small degrees of partial labeling. One hurdle restricting large-scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large-scale extraction and visualization of data from short-term (3 h) protein-SIP experiments performed in situ on phototrophic bacterial mats isolated from Yellowstone National Park. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification. PMID:24467184

  9. Automated data extraction from in situ protein stable isotope probing studies

    SciTech Connect

    Slysz, Gordon W.; Steinke, Laurey A.; Ward, David M.; Klatt, Christian G.; Clauss, Therese RW; Purvine, Samuel O.; Payne, Samuel H.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2014-01-27

    Protein stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism, a key application will be in situ studies of microbial communities under conditions that result in small degrees of partial labeling. One hurdle restricting large scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large scale extraction and visualization of data from short term (3 h) protein-SIP experiments performed in situ on Yellowstone phototrophic bacterial mats. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification.

  10. AUTOMATION.

    ERIC Educational Resources Information Center

    Manpower Research Council, Milwaukee, WI.

    THE MANPOWER RESEARCH COUNCIL, A NONPROFIT SERVICE ORGANIZATION, HAS AS ITS OBJECTIVE THE DEVELOPMENT OF AN INTERCHANGE AMONG THE MANUFACTURING AND SERVICE INDUSTRIES OF THE UNITED STATES OF INFORMATION ON EMPLOYMENT, INDUSTRIAL RELATIONS TRENDS AND ACTIVITIES, AND MANAGEMENT PROBLEMS. A SURVEY OF 200 MEMBER CORPORATIONS, EMPLOYING A TOTAL OF…

  11. Streamlining DNA Barcoding Protocols: Automated DNA Extraction and a New cox1 Primer in Arachnid Systematics

    PubMed Central

    Vidergar, Nina; Toplak, Nataša; Kuntner, Matjaž

    2014-01-01

    Background DNA barcoding is a popular tool in taxonomic and phylogenetic studies, but for most animal lineages protocols for obtaining the barcoding sequences—mitochondrial cytochrome C oxidase subunit I (cox1 AKA CO1)—are not standardized. Our aim was to explore an optimal strategy for arachnids, focusing on the most species-rich lineage, spiders, by (1) improving an automated DNA extraction protocol, (2) testing the performance of commonly used primer combinations, and (3) developing a new cox1 primer suitable for more efficient alignment and phylogenetic analyses. Methodology We used exemplars of 15 species from all major spider clades, processed a range of spider tissues of varying size and quality, optimized genomic DNA extraction using the MagMAX Express magnetic particle processor—an automated high throughput DNA extraction system—and tested cox1 amplification protocols emphasizing the standard barcoding region using ten routinely employed primer pairs. Results The best results were obtained with the commonly used Folmer primers (LCO1490/HCO2198) that capture the standard barcode region, and with the C1-J-2183/C1-N-2776 primer pair that amplifies its extension. However, C1-J-2183 is designed too close to HCO2198 for well-interpreted, continuous sequence data, and in practice the resulting sequences from the two primer pairs rarely overlap. We therefore designed a new forward primer C1-J-2123 60 base pairs upstream of the C1-J-2183 binding site. The success rate of this new primer (93%) matched that of C1-J-2183. Conclusions The use of C1-J-2123 allows full, indel-free overlap of sequences obtained with the standard Folmer primers and with the C1-J-2123 primer pair. Our preliminary tests suggest that in addition to spiders, C1-J-2123 will also perform in other arachnids and several other invertebrates. We provide optimal PCR protocols for these primer sets, and recommend using them for systematic efforts beyond DNA barcoding. PMID:25415202

  12. Californian demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2013-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning. To date, field objects have not been extracted from satellite data over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. We present a fully automated computational methodology to extract agricultural fields from 30m Web Enabled Landsat data (WELD) time series and results for approximately 250,000 square kilometers (eleven 150 x 150 km WELD tiles) encompassing all the major agricultural areas of California. The extracted fields, including rectangular, circular, and irregularly shaped fields, are evaluated by comparison with manually interpreted Landsat field objects. Validation results are presented in terms of standard confusion matrix accuracy measures and also the degree of field object over-segmentation, under-segmentation, fragmentation and shape distortion. The apparent success of the presented field extraction methodology is due to several factors. First, the use of multi-temporal Landsat data, as opposed to single Landsat acquisitions, which enables crop rotations and inter-annual variability in the state of the vegetation to be accommodated and provides more opportunities for cloud-free, non-missing and atmospherically uncontaminated surface observations. Second, the adoption of an object based approach, namely the variational region-based geometric active contour method that enables robust segmentation with only a small number of parameters and that requires no training data collection. Third, the use of a watershed algorithm to decompose connected segments belonging to multiple fields into coherent isolated field segments and a geometry based algorithm to detect and associate parts of
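The last step sketched above turns a classified field mask into separate field segments. Loosely illustrating that decomposition (the paper uses a watershed plus a geometry-based algorithm; plain connected-component labeling shown here only handles fields that are already spatially isolated, and the mask is synthetic):

```python
import numpy as np
from scipy import ndimage

# Toy "field mask": 1 where a pixel was segmented as cropland
mask = np.zeros((12, 12), dtype=int)
mask[1:5, 1:5] = 1     # field A
mask[1:5, 7:11] = 1    # field B (spatially separate)
mask[7:11, 2:10] = 1   # field C

labels, n = ndimage.label(mask)  # group connected cropland pixels
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
print(n, sizes.tolist())         # → 3 [16.0, 16.0, 32.0]
```

Where two fields touch, connected components merge them into one segment; that is exactly the failure mode the paper's watershed step is there to resolve.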

  13. PKDE4J: Entity and relation extraction for public knowledge discovery.

    PubMed

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. PMID:26277115
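The pairing of dictionary-based entity extraction with rule-based relation extraction can be illustrated with a toy sketch. Everything here is invented for illustration — the dictionaries, trigger words, and relation label are placeholders; PKDE4J's actual dictionaries and rules are far richer and are built on the Stanford CoreNLP pipeline:

```python
import re

# Toy dictionaries (illustrative stand-ins for curated term lists)
GENES = {"BRCA1", "TP53"}
DISEASES = {"breast cancer", "lung cancer"}

def extract_entities(sentence):
    """Dictionary lookup: return (term, type) pairs found in the sentence."""
    found = []
    for term, etype in [(g, "GENE") for g in GENES] + \
                       [(d, "DISEASE") for d in DISEASES]:
        if re.search(r"\b" + re.escape(term) + r"\b", sentence, re.I):
            found.append((term, etype))
    return found

def extract_relations(sentence, trigger_words=("associated", "linked")):
    """Toy rule: a GENE and a DISEASE co-occurring with a trigger word."""
    ents = extract_entities(sentence)
    genes = [e for e, t in ents if t == "GENE"]
    diseases = [e for e, t in ents if t == "DISEASE"]
    if any(w in sentence.lower() for w in trigger_words):
        return [(g, "ASSOCIATED_WITH", d) for g in genes for d in diseases]
    return []

s = "BRCA1 mutations are strongly associated with breast cancer."
print(extract_relations(s))  # → [('BRCA1', 'ASSOCIATED_WITH', 'breast cancer')]
```

Real systems add tokenization, normalization of term variants, and syntactic constraints (e.g. dependency paths) on top of this bare co-occurrence rule.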

  14. Automated Outreach for Cardiovascular-Related Medication Refill Reminders.

    PubMed

    Harrison, Teresa N; Green, Kelley R; Liu, In-Lu Amy; Vansomphone, Southida S; Handler, Joel; Scott, Ronald D; Cheetham, T Craig; Reynolds, Kristi

    2016-07-01

    The objective of this study was to evaluate the effectiveness of an automated telephone system reminding patients with hypertension and/or cardiovascular disease to obtain overdue medication refills. The authors compared the intervention with usual care among patients with an overdue prescription for a statin or lisinopril-hydrochlorothiazide (lisinopril-HCTZ). The primary outcome was refill rate at 2 weeks. Secondary outcomes included time to refill and change in low-density lipoprotein cholesterol and blood pressure. Significantly more patients who received a reminder call refilled their prescription compared with the usual-care group (statin cohort: 30.3% vs 24.9% [P<.0001]; lisinopril-HCTZ cohort: 30.7% vs 24.2% [P<.0001]). The median time to refill was shorter in patients receiving the reminder call (statin cohort: 29 vs 36 days [P<.0001]; lisinopril-HCTZ cohort: 24 vs 31 days [P<.0001]). There were no statistically significant differences in mean low-density lipoprotein cholesterol and blood pressure. These findings suggest the need for interventions that have a longer-term impact. PMID:26542896

  15. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image

    NASA Astrophysics Data System (ADS)

    Al Fugura, A.'kif; Billa, Lawal; Pradhan, Biswajeet

    2011-12-01

    Coastline identification is important for surveying and mapping purposes. The coastline serves as the basic point of reference and is used on nautical charts for navigation. Its delineation has become more crucial in the wake of the many recent earthquakes and tsunamis that have completely changed and redrawn some shorelines. In a tropical country like Malaysia, the presence of cloud cover hinders the application of optical remote sensing data. In this study, a semi-automated technique and procedures are presented for shoreline delineation from RADARSAT-1 imagery. A scene of RADARSAT-1 satellite imagery was processed using an enhanced filtering technique to identify and extract the shoreline coast of Kuala Terengganu, Malaysia. RADARSAT imagery has many advantages over optical data because of its ability to penetrate cloud cover and its night sensing capabilities. First, speckle was removed from the image using a Lee sigma filter, which reduces random noise, enhances the image, and helps discriminate the boundary between land and water. The results showed an accurate and improved extraction and delineation of the entire coastline of Kuala Terengganu. The study demonstrated the reliability of the image averaging filter in reducing random noise over the sea surface, especially near the shoreline; it enhanced land-water boundary differentiation, enabling better delineation of the shoreline. Overall, the developed techniques showed the potential of radar imagery for accurate shoreline mapping and will be useful for monitoring shoreline changes during high and low tides as well as shoreline erosion in a tropical country like Malaysia.
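
    The despeckling step can be sketched as a toy Lee sigma filter: each pixel is replaced by the mean of only those window neighbours whose value lies within a sigma band around the centre value, smoothing the homogeneous sea surface while preserving the sharp land-water edge. The window size, band multiplier, and coefficient of variation below are illustrative assumptions, not the study's parameters:

```python
def lee_sigma_filter(img, window=3, sigma_mult=2.0, cv=0.25):
    """Toy Lee sigma despeckle: replace each pixel by the mean of the
    window neighbours whose value lies within +/- sigma_mult*cv*centre
    of the centre value. cv is an assumed speckle coefficient of
    variation; real implementations add a minimum-neighbour-count rule."""
    h, w = len(img), len(img[0])
    half = window // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            centre = img[y][x]
            band = sigma_mult * cv * centre
            vals = []
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        v = img[yy][xx]
                        if abs(v - centre) <= band:
                            vals.append(v)
            out[y][x] = sum(vals) / len(vals)
    return out

# Homogeneous "sea" with one bright speckle spike: flat areas stay flat,
# and in this toy version the isolated outlier is left untouched.
sea = [[10.0] * 5 for _ in range(5)]
sea[2][2] = 100.0
filtered = lee_sigma_filter(sea)
```

    Because out-of-band neighbours are simply excluded from the average, strong edges such as the land-water boundary are not blurred the way a plain mean filter would blur them.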

  16. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma

    PubMed Central

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Hansson, Göran I.

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted, with lipid profiles of commonly analyzed lipid classes almost identical to, and absolute recoveries similar to or better than, those obtained using the chloroform-based reference method. Lipid recoveries were linear from 10–100 µl plasma for all investigated lipids using the developed extraction protocol. The BUME protocol includes an initial one-phase extraction of plasma into 300 µl butanol:methanol (BUME) mixture (3:1) followed by two-phase extraction into 300 µl heptane:ethyl acetate (3:1) using 300 µl 1% acetic acid as buffer. The lipids investigated included the most abundant plasma lipid classes (e.g., cholesterol ester, free cholesterol, triacylglycerol, phosphatidylcholine, and sphingomyelin) as well as less abundant but biologically important lipid classes, including ceramide, diacylglycerol, and lyso-phospholipids. This novel method has been successfully implemented in our laboratory and is now used daily. We conclude that the fully automated, high-throughput BUME method can replace chloroform-based methods, saving both human and environmental resources. PMID:22645248

  17. A Multi-Atlas Based Method for Automated Anatomical Rat Brain MRI Segmentation and Extraction of PET Activity

    PubMed Central

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Introduction Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time-consuming and operator-dependent. Automated segmentation, as usually performed via single-atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. Methods High-resolution 7T 2DT2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7/12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: (1) registering individual MRIs to the template and using a single atlas (SA); (2) using the maximum probability atlas (MP); and (3) registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Results Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Conclusions Multi-atlas methods outperform SA for automated anatomical brain segmentation and extraction of PET measures. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid reproducible VOI analyses. PMID:25330005
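
    The fusion step of the PF method can be sketched as a per-voxel majority vote over the propagated atlas label volumes. The registration and propagation stages are omitted here, and the three flattened label volumes are invented toy data:

```python
from collections import Counter

def fuse_labels(label_volumes):
    """Voxel-wise majority vote across label volumes propagated into the
    target space: a simplified stand-in for the fusion stage of a
    multi-atlas (propagation & fusion) pipeline."""
    fused = []
    for voxels in zip(*label_volumes):  # iterate voxel-wise
        fused.append(Counter(voxels).most_common(1)[0][0])
    return fused

# Three hypothetical propagated label volumes (flattened, 6 voxels each);
# labels are VOI indices, 0 = background.
atlas_a = [0, 1, 1, 2, 2, 0]
atlas_b = [0, 1, 2, 2, 2, 0]
atlas_c = [0, 1, 1, 2, 0, 0]
print(fuse_labels([atlas_a, atlas_b, atlas_c]))
# -> [0, 1, 1, 2, 2, 0]
```

    Voting across atlases is what averages out the per-atlas registration errors that make single-atlas segmentation biased.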

  18. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    PubMed Central

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  19. Automated Agricultural Field Extraction from Multi-temporal Web Enabled Landsat Data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2012-12-01

    Agriculture has caused significant anthropogenic surface change. In many regions agricultural field sizes may be increasing to maximize yields and reduce costs, resulting in decreased landscape spatial complexity and increased homogenization of land uses with potential for significant biogeochemical and ecological effects. To date, studies of the incidence, drivers and impacts of changing field sizes have not been undertaken over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. The Landsat series of satellites provides near-global coverage, long term, and appropriate spatial resolution (30m) satellite data to document changing field sizes. The recent free availability of all the Landsat data in the U.S. Landsat archive now provides the opportunity to study field size changes in a global and consistent way. Commercial software can be used to extract fields from Landsat data but is inappropriate for large area application because it requires considerable human interaction. This paper presents research to develop and validate an automated computational Geographic Object Based Image Analysis methodology to extract agricultural fields and derive field sizes from Web Enabled Landsat Data (WELD) (http://weld.cr.usgs.gov/). WELD weekly products (30m reflectance and brightness temperature) are classified into Satellite Image Automatic Mapper™ (SIAM™) spectral categories and an edge intensity map and a map of the probability of each pixel being agricultural are derived from five years of 52 weeks of WELD and corresponding SIAM™ data. These data are fused to derive candidate agriculture field segments using a variational region-based geometric active contour model. Geometry-based algorithms are used to decompose connected segments belonging to multiple fields into coherent isolated field objects with a divide and conquer strategy to detect and merge partial circle

  20. Using mobile laser scanning data for automated extraction of road markings

    NASA Astrophysics Data System (ADS)

    Guan, Haiyan; Li, Jonathan; Yu, Yongtao; Wang, Cheng; Chapman, Michael; Yang, Bisheng

    2014-01-01

    A mobile laser scanning (MLS) system allows direct collection of accurate 3D point information in unprecedented detail at highway speeds and at less than traditional survey costs, which serves the fast-growing demands of transportation-related road surveying including road surface geometry and road environment. As one type of road feature in traffic management systems, road markings on paved roadways have important functions in providing guidance and information to drivers and pedestrians. This paper presents a stepwise procedure to recognize road markings from MLS point clouds. To improve computational efficiency, we first propose a curb-based method for road surface extraction. This method first partitions the raw MLS data into a set of profiles according to vehicle trajectory data, and then extracts small height jumps caused by curbs in the profiles via slope and elevation-difference thresholds. Next, points belonging to the extracted road surface are interpolated into a geo-referenced intensity image using an extended inverse-distance-weighted (IDW) approach. Finally, we dynamically segment the geo-referenced intensity image into road-marking candidates with multiple thresholds that correspond to different ranges determined by point-density appropriate normality. A morphological closing operation with a linear structuring element is then used to refine the road-marking candidates by removing noise and improving completeness. This road-marking extraction algorithm is comprehensively discussed in the analysis of parameter sensitivity and overall performance. An experimental study performed on a set of road markings with ground truth shows that the proposed algorithm provides a promising solution to road-marking extraction from MLS data.
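
    The interpolation of road-surface points into a geo-referenced intensity image rests on inverse-distance weighting. A plain IDW kernel can be sketched as follows; the fixed search radius and power are illustrative assumptions, and the paper's extended IDW variant is not reproduced here:

```python
def idw_intensity(points, qx, qy, power=2.0, radius=2.0):
    """Plain inverse-distance-weighted interpolation of point intensities
    at grid cell (qx, qy); 'points' are (x, y, intensity) triples."""
    num = den = 0.0
    for x, y, inten in points:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 > radius * radius:
            continue  # outside the search radius
        if d2 == 0.0:
            return inten  # query cell coincides with a point
        weight = 1.0 / d2 ** (power / 2.0)
        num += weight * inten
        den += weight
    return num / den if den else None  # None: no points in range

points = [(0.0, 0.0, 100.0), (2.0, 0.0, 50.0)]  # hypothetical MLS returns
print(idw_intensity(points, 1.0, 0.0))  # equidistant from both -> 75.0
```

    Rasterizing intensities this way is what turns the irregular point cloud into an image on which the later threshold segmentation and morphological closing can operate.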

  1. Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline

    ERIC Educational Resources Information Center

    Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.

    2012-01-01

    The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…

  2. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry.

    PubMed

    Rosas-Castor, J M; Portugal, L; Ferrer, L; Guzmán-Mar, J L; Hernández-Ramírez, A; Cerdà, V; Hinojosa-Reyes, L

    2015-05-18

    A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content from agricultural soil based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L(-1) for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013-0.800, 0.011-0.900 and 0.079-1.400 mg L(-1) for F1, F2, and F3, respectively. The precision of the automated MSFIA-HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L(-1) As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural soil samples from an arsenic-contaminated mining zone to evaluate its extractability. The frequency of analysis of the proposed method was eight times higher than that of the conventional BCR method (6 vs 48 h), and the kinetics of lixiviation were established for each fraction. PMID:25910440

  3. Detecting and extracting clusters in atom probe data: a simple, automated method using Voronoi cells.

    PubMed

    Felfer, P; Ceguerra, A V; Ringer, S P; Cairney, J M

    2015-03-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation, to test for spatial/chemical randomness of the solid solution as well as to extract the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration-based methods such as iso-surfaces. PMID:25497494
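
    The link between Voronoi cell size and local concentration can be sketched with a discrete stand-in for the exact tessellation: assign the points of a fine grid to their nearest solute atom and sum the resulting cell areas. This is 2D for brevity and purely illustrative; the paper works with the exact 3D Voronoi/Delaunay construction:

```python
def voronoi_areas(atoms, box=(10.0, 10.0), step=0.1):
    """Discrete stand-in for Voronoi cell sizes: assign each point of a
    regular grid to its nearest solute atom and sum the cell areas."""
    nx, ny = round(box[0] / step), round(box[1] / step)
    counts = [0] * len(atoms)
    cell = step * step
    for i in range(nx):
        for j in range(ny):
            px, py = (i + 0.5) * step, (j + 0.5) * step
            nearest = min(
                range(len(atoms)),
                key=lambda a: (atoms[a][0] - px) ** 2 + (atoms[a][1] - py) ** 2)
            counts[nearest] += 1
    return [c * cell for c in counts]

# Two closely spaced atoms plus one isolated atom (positions invented):
# the clustered atoms get smaller cells, which is exactly the signal a
# volume threshold exploits when extracting clusters.
atoms = [(2.0, 5.0), (2.5, 5.0), (8.0, 5.0)]
areas = voronoi_areas(atoms)
```

    Small Voronoi cells mean high local solute concentration, so thresholding cell volume (or a derived concentration) separates cluster atoms from the random matrix.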

  4. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-01

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection in the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction is dependent on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and that of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available for point-of-care testing in clinics as well as general hospitals. PMID:26772121

  5. Comparative Evaluation of Commercially Available Manual and Automated Nucleic Acid Extraction Methods for Rotavirus RNA Detection in Stool

    PubMed Central

    Esona, Mathew D.; McDonald, Sharla; Kamili, Shifaq; Kerin, Tara; Gautam, Rashi; Bowen, Michael D.

    2015-01-01

    Rotaviruses are a major cause of viral gastroenteritis in children. For accurate and sensitive detection of rotavirus RNA from stool samples by reverse transcription-polymerase chain reaction (RT-PCR), the extraction process must be robust. However, some extraction methods may not remove the strong RT-PCR inhibitors known to be present in stool samples. The objective of this study was to evaluate and compare the performance of six extraction methods used commonly for extraction of rotavirus RNA from stool, which have never been formally evaluated: the MagNA Pure Compact, KingFisher Flex and NucliSENS® easyMAG® instruments, the NucliSENS® miniMAG® semi-automated system, and two manual purification kits, the QIAamp Viral RNA kit and a modified RNaid® kit. Using each method, total nucleic acid or RNA was extracted from eight rotavirus-positive stool samples with enzyme immunoassay optical density (EIA OD) values ranging from 0.176 to 3.098. Extracts prepared using the MagNA Pure Compact instrument yielded the most consistent results by qRT-PCR and conventional RT-PCR. When extracts prepared from a dilution series were extracted by the 6 methods and tested, rotavirus RNA was detected in all samples by qRT-PCR but by conventional RT-PCR testing, only the MagNA Pure Compact and KingFisher Flex extracts were positive in all cases. RT-PCR inhibitors were detected in extracts produced with the QIAamp Viral RNA Mini kit. The findings of this study should prove useful for selection of extraction methods to be incorporated into future rotavirus detection and genotyping protocols. PMID:24036075

  6. Automated Extraction of Buildings and Roads in a Graph Partitioning Framework

    NASA Astrophysics Data System (ADS)

    Ok, A. O.

    2013-10-01

    This paper presents an original unsupervised framework to identify regions belonging to buildings and roads from monocular very high resolution (VHR) satellite images. The proposed framework consists of three main stages. In the first stage, we extract information related only to building regions using shadow evidence and probabilistic fuzzy landscapes. Firstly, the shadow areas cast by building objects are detected and the directional spatial relationship between buildings and their shadows is modelled with the knowledge of illumination direction. Thereafter, each shadow region is handled separately and initial building regions are identified by iterative graph-cuts designed in a two-label partitioning. The second stage of the framework automatically classifies the image into four classes: building, shadow, vegetation, and others. In this step, the previously labelled building regions as well as the shadow and vegetation areas are involved in a four-label graph optimization performed in the entire image domain to achieve the unsupervised classification result. The final stage aims to extend this classification to five classes in which the class road is involved. For that purpose, we extract the regions that might belong to road segments and utilize that information in a final graph optimization. This final stage eventually characterizes the regions belonging to buildings and roads. Experiments performed on seven test images selected from GeoEye-1 VHR datasets show that the presented approach has the ability to extract the regions belonging to buildings and roads in a single graph theory framework.
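
    The two-label partitioning at the heart of the iterative graph-cuts step is a minimum s-t cut, computable via max-flow. The toy Edmonds-Karp sketch below works on an explicit capacity map; the tiny graph is invented, whereas a real graph-cut segmentation derives the capacities from per-pixel likelihoods (terminal links) and pairwise smoothness terms (neighbour links):

```python
from collections import deque

def min_cut_partition(edges, source, sink):
    """Two-label partition via max-flow / min-cut (Edmonds-Karp).
    edges: {(u, v): capacity}. Returns the set of nodes on the source
    side of the minimum cut (e.g. the 'building' label)."""
    # Residual capacities, with reverse edges initialised to 0.
    res, nodes = {}, set()
    for (u, v), c in edges.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        nodes.update((u, v))
    adj = {n: set() for n in nodes}
    for (u, v) in res:
        adj[u].add(v)
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break  # no augmenting path left: flow is maximal
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
    # Source side of the cut = nodes still reachable in the residual graph.
    side, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side and res[(u, v)] > 0:
                side.add(v)
                q.append(v)
    return side

edges = {("s", "a"): 3, ("s", "b"): 1, ("a", "b"): 1,
         ("a", "t"): 1, ("b", "t"): 3}
print(min_cut_partition(edges, "s", "t"))  # source side of the cut
```

    Production systems use specialised max-flow solvers (e.g. Boykov-Kolmogorov style algorithms) rather than Edmonds-Karp, but the cut they compute induces the same two-label partition.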

  7. Mixed-mode isolation of triazine metabolites from soil and aquifer sediments using automated solid-phase extraction

    USGS Publications Warehouse

    Mills, M.S.; Thurman, E.M.

    1992-01-01

    Reversed-phase isolation and ion-exchange purification were combined in the automated solid-phase extraction of two polar s-triazine metabolites, 2-amino-4-chloro-6-(isopropylamino)-s-triazine (deethylatrazine) and 2-amino-4-chloro-6-(ethylamino)-s-triazine (deisopropylatrazine), from clay-loam and silt-loam soils and sandy aquifer sediments. First, methanol/water (4/1, v/v) soil extracts were transferred to an automated workstation following evaporation of the methanol phase for the rapid reversed-phase isolation of the metabolites on an octadecyl resin (C18). The retention of the triazine metabolites on C18 decreased substantially when trace methanol concentrations (1%) remained. Furthermore, the retention on C18 increased with decreasing aqueous solubility and increasing alkyl-chain length of the metabolites and parent herbicides, indicating a reversed-phase interaction. The analytes were eluted with ethyl acetate, which left much of the soil organic-matter impurities on the resin. Second, the small-volume organic eluate was purified on an anion-exchange resin (0.5 mL/min) to extract the remaining soil pigments that could foul the ion source of the GC/MS system. Recoveries of the analytes were 75%, using deuterated atrazine as a surrogate, and were comparable to recoveries by soxhlet extraction. The detection limit was 0.1 ??g/kg with a coefficient of variation of 15%. The ease and efficiency of this automated method make it a viable, practical technique for studying triazine metabolites in the environment.

  8. A fully integrated and automated microsystem for rapid pharmacogenetic typing of multiple warfarin-related single-nucleotide polymorphisms.

    PubMed

    Zhuang, Bin; Han, Junping; Xiang, Guangxin; Gan, Wupeng; Wang, Shuaiqin; Wang, Dong; Wang, Lei; Sun, Jing; Li, Cai-Xia; Liu, Peng

    2016-01-01

    A fully integrated and automated microsystem consisting of low-cost, disposable plastic chips for DNA extraction and PCR amplification combined with a reusable glass capillary array electrophoresis chip in a modular-based format was successfully developed for warfarin pharmacogenetic testing. DNA extraction was performed by adopting a filter paper-based method, followed by "in situ" PCR that was carried out directly in the same reaction chamber of the chip without elution. PCR products were then co-injected with sizing standards into separation channels for detection using a novel injection electrode. The entire process was automatically conducted on a custom-made compact control and detection instrument. The limit of detection of the microsystem for the singleplex amplification of amelogenin was determined to be 0.625 ng of standard K562 DNA and 0.3 μL of human whole blood. A two-color multiplex allele-specific PCR assay for detecting the warfarin-related single-nucleotide polymorphisms (SNPs) 6853 (-1639G>A) and 6484 (1173C>T) in the VKORC1 gene and the *3 SNP (1075A>C) in the CYP2C9 gene was developed and used for validation studies. The fully automated genetic analysis was completed in two hours with a minimum requirement of 0.5 μL of input blood. Samples from patients with different genotypes were all accurately analyzed. In addition, both dried bloodstains and oral swabs were successfully processed by the microsystem with a simple modification to the DNA extraction and amplification chip. The successful development and operation of this microsystem establish the feasibility of rapid warfarin pharmacogenetic testing in routine clinical practice. PMID:26568290

  9. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
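
    The maximum entropy classifier at the core of the system reduces, in the binary case, to logistic regression over feature vectors. A minimal gradient-ascent sketch follows; the three features per chemical-disease mention pair are invented for illustration (the paper's actual features are richer linguistic and MeSH-hypernym features):

```python
import math

def train_maxent(X, y, lr=0.5, epochs=200):
    """Binary maximum-entropy model (logistic regression) trained by
    stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = yi - p  # gradient of the log-likelihood w.r.t. z
            b += lr * g
            w = [wj + lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if z > 0 else 0

# Invented features per chemical-disease mention pair:
# [scaled token distance, "-induced" pattern between mentions,
#  shared MeSH hypernym]
X = [[0.1, 1, 0], [0.2, 1, 1], [0.9, 0, 0], [0.8, 0, 1]]
y = [1, 1, 0, 0]  # 1 = CID relation holds for this mention pair
w, b = train_maxent(X, y)
print(predict(w, b, [0.15, 1, 0]))  # -> 1
```

    Classifying at mention level and then merging predictions to document level, as the abstract describes, just takes the OR over all mention-pair decisions for a given chemical-disease concept pair.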

  11. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data contained high density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method comprises the following steps: the ground points are determined first. As a second step, cylinders are fitted in a vertical slice 1-1.5 m above ground, which is used to determine the potential location of each single tree's trunk and of cylinder-like objects. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume of the individual tree. In case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the initial point for classifying trees into single species.
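
    Per horizontal cross-section, the cylinder fitting in the trunk slice reduces to fitting a circle to the slice points. A standard algebraic (Kasa) least-squares fit, solved here with hand-rolled normal equations and Gaussian elimination, can be sketched as follows; the sample points are synthetic, and the study's actual fitting and residual analysis are more involved:

```python
def fit_circle(points):
    """Algebraic (Kasa) circle fit: minimise the residuals of
    x^2 + y^2 + D*x + E*y + F = 0 over the slice points, a linear
    least-squares problem in (D, E, F)."""
    # Accumulate the normal equations A^T A p = A^T r for p = (D, E, F).
    ata = [[0.0] * 3 for _ in range(3)]
    atr = [0.0] * 3
    for x, y in points:
        row = [x, y, 1.0]
        rhs = -(x * x + y * y)
        for i in range(3):
            atr[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atr[i]] for i in range(3)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                factor = m[r][c] / m[c][c]
                m[r] = [a - factor * b for a, b in zip(m[r], m[c])]
    d, e, f = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = -d / 2.0, -e / 2.0
    radius = (cx * cx + cy * cy - f) ** 0.5
    return cx, cy, radius

# Four synthetic points on a circle centred at (1, 2) with radius 0.5.
pts = [(1.5, 2.0), (1.0, 2.5), (0.5, 2.0), (1.0, 1.5)]
cx, cy, r = fit_circle(pts)
```

    In the abstract's pipeline the deviation of each point from the fitted (vertically expanded) circle is exactly the residual used to tell trunks from other cylinder-like objects.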
MLS data used in this project had been measured in the framework of

  12. Automated solid-phase extraction coupled online with HPLC-FLD for the quantification of zearalenone in edible oil.

    PubMed

    Drzymala, Sarah S; Weiz, Stefan; Heinze, Julia; Marten, Silvia; Prinz, Carsten; Zimathies, Annett; Garbe, Leif-Alexander; Koch, Matthias

    2015-05-01

    Established maximum levels for the mycotoxin zearalenone (ZEN) in edible oil require monitoring by reliable analytical methods. Therefore, an automated SPE-HPLC online system based on dynamic covalent hydrazine chemistry has been developed. The SPE step comprises a reversible hydrazone formation by ZEN and a hydrazine moiety covalently attached to a solid phase. Seven hydrazine materials with different properties regarding the resin backbone, pore size, particle size, specific surface area, and loading have been evaluated. As a result, a hydrazine-functionalized silica gel was chosen. The final automated online method was validated and applied to the analysis of three maize germ oil samples including a provisionally certified reference material. Important performance criteria for the recovery (70-120 %) and precision (RSDr <25 %) as set by the Commission Regulation EC 401/2006 were fulfilled: The mean recovery was 78 % and RSDr did not exceed 8 %. The results of the SPE-HPLC online method were further compared to results obtained by liquid-liquid extraction with stable isotope dilution analysis LC-MS/MS and found to be in good agreement. The developed SPE-HPLC online system with fluorescence detection allows a reliable, accurate, and sensitive quantification (limit of quantification, 30 μg/kg) of ZEN in edible oils while significantly reducing the workload. To our knowledge, this is the first report on an automated SPE-HPLC method based on a covalent SPE approach. PMID:25709066

  13. Automated extraction of 11-nor-delta9-tetrahydrocannabinol carboxylic acid from urine samples using the ASPEC XL solid-phase extraction system.

    PubMed

    Langen, M C; de Bijl, G A; Egberts, A C

    2000-09-01

    The analysis of 11-nor-delta9-tetrahydrocannabinol-carboxylic acid (THCCOOH, the major metabolite of cannabis) in urine with gas chromatography and mass spectrometry (GC-MS) and solid-phase extraction (SPE) sample preparation is well documented. Automated SPE sample preparation of THCCOOH in urine, although potentially advantageous, is to our knowledge poorly investigated. The objective of the present study was to develop and validate an automated SPE sample-preparation step using ASPEC XL suited for GC-MS confirmation analysis of THCCOOH in urine drug control. The recoveries showed that it was not possible to transfer the protocol for the manual SPE procedure with the vacuum manifold to the ASPEC XL without loss of recovery. Making the sample more lipophilic by adding 1 mL of 2-propanol to the urine sample after hydrolysis, in order to overcome the problem of surface adsorption of THCCOOH, led to an extraction efficiency (77%) comparable to that reached with the vacuum manifold (84%). The reproducibility of the automated SPE procedure was better (coefficient of variation 5%) than that of the manual procedure (coefficient of variation 12%). The limit of detection was 1 ng/mL, and the limit of quantitation was 4 ng/mL. Precision at the 12.5-ng/mL level was as follows: mean, 12.4 ng/mL; coefficient of variation, 3.0%. Potential carryover was evaluated, but a carryover effect could not be detected. It was concluded that the proposed method is suited for GC-MS confirmation urinalysis of THCCOOH for prisons and detoxification centers. PMID:10999349

  14. INVESTIGATION OF ARSENIC SPECIATION ON DRINKING WATER TREATMENT MEDIA UTILIZING AUTOMATED SEQUENTIAL CONTINUOUS FLOW EXTRACTION WITH IC-ICP-MS DETECTION

    EPA Science Inventory

    Three treatment media, used for the removal of arsenic from drinking water, were sequentially extracted using 10mM MgCl2 (pH 8), 10mM NaH2PO4 (pH 7) followed by 10mM (NH4)2C2O4 (pH 3). The media were extracted using an on-line automated continuous extraction system which allowed...

  15. Direct Sampling and Analysis from Solid Phase Extraction Cards using an Automated Liquid Extraction Surface Analysis Nanoelectrospray Mass Spectrometry System

    SciTech Connect

    Walworth, Matthew J; ElNaggar, Mariam S; Stankovich, Joseph J; Witkowski II, Charles E.; Norris, Jeremy L; Van Berkel, Gary J

    2011-01-01

    Direct liquid extraction-based surface sampling, a technique previously demonstrated with continuous flow and autonomous pipette liquid microjunction surface sampling probes, has recently been implemented as the Liquid Extraction Surface Analysis (LESA) mode on the commercially available Advion NanoMate chip-based infusion nanoelectrospray ionization system. In the present paper, the LESA mode was applied to the analysis of 96-well format custom solid phase extraction (SPE) cards, with each well consisting of either a 1 or 2 mm diameter monolithic hydrophobic stationary phase. These substrate wells were conditioned, loaded with either single or multi-component aqueous mixtures, and read out using the LESA mode of a TriVersa NanoMate or a NanoMate 100 coupled to an ABI/Sciex 4000 QTRAP hybrid triple quadrupole/linear ion trap mass spectrometer and a Thermo LTQ XL linear ion trap mass spectrometer. Extraction conditions, including extraction/nanoESI solvent composition, volume, and dwell times, were optimized in the analysis of targeted compounds. Limits of detection and quantitation, as well as analysis reproducibility figures of merit, were measured. Calibration data were obtained for propranolol using a deuterated internal standard, demonstrating linearity and reproducibility. A 10x signal increase and cleanup of micromolar Angiotensin II from a concentrated salt solution were demonstrated. Additionally, a multicomponent herbicide mixture at ppb concentration levels was analyzed using MS3 spectra for compound identification in the presence of isobaric interferences.

  16. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization. PMID:21887336
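The relation-level retrieval idea above, preferring sentences that cover concept-relevant relations, can be sketched as a toy ranking function. The (subject, predicate, object) input format stands in for SemRep output, and the scoring is far simpler than the paper's retrieval method:

```python
def summarize(sentences, relations, concept, k=2):
    """Rank sentences by how many concept-relevant relation triples they
    mention. `relations` holds (subject, predicate, object) strings, a
    hypothetical stand-in for SemRep extractor output."""
    relevant = [r for r in relations if concept in (r[0], r[2])]

    def score(sentence):
        s = sentence.lower()
        return sum(1 for subj, _, obj in relevant
                   if subj.lower() in s and obj.lower() in s)

    # The highest-scoring sentences form the extractive summary.
    return sorted(sentences, key=score, reverse=True)[:k]
```

For a query concept like "H1N1", sentences stating a relevant relation (cause, treatment) outrank off-topic sentences.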

  17. High performance liquid chromatography for quantification of gatifloxacin in rat plasma following automated on-line solid phase extraction.

    PubMed

    Tasso, Leandro; Dalla Costa, Teresa

    2007-05-01

    An automated system using on-line solid phase extraction and HPLC with fluorimetric detection was developed and validated for quantification of gatifloxacin in rat plasma. The extraction was carried out using C(18) cartridges (BondElut), with a high extraction yield. After washing, gatifloxacin was eluted from the cartridge with mobile phase onto a C(18) HPLC column. The mobile phase consisted of a mixture of phosphoric acid (2.5 mM), methanol, acetonitrile and triethylamine (64.8:15:20:0.2, v/v/v/v, apparent pH(app.) 2.8). All samples and standard solutions were chromatographed at 28 degrees C. The method developed was selective and linear for drug concentrations ranging between 20 and 600 ng/ml. Gatifloxacin recovery ranged from 95.6 to 99.7%, and the limit of quantification was 20 ng/ml. The intra- and inter-assay accuracies were up to 94.3%. The precision did not exceed a CV of 5.8%. A high extraction yield, up to 95%, was obtained. Drug stability in plasma was demonstrated for storage in a freezer at -20 degrees C for up to 1 month, after three freeze-thaw cycles, and for 24 h in the autosampler after processing. The assay has been successfully applied to measure gatifloxacin plasma concentrations in a pharmacokinetic study in rats. PMID:17403594

  18. Automated on-line liquid-liquid extraction system for temporal mass spectrometric analysis of dynamic samples.

    PubMed

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L

    2015-09-24

    Most real samples cannot be infused directly into mass spectrometers because they could contaminate delicate parts of the ion source and ion guides, or cause ion suppression. Conventional sample preparation procedures limit the temporal resolution of analysis. We have developed an automated liquid-liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Each extraction cycle takes 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053-2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of a pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded over 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h(-1)). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. PMID:26423626
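The exponential fitting mentioned above, for release profiles of the form C(t) = C_inf(1 - exp(-kt)), can be reproduced in a few lines. This sketch assumes the plateau concentration C_inf is known and log-linearises the model, which is not necessarily the fitting procedure the authors used:

```python
import math

def fit_release_rate(times_h, conc, c_inf):
    """Estimate the first-order release constant k in
    C(t) = c_inf * (1 - exp(-k * t)) by log-linearising:
    ln(1 - C/c_inf) = -k * t, then least squares through the origin."""
    xs, ys = [], []
    for t, c in zip(times_h, conc):
        frac = 1.0 - c / c_inf
        if frac > 0:  # skip points at or past the plateau
            xs.append(t)
            ys.append(math.log(frac))
    # Slope through the origin is sum(x*y)/sum(x*x); k is its negative.
    return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

On a noiseless profile generated with k = 0.43 h^-1 this recovers 0.43 exactly; with real extraction-MS data a nonlinear fit of both k and C_inf would be more robust.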

  19. Background Knowledge in Learning-Based Relation Extraction

    ERIC Educational Resources Information Center

    Do, Quang Xuan

    2012-01-01

    In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…

  20. Comparative Assessment of Automated Nucleic Acid Sample Extraction Equipment for Biothreat Agents

    PubMed Central

    Kalina, Warren Vincent; Douglas, Christina Elizabeth; Coyne, Susan Rajnik

    2014-01-01

    Magnetic beads offer superior impurity removal and nucleic acid selection over older extraction methods. The performances of nucleic acid extraction of biothreat agents in blood or buffer by easyMAG, MagNA Pure, EZ1 Advanced XL, and Nordiag Arrow were evaluated. All instruments showed excellent performance in blood; however, the easyMAG had the best precision and versatility. PMID:24452173

  1. Coreference based event-argument relation extraction on biomedical text

    PubMed Central

    2011-01-01

    This paper presents a new approach to exploit coreference information for extracting event-argument (E-A) relations from biomedical documents. This approach has two advantages: (1) it can extract a large number of valuable E-A relations based on the concept of salience in discourse; (2) it enables us to identify E-A relations over sentence boundaries (cross-links) using transitivity of coreference relations. We propose two coreference-based models: a pipeline based on Support Vector Machine (SVM) classifiers, and a joint Markov Logic Network (MLN). We show the effectiveness of these models on a biomedical event corpus. Both models outperform the systems that do not use coreference information. When the two proposed models are compared to each other, joint MLN outperforms pipeline SVM with gold coreference information. PMID:22166257
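The use of coreference transitivity to obtain cross-sentence event-argument links, as described above, reduces to computing the transitive closure of mention clusters. A minimal union-find sketch (the input formats are invented for the illustration):

```python
def coref_closure(pairs):
    """Union-find over coreference pairs; find() returns a cluster
    representative, so mentions linked through any chain compare equal."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)
    return find

def cross_sentence_links(event_args, coref_pairs, canonical_entities):
    """Resolve each event-argument mention to a canonical entity through
    the coreference closure, yielding event-argument links that may
    cross sentence boundaries."""
    find = coref_closure(coref_pairs)
    return [(event, ent)
            for event, arg in event_args
            for ent in canonical_entities
            if find(arg) == find(ent)]
```

If "it" corefers with "the protein", which corefers with "p53" in an earlier sentence, an event whose argument is "it" is linked to p53.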

  2. Toward automated parasitic extraction of silicon photonics using layout physical verifications

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamed; El Shamy, Raghi S.; Madkour, Kareem; Hammouda, Sherif; Swillam, Mohamed A.

    2016-08-01

    A physical verification flow for the layout of silicon photonic circuits is suggested. Simple empirical models are developed to estimate the bend power loss and coupled power in photonic integrated circuits fabricated on standard SOI wafers. These models are utilized in the physical verification flow of the circuit layout to verify reliable fabrication using any electronic design automation tool. The models are accurate compared with electromagnetic solvers, are closed form, and circumvent the need for any EM solver in the verification process, dramatically reducing verification time.
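An empirical closed-form bend-loss check of the kind described above might look like the following sketch. The exponential form is a common empirical shape for waveguide bend loss, but the coefficients and the loss budget here are placeholders, not the paper's fitted values:

```python
import math

def bend_loss_db(radius_um, a=1.0e3, b=0.9):
    """Closed-form bend-loss estimate of the exponential form
    loss = a * exp(-b * R) dB. The coefficients a and b are placeholder
    values for the sketch, not fitted to any real SOI process."""
    return a * math.exp(-b * radius_um)

def check_layout(bend_radii_um, budget_db=0.1):
    """Toy verification rule: flag bends whose modelled loss exceeds a
    per-bend loss budget (the budget value is also a placeholder)."""
    return [r for r in bend_radii_um if bend_loss_db(r) > budget_db]
```

Because the model is closed form, such a check runs over every bend in a layout instantly, which is the point of avoiding an EM solver in the verification loop.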

  3. Automated Development of Feature Extraction Tools for Planetary Science Image Datasets

    NASA Astrophysics Data System (ADS)

    Plesko, C.; Brumby, S.; Asphaug, E.

    2003-03-01

    We explore development of feature extraction algorithms for Mars Orbiter Camera narrow angle data using GENIE machine learning software. The algorithms are successful at detecting craters within the images, and generalize well to a new image.

  4. Automating identification of adverse events related to abnormal lab results using standard vocabularies.

    PubMed

    Brandt, C A; Lu, C C; Nadkarni, P M

    2005-01-01

    Laboratory data need to be imported automatically into central Clinical Study Data Management Systems (CSDMSs), and abnormal laboratory data need to be linked to clinically related adverse events. This import can be automated by mapping laboratory data to standard vocabularies (HL7/LOINC) and then to the metadata within a CSDMS. We have designed a system that uses the UMLS metathesaurus as a common source to map or link abnormal laboratory values to adverse event CTCAE coded terms and grades in the metadata of TrialDB, a generic CSDMS. PMID:16779190
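The linkage of abnormal laboratory values to CTCAE grades can be illustrated with a grading rule based on multiples of the upper limit of normal, the pattern CTCAE uses for enzyme elevations. This sketch shows only the grading arithmetic; the actual system resolves terms through the UMLS metathesaurus, which is not shown:

```python
def ctcae_grade(value, uln):
    """Grade an elevated lab value by multiples of the upper limit of
    normal (ULN), following the CTCAE pattern for enzyme elevations
    (>1-3x ULN grade 1, >3-5x grade 2, >5-20x grade 3, >20x grade 4).
    Illustrative only; real grading rules are per-analyte."""
    ratio = value / uln
    if ratio <= 1:
        return 0  # within normal limits, no adverse event
    if ratio <= 3:
        return 1
    if ratio <= 5:
        return 2
    if ratio <= 20:
        return 3
    return 4
```

An ALT of 100 U/L against a ULN of 40 U/L (2.5x) would thus map to a grade 1 adverse event term.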

  5. Unsupervised entity and relation extraction from clinical records in Italian.

    PubMed

    Alicante, Anita; Corazza, Anna; Isgrò, Francesco; Silvestri, Stefano

    2016-05-01

    This paper proposes and discusses the use of text mining techniques for the extraction of information from clinical records written in Italian. However, as it is very difficult and expensive to obtain annotated material for languages other than English, we only consider unsupervised approaches, where no annotated training set is necessary. We therefore propose a complete system that is structured in two steps. In the first one, domain entities are extracted from the clinical records by means of a metathesaurus and standard natural language processing tools. The second step attempts to discover relations between the entity pairs extracted from the whole set of clinical records. For this last step, we investigate the performance of unsupervised methods such as clustering in the space of entity pairs, represented by an ad hoc feature vector. The resulting clusters are then automatically labelled by using the most significant features. The system has been tested on a fairly large data set of clinical records in Italian, investigating the variation in performance when adopting different similarity measures in the feature space. The results of our experiments show that the proposed unsupervised approach is promising and well suited for a semi-automatic labelling of the extracted relations. PMID:26851833
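The second step above, clustering entity pairs represented by feature vectors, can be sketched with a greedy cosine-similarity pass. This is a stand-in only; the paper evaluates several similarity measures with proper clustering algorithms:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_pairs(vectors, threshold=0.8):
    """Single-pass greedy clustering of entity-pair feature vectors:
    join the first cluster whose running centroid is similar enough,
    otherwise start a new cluster. Threshold is a made-up value."""
    clusters, centroids = [], []
    for i, v in enumerate(vectors):
        for j, cen in enumerate(centroids):
            if cosine(v, cen) >= threshold:
                clusters[j].append(i)
                n = len(clusters[j])
                # Incremental update of the centroid (elementwise mean).
                centroids[j] = [(x * (n - 1) + y) / n for x, y in zip(cen, v)]
                break
        else:
            clusters.append([i])
            centroids.append(list(v))
    return clusters
```

Each resulting cluster of entity pairs would then be labelled with its most significant features, as the abstract describes.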

  6. A fully automated system for analysis of pesticides in water: on-line extraction followed by liquid chromatography-tandem photodiode array/postcolumn derivatization/fluorescence detection.

    PubMed

    Patsias, J; Papadopoulou-Mourkidou, E

    1999-01-01

    A fully automated system for on-line solid phase extraction (SPE) followed by high-performance liquid chromatography (HPLC) with tandem detection by a photodiode array detector and a fluorescence detector (after postcolumn derivatization) was developed for analysis of many chemical classes of pesticides and their major conversion products in aquatic systems. An automated on-line SPE system (Prospekt) operated with reversed-phase cartridges (PRP-1) extracts analytes from a 100 mL acidified (pH = 3), filtered water sample. On-line HPLC analysis is performed with a 15 cm C18 analytical column eluted with a phosphate (pH = 3)-acetonitrile mobile phase in a 25 min linear gradient mode. Solutes are detected by tandem diode array/derivatization/fluorescence detection. The system is controlled and monitored by a single computer operated with Millennium software. Recoveries of most analytes in samples fortified at 1 microgram/L are > 90%, with relative standard deviation values of < 5%. For a few very polar analytes, mostly N-methylcarbamoyloximes (i.e., aldicarb sulfone, methomyl, and oxamyl), recoveries are < 20%. However, for these compounds, as well as for the rest of the N-methylcarbamates except for aldicarb sulfoxide and butoxycarboxim, the limits of detection (LODs) are 0.005-0.05 microgram/L. LODs for aldicarb sulfoxide and butoxycarboxim are 0.2 and 0.1 microgram/L, respectively. LODs for the rest of the analytes except 4-nitrophenol, bentazone, captan, decamethrin, and MCPA are 0.05-0.1 microgram/L. LODs for the latter compounds are 0.2-1.0 microgram/L. The system can be operated unattended. PMID:10444834

  7. Towards a Relation Extraction Framework for Cyber-Security Concepts

    SciTech Connect

    Jones, Corinne L; Bridges, Robert A; Huffer, Kelly M; Goodall, John R

    2015-01-01

    In order to assist security analysts in obtaining information pertaining to their network, such as novel vulnerabilities, exploits, or patches, information retrieval methods tailored to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi-supervised NLP and implement a bootstrapping algorithm for extracting security entities and their relationships from text. The algorithm requires little input data, specifically, a few relations or patterns (heuristics for identifying relations), and incorporates an active learning component which queries the user on the most important decisions to prevent drift away from the desired relations. Preliminary testing on a small corpus shows promising results, obtaining a precision of 0.82.
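The bootstrapping loop described above, alternating between inducing patterns from known entity pairs and harvesting new pairs that those patterns match, can be sketched as follows. This is a bare-bones illustration on surface strings; the active-learning component that curbs drift is omitted:

```python
def bootstrap(corpus, seed_pairs, max_iters=3):
    """Minimal pattern-bootstrapping sketch: patterns are the literal
    text between a known pair's mentions; new pairs are the tokens
    flanking a known pattern."""
    pairs = set(seed_pairs)
    patterns = set()
    for _ in range(max_iters):
        # Induce patterns from sentences containing a known pair.
        for sent in corpus:
            for a, b in list(pairs):
                if a in sent and b in sent and sent.index(a) < sent.index(b):
                    patterns.add(sent[sent.index(a) + len(a):sent.index(b)])
        # Harvest new pairs matched by any known pattern.
        for sent in corpus:
            for pat in patterns:
                if pat and pat in sent:
                    left, _, right = sent.partition(pat)
                    a = left.strip().split()[-1] if left.strip() else ""
                    b = right.strip().split()[0] if right.strip() else ""
                    if a and b:
                        pairs.add((a, b.rstrip(".")))
    return pairs, patterns
```

Seeded with one (vulnerability, software) pair, the pattern " affects " generalizes to new pairs in unseen sentences; without the active-learning check, such loops famously drift, which is the problem the paper addresses.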

  8. Technical Note: Semi-automated effective width extraction from time-lapse RGB imagery of a remote, braided Greenlandic river

    NASA Astrophysics Data System (ADS)

    Gleason, C. J.; Smith, L. C.; Finnegan, D. C.; LeWinter, A. L.; Pitcher, L. H.; Chu, V. W.

    2015-06-01

    River systems in remote environments are often challenging to monitor and understand where traditional gauging apparatus are difficult to install or where safety concerns prohibit field measurements. In such cases, remote sensing, especially terrestrial time-lapse imaging platforms, offers a means to better understand these fluvial systems. One such environment is found at the proglacial Isortoq River in southwestern Greenland, a river with a constantly shifting floodplain and remote Arctic location that make gauging and in situ measurements all but impossible. In order to derive relevant hydraulic parameters for this river, two true color (RGB) cameras were installed in July 2011, and these cameras collected over 10 000 half-hourly time-lapse images of the river by September 2012. Existing approaches for extracting hydraulic parameters from RGB imagery require manual or supervised classification of images into water and non-water areas, a task that was impractical for the volume of data in this study. As such, automated image filters were developed that removed images with environmental obstacles (e.g., shadows, sun glint, snow) from the processing stream. Further image filtering was accomplished via a novel automated histogram similarity filtering process. This similarity filtering allowed successful (mean accuracy 79.6 %) supervised classification of filtered images from training data collected from just 10 % of those images. Effective width, a hydraulic parameter highly correlated with discharge in braided rivers, was extracted from these classified images, producing a hydrograph proxy for the Isortoq River between 2011 and 2012. This hydrograph proxy shows agreement with historic flooding observed in other parts of Greenland in July 2012 and offers promise that the imaging platform and processing methodology presented here will be useful for future monitoring studies of remote rivers.
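The histogram similarity filtering step can be illustrated with a grayscale histogram-intersection filter. The feature, bin count, and cutoff here are invented for the sketch, as the abstract does not specify them:

```python
def intensity_histogram(pixels, bins=8):
    """Normalized histogram of 0-255 grayscale pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def keep_similar(images, reference, cutoff=0.7):
    """Keep images whose histogram intersection with a clear-day
    reference frame exceeds the cutoff; heavily shadowed, glinted, or
    snow-covered frames tend to fall below it."""
    ref_h = intensity_histogram(reference)
    return [img for img in images
            if histogram_intersection(intensity_histogram(img), ref_h) >= cutoff]
```

Frames passing the filter form a visually consistent subset, which is what makes training a classifier on only 10 % of them feasible.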

  9. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult access to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often require user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size.
Then, discontinuity set orientation is calculated using Kernel Density Estimation and
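The coplanar-surface identification step above (facet fitting on point neighborhoods) can be illustrated with a least-squares plane fit and residual check. This stands in for the PCA-based fitting the tool actually uses; unlike PCA, a z = ax + by + c fit cannot represent vertical planes:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3D points, solved
    via the 3x3 normal equations with Gaussian elimination."""
    sx = sy = sz = sxx = sxy = syy = sxz = syz = 0.0
    n = len(points)
    for x, y, z in points:
        sx += x; sy += y; sz += z
        sxx += x * x; sxy += x * y; syy += y * y
        sxz += x * z; syz += y * z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [sxz, syz, sz]
    # Forward elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (rhs[i] - sum(A[i][j] * coeffs[j]
                                  for j in range(i + 1, 3))) / A[i][i]
    return coeffs  # a, b, c

def max_residual(points, coeffs):
    """Largest out-of-plane deviation; small values indicate a facet."""
    a, b, c = coeffs
    return max(abs(z - (a * x + b * y + c)) for x, y, z in points)
```

A point neighborhood whose residuals stay below a tolerance tied to the scanner accuracy would be accepted as a coplanar facet.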

  10. [Corrected Title: Solid-Phase Extraction of Polar Compounds from Water] Automated Electrostatics Environmental Chamber

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Rutz, Jeffrey; Schultz, John

    2005-01-01

    A solid-phase extraction (SPE) process has been developed for removing alcohols, carboxylic acids, aldehydes, ketones, amines, and other polar organic compounds from water. This process can be either a subprocess of a water-reclamation process or a means of extracting organic compounds from water samples for gas-chromatographic analysis. This SPE process is an attractive alternative to an Environmental Protection Agency liquid-liquid extraction process that generates some pollution and does not work in a microgravitational environment. In this SPE process, one forces a water sample through a resin bed by use of positive pressure on the upstream side and/or suction on the downstream side, thereby causing organic compounds from the water to be adsorbed onto the resin. If gas-chromatographic analysis is to be done, the resin is dried by use of a suitable gas, then the adsorbed compounds are extracted from the resin by use of a solvent. Unlike the liquid-liquid process, the SPE process works in both microgravity and Earth gravity. In comparison with the liquid-liquid process, the SPE process is more efficient, extracts a wider range of organic compounds, generates less pollution, and costs less.

  11. Automated extraction of urban trees from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Fan, W.; Chenglu, W.; Jonathan, L.

    2016-03-01

    This paper presents an automatic algorithm to localize and extract urban trees from mobile LiDAR point clouds. First, in order to reduce the number of points to be processed, the ground points are filtered out from the raw point clouds, and the non-ground points are segmented into supervoxels. Then, a novel localization method is proposed to locate the urban trees accurately. Next, a localization-guided segmentation method is proposed to obtain objects. Finally, the features of the objects are extracted, and the feature vectors are classified by random forests trained on manually labeled objects. The proposed method has been tested on a point cloud dataset. The results show that our algorithm efficiently extracts the urban trees.

  12. Analysis of betamethasone in rat plasma using automated solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry. Determination of plasma concentrations in rat following oral and intravenous administration.

    PubMed

    Tamvakopoulos, C S; Neugebauer, J M; Donnelly, M; Griffin, P R

    2002-09-01

    A method is described for the determination of betamethasone in rat plasma by liquid chromatography-tandem mass spectrometry (LC-MS-MS). The analyte was recovered from plasma by solid-phase extraction and subsequently analyzed by LC-MS-MS. A Packard Multiprobe II automated liquid handling system was employed for the preparation and extraction of a 96-well plate containing unknown plasma samples, standards, and quality control samples. Prednisolone, a structurally related steroid, was used as an internal standard. Using the described approach, a limit of quantitation of 2 ng/ml was achieved with a 50 microl aliquot of rat plasma. The described level of sensitivity allowed the determination of betamethasone concentrations and subsequent measurement of kinetic parameters of betamethasone in rat. The combination of automated plasma extraction and the sensitivity and selectivity of LC-MS-MS offers a valuable alternative to the methodologies currently used for the quantitation of steroids in biological fluids. PMID:12137997

  13. An automated system for retrieving herb-drug interaction related articles from MEDLINE

    PubMed Central

    Lin, Kuo; Friedman, Carol; Finkelstein, Joseph

    2016-01-01

    An automated, user-friendly and accurate system for retrieving herb-drug interaction (HDI) related articles in MEDLINE can increase the safety of patients, as well as improve the speed and experience of physicians' article retrieval. Previous studies show that MeSH-based queries associated with negative effects of drugs can be customized, resulting in good performance in retrieving relevant information, but no study has focused on the area of herb-drug interactions (HDI). This paper drew on the characteristics of HDI-related papers to create a multilayer HDI article searching system, which achieved a sensitivity of 92% at a precision of 93% in a preliminary evaluation. Instead of requiring physicians to conduct PubMed searches directly, this system applies a more user-friendly approach: a customized system that enhances PubMed queries, shielding users from having to write queries, deal with PubMed, or read many irrelevant articles. The system provides automated processes and outputs target articles based on the input. PMID:27570662
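The kind of customized, enhanced PubMed query such a system issues on the user's behalf can be illustrated as follows. The MeSH terms and field tags chosen here are illustrative guesses, not the multilayer query set the paper describes:

```python
def build_hdi_query(herb, drug=None):
    """Assemble a PubMed-style boolean query for herb-drug interaction
    articles from simple user input (herb name, optional drug name)."""
    interaction_block = ('("herb-drug interactions"[MeSH Terms] OR '
                         '"drug interactions"[MeSH Terms])')
    parts = [f'"{herb}"[Title/Abstract]', interaction_block]
    if drug:
        parts.append(f'"{drug}"[Title/Abstract]')
    return " AND ".join(parts)
```

The user supplies only "ginkgo" and "warfarin"; the system expands that into a tagged boolean query, which is the shielding-from-query-writing idea in the abstract.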

  14. An automated system for retrieving herb-drug interaction related articles from MEDLINE.

    PubMed

    Lin, Kuo; Friedman, Carol; Finkelstein, Joseph

    2016-01-01

    An automated, user-friendly and accurate system for retrieving herb-drug interaction (HDI) related articles in MEDLINE can increase the safety of patients, as well as improve the speed and experience of physicians' article retrieval. Previous studies show that MeSH-based queries associated with negative effects of drugs can be customized, resulting in good performance in retrieving relevant information, but no study has focused on the area of herb-drug interactions (HDI). This paper drew on the characteristics of HDI-related papers to create a multilayer HDI article searching system, which achieved a sensitivity of 92% at a precision of 93% in a preliminary evaluation. Instead of requiring physicians to conduct PubMed searches directly, this system applies a more user-friendly approach: a customized system that enhances PubMed queries, shielding users from having to write queries, deal with PubMed, or read many irrelevant articles. The system provides automated processes and outputs target articles based on the input. PMID:27570662

  15. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost-effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. Knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings provide valuable insights, as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.

  16. Kernel-Based Learning for Domain-Specific Relation Extraction

    NASA Astrophysics Data System (ADS)

    Basili, Roberto; Giannone, Cristina; Del Vescovo, Chiara; Moschitti, Alessandro; Naggar, Paolo

    In a specific process of business intelligence, i.e. investigation of organized crime, empirical language processing technologies can play a crucial role. The analysis of transcriptions of investigative activities, such as police interrogatories, for the recognition and storage of complex relations among people and locations is a very difficult and time-consuming task, ultimately carried out by pools of experts. We discuss here an inductive relation extraction platform that opens the way to much cheaper and more consistent workflows. The presented empirical investigation shows that accurate results, comparable to those of the expert teams, can be achieved, and parametrization allows one to fine-tune the system behavior to fit domain-specific requirements.

  17. Automation of ⁹⁹Tc extraction by LOV prior to ICP-MS detection: application to environmental samples.

    PubMed

    Rodríguez, Rogelio; Leal, Luz; Miranda, Silvia; Ferrer, Laura; Avivar, Jessica; García, Ariel; Cerdà, Víctor

    2015-02-01

    A new, fast, automated and inexpensive sample pre-treatment method for (99)Tc determination by inductively coupled plasma-mass spectrometry (ICP-MS) detection is presented. The miniaturized approach is based on a lab-on-valve (LOV) system, allowing automatic separation and preconcentration of (99)Tc. Selectivity is provided by the solid phase extraction system used (TEVA resin), which retains the pertechnetate ion in dilute nitric acid solution. The proposed system has advantages such as minimization of sample handling, reduction of reagent volumes, and improvement of intermediate precision and sample throughput, offering a significant decrease in both time and cost per analysis in comparison to other flow techniques and batch methods. The proposed LOV system has been successfully applied to different samples of environmental interest (water and soil) with satisfactory recoveries, between 94% and 98%. The detection limit (LOD) of the developed method is 0.005 ng. The high durability of the resin, the low amount required (32 mg), the good intermediate precision (RSD 3.8%) and repeatability (RSD 2%), and the high extraction frequency (up to 5 h(-1)) make this method an inexpensive, high-precision and fast tool for monitoring (99)Tc in environmental samples. PMID:25435232

  18. Fully Automated Liquid Extraction-Based Surface Sampling and Ionization Using a Chip-Based Robotic Nanoelectrospray Platform

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2010-01-01

    A fully automated liquid extraction-based surface sampling device utilizing an Advion NanoMate chip-based infusion nanoelectrospray ionization system is reported. Analyses were enabled for discrete spot sampling by using the Advanced User Interface of the current commercial control software. This software interface provided the parameter control necessary for the NanoMate robotic pipettor to both form and withdraw a liquid microjunction for sampling from a surface. The system was tested with three analytically important surface types, viz., spotted sample arrays on a MALDI plate, dried blood spots on paper, and whole-body thin tissue sections from drug-dosed mice. The qualitative and quantitative data were consistent with previous studies employing other liquid extraction-based surface sampling techniques. The successful analyses performed here utilized the hardware and software elements already present in the NanoMate system developed to handle and analyze liquid samples. Implementation of an appropriate sample (surface) holder, a solvent reservoir, faster movement of the robotic arm, finer control over solvent flow rate when dispensing and retrieving the solution at the surface, and the ability to sample from any location on a surface would improve the analytical performance and utility of the platform.

  19. Sequential automated fusion/extraction chromatography methodology for the dissolution of uranium in environmental samples for mass spectrometric determination.

    PubMed

    Milliard, Alex; Durand-Jézéquel, Myriam; Larivière, Dominic

    2011-01-17

    An improved methodology has been developed, based on dissolution by automated fusion followed by extraction chromatography, for the detection and quantification of uranium in environmental matrices by mass spectrometry. A rapid fusion protocol (<8 min) was investigated for the complete dissolution of various samples. It could be preceded, if required, by an effective ashing procedure using the M4 fluxer and a newly designed platinum lid. Complete dissolution of the sample was observed and measured using standard reference materials (SRMs), and experimental data show no evidence of cross-contamination of crucibles when LiBO₂/LiBr melts were used. The use of an M4 fusion unit also improved repeatability in sample preparation over muffle furnace fusion. Instrumental issues originating from the presence of high salt concentrations in the digestate after lithium metaborate fusion were also mitigated using an extraction chromatography (EXC) protocol aimed at removing lithium and interfering matrix constituents prior to the elution of uranium. The sequential methodology, which can be performed simultaneously on three samples, requires less than 20 min per sample for fusion and separation. It was successfully coupled to inductively coupled plasma mass spectrometry (ICP-MS), achieving detection limits below 100 pg kg⁻¹ for 5-300 mg of sample. PMID:21167982

  20. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data.

    PubMed

    Prieto, Sandra P; Lai, Keith K; Laryea, Jonathan A; Mizell, Jason S; Muldoon, Timothy J

    2016-04-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
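    The crypt-shape features named above (crypt area, crypt circularity) follow standard geomorphometric definitions. As an illustrative sketch (not the authors' QFEA code), circularity can be computed from a region's area and perimeter as 4πA/P², which equals 1.0 for a perfect circle and decreases for irregular shapes:

```python
import math

def circularity(area, perimeter):
    """Shape circularity: 4*pi*A / P^2 (1.0 for a perfect circle)."""
    return 4.0 * math.pi * area / perimeter ** 2

# A circle of radius r has area pi*r^2 and perimeter 2*pi*r -> circularity 1.0
r = 10.0
print(round(circularity(math.pi * r ** 2, 2.0 * math.pi * r), 3))  # 1.0
```

    In practice the area and perimeter would come from a segmented crypt region; elongated or distorted crypts score well below 1.0, which is one way such a feature can separate normal from dysplastic morphology.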

  1. Progress in automated extraction and purification of in situ 14C from quartz: Results from the Purdue in situ 14C laboratory

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Goehring, Brent; Wilson, Jim; Kubley, Thomas; Caffee, Marc

    2015-10-01

    Current extraction methods for in situ 14C from quartz [e.g., Lifton et al. (2001), Pigati et al. (2010), Hippe et al. (2013)] are time-consuming and repetitive, making them an attractive target for automation. We report on the status of in situ 14C extraction and purification systems originally automated at the University of Arizona that have now been reconstructed and upgraded at the Purdue Rare Isotope Measurement Laboratory (PRIME Lab). The Purdue in situ 14C laboratory builds on the flow-through extraction system design of Pigati et al. (2010), automating most of the procedure by retrofitting existing valves with external servo-controlled actuators, regulating the pressure of research-purity O2 inside the furnace tube via a PID-based pressure controller in concert with an inlet mass flow controller, and installing an automated liquid N2 distribution system, all driven by LabVIEW® software. A separate system for cryogenic CO2 purification, dilution, and splitting is also fully automated, ensuring a highly repeatable process regardless of the operator. We present results from procedural blanks and an intercomparison material (CRONUS-A), as well as results of experiments to increase the amount of material used in extraction, from the standard 5 g to 10 g or above. Results thus far are quite promising, with procedural blanks comparable to previous work and significant improvements in reproducibility for CRONUS-A measurements. The latter analyses also demonstrate the feasibility of quantitative extraction of in situ 14C from sample masses up to 10 g. Our lab is now analyzing unknowns routinely, but lowering overall blank levels is the focus of ongoing research.
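    The furnace pressure regulation described above is a standard PID control loop driving an inlet flow actuator toward a pressure setpoint. A minimal sketch of the idea (gains, setpoint, and the toy plant model are all hypothetical, not taken from the Purdue system):

```python
class PID:
    """Minimal discrete PID controller for a pressure setpoint (illustrative)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order "furnace" toward a 50 (arbitrary units) setpoint
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=50.0)
pressure = 0.0
for _ in range(200):
    flow = pid.update(pressure, dt=0.1)   # controller output -> inlet flow
    pressure += 0.1 * flow                # toy plant: pressure follows flow
print(round(pressure, 1))
```

    In the real system the controller output would command the mass flow controller while the measured pressure comes from the furnace-tube gauge; the loop structure is the same.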

  2. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    PubMed Central

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-01-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method. PMID:26260921
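    The reported discrimination (area under the ROC curve > 0.82) can be read as the probability that a randomly chosen cirrhotic case receives a higher score than a randomly chosen non-cirrhotic one. A small rank-based sketch of that computation (the scores below are hypothetical, not study data):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores: cirrhotic (pos) vs. non-cirrhotic (neg) biopsies
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.5, 0.4, 0.3, 0.2]
print(round(auc(pos, neg), 2))  # 0.95
```

    This pairwise form is equivalent to integrating the ROC curve and makes clear why AUC is threshold-free.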

  3. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-08-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.

  4. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which has been quantified only in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it to regions of interest around (i) the diaphragm and (ii) the tumor, and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung tumor rotation values were -0.6 ± 2.3° and -1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
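    The point-based rigid registration step recovers the rotation between matched feature points in consecutive frames; in 2D (i.e., within one cine MRI plane) this has a closed-form solution after removing the translational component. A minimal sketch, illustrative rather than the authors' implementation:

```python
import math

def rigid_rotation_2d(src, dst):
    """Closed-form 2D rotation angle (degrees) best aligning src points to dst,
    after subtracting the centroids (removes the translational component)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        dot += xs * xd + ys * yd      # sum of dot products
        cross += xs * yd - ys * xd    # sum of 2D cross products
    return math.degrees(math.atan2(cross, dot))

# Feature points rotated by 5 degrees should be recovered as ~5 degrees
theta = math.radians(5.0)
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, -2.0)]
rot = [(x * math.cos(theta) - y * math.sin(theta),
        x * math.sin(theta) + y * math.cos(theta)) for x, y in pts]
print(round(rigid_rotation_2d(pts, rot), 2))  # 5.0
```

    With noisy feature matches the same formula yields the least-squares rotation estimate, which is why point-based rigid registration is robust to imperfect feature localization.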

  5. Automated extraction of clinical traits of multiple sclerosis in electronic medical records

    PubMed Central

    Davis, Mary F; Sriram, Subramaniam; Bush, William S; Denny, Joshua C; Haines, Jonathan L

    2013-01-01

    Objectives The clinical course of multiple sclerosis (MS) is highly variable, and research data collection is costly and time consuming. We evaluated natural language processing techniques applied to electronic medical records (EMR) to identify MS patients and the key clinical traits of their disease course. Materials and methods We used four algorithms based on ICD-9 codes, text keywords, and medications to identify individuals with MS from a de-identified, research version of the EMR at Vanderbilt University. Using a training dataset of the records of 899 individuals, algorithms were constructed to identify and extract detailed information regarding the clinical course of MS from the text of the medical records, including clinical subtype, presence of oligoclonal bands, year of diagnosis, year and origin of first symptom, Expanded Disability Status Scale (EDSS) scores, timed 25-foot walk scores, and MS medications. Algorithms were evaluated on a test set validated by two independent reviewers. Results We identified 5789 individuals with MS. For all clinical traits extracted, precision was at least 87% and specificity was greater than 80%. Recall values for clinical subtype, EDSS scores, and timed 25-foot walk scores were greater than 80%. Discussion and conclusion This collection of clinical data represents one of the largest databases of detailed, clinical traits available for research on MS. This work demonstrates that detailed clinical information is recorded in the EMR and can be extracted for research purposes with high reliability. PMID:24148554
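    Extraction of structured scores such as EDSS from free-text notes is typically pattern-based. A toy sketch of the idea (the regex and the note text are hypothetical, not the Vanderbilt algorithms):

```python
import re

# Toy pattern matching mentions such as "EDSS 3.5" or "EDSS score of 6"
EDSS_RE = re.compile(
    r"\bEDSS(?:\s+score)?(?:\s+of)?\s*[:=]?\s*(\d(?:\.\d)?)\b",
    re.IGNORECASE)

def extract_edss(note):
    """Return all EDSS scores found in a clinical note, as floats."""
    return [float(m) for m in EDSS_RE.findall(note)]

note = "Relapsing-remitting MS. EDSS score of 3.5 today, up from EDSS 2.0."
print(extract_edss(note))  # [3.5, 2.0]
```

    Production systems layer negation handling, section detection, and medication dictionaries on top of such patterns, but the core trait extraction is of this form.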

  6. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system; only marginal optimization of parameters was necessary. The automation, relying on an x-y-z robot after manual protein precipitation, includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/ml range). To the best of our knowledge, this application is the first reported in the literature employing this sample preparation system. PMID:24788888

  7. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    PubMed

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification is noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranges from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. PMID:25561069
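    The reported recall, precision, and F-measure follow the standard definitions. A small sketch of evaluating extracted findings against a manually reviewed gold standard (report IDs and findings below are hypothetical):

```python
def prf(extracted, gold):
    """Precision, recall, and F1 of a set of extracted findings vs. a gold set."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# (report_id, finding) pairs: gold = manual review, extracted = NLP output
gold = {("rpt1", "mass"), ("rpt2", "calcification"), ("rpt3", "cyst")}
extracted = {("rpt1", "mass"), ("rpt2", "calcification"), ("rpt4", "mass")}
p, r, f = prf(extracted, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

    Scoring per (report, finding) pair rather than per report is what allows the per-modality, per-data-element accuracies quoted above.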

  8. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    He, Y.; Zhang, C.; Fraser, C. S.

    2014-08-01

    This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model and are useful for a variety of applications. Airborne LiDAR provides a large-scale elevation representation of urban scenes and as such is an important data source for object reconstruction in spatial information systems. However, LiDAR points on building edges often exhibit a jagged pattern, due partially to occlusion from neighbouring objects, such as overhanging trees, or to the nature of the data itself, including unavoidable noise and irregular point distributions. Explicit 3D reconstruction may thus result in irregular or incomplete building polygons. In the presented work, a vertex-driven Douglas-Peucker method is developed to generate polygonal hypotheses from points forming initial building outlines. An energy function is adopted to examine and evaluate each hypothesis, and the optimal polygon is determined through energy minimization. The energy minimization also plays a key role in bridging gaps where the building outlines are ambiguous due to insufficient LiDAR points. In formulating the energy function, hard constraints such as parallelism and perpendicularity of building edges are imposed, and local and global adjustments are applied. The developed approach has been extensively tested and evaluated on datasets with varying point cloud density over different terrain types. Results are presented and analysed. The successful reconstruction of building footprints of varying structural complexity, along with a quantitative assessment employing accurate reference data, demonstrates the practical potential of the proposed approach.
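    The Douglas-Peucker step generalizes a jagged outline by discarding vertices that deviate little from a simplified polyline. A generic sketch of the classic recursive algorithm (not the authors' vertex-driven variant, which additionally generates multiple polygonal hypotheses):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    if seg == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / seg

def douglas_peucker(points, tol):
    """Classic recursive Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] > tol:
        left = douglas_peucker(points[:imax + 1], tol)
        right = douglas_peucker(points[imax:], tol)
        return left[:-1] + right  # drop duplicated split vertex
    return [points[0], points[-1]]

# A jagged edge collapses to its two endpoints under a 0.5 m tolerance
outline = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
print(douglas_peucker(outline, 0.5))  # [(0, 0), (4, 0)]
```

    In the footprint context, candidate polygons produced by such simplification at different tolerances are then scored by the energy function, with parallelism and perpendicularity constraints steering the choice.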

  9. Validation of an automated solid-phase extraction method for the analysis of 23 opioids, cocaine, and metabolites in urine with ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ramírez Fernández, María del Mar; Van Durme, Filip; Wille, Sarah M R; di Fazio, Vincent; Kummer, Natalie; Samyn, Nele

    2014-06-01

    The aim of this work was to automate a sample preparation procedure extracting morphine, hydromorphone, oxymorphone, norcodeine, codeine, dihydrocodeine, oxycodone, 6-monoacetyl-morphine, hydrocodone, ethylmorphine, benzoylecgonine, cocaine, cocaethylene, tramadol, meperidine, pentazocine, fentanyl, norfentanyl, buprenorphine, norbuprenorphine, propoxyphene, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine from urine samples. Samples were extracted by solid-phase extraction (SPE) with cation exchange cartridges using a TECAN Freedom Evo 100 base robotic system, including a hydrolysis step prior to extraction when required. Block modules were carefully selected in order to use the same consumable material as in the manual procedures, to reduce cost and/or manual sample transfers. Moreover, the present configuration included pressure-monitored pipetting, increasing pipetting accuracy and detecting sampling errors. The compounds were then separated in a chromatographic run of 9 min using a BEH Phenyl analytical column on an ultra-performance liquid chromatography-tandem mass spectrometry system. Optimization of the SPE was performed with different wash conditions and elution solvents. Intra- and inter-day relative standard deviations (RSDs) were within ±15% and bias was within ±15% for most of the compounds. Recovery was >69% (RSD < 11%) and matrix effects ranged from 1 to 26% when compensated with the internal standard. The limits of quantification ranged from 3 to 25 ng/mL depending on the compound. No cross-contamination in the automated SPE system was observed. The extracted samples were stable for 72 h in the autosampler (4°C). This method was applied to authentic samples (from forensic and toxicology cases) and to proficiency testing schemes containing cocaine, heroin, buprenorphine and methadone, offering fast and reliable results. Automation resulted in improved precision and accuracy, and minimal operator intervention, leading to safer sample

  10. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. "Linearly supporting" means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, based on Least Angle Regression (LARS); third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].

  11. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    NASA Astrophysics Data System (ADS)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with dozens of applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors, which employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while extracting cloth-insulated parts as well. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but as yet has no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the use of the well-known methodology of Support Vector Machines (SVMs).
The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  12. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patient information included in a DICOM file, as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application was developed in the Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy and fast to use and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote-access functionality to a PACS server.
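    Outside Matlab, per-slice DICOM header extraction is commonly done with a library such as pydicom, followed by flattening the selected tags into one tabular row per slice. A stdlib-only sketch of the flattening step (the tag names and records below are hypothetical stand-ins for parsed DICOM headers):

```python
import csv
import io

def flatten_metadata(slices, fields):
    """Write selected header fields of every slice to CSV, one row per slice."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for s in slices:
        writer.writerow({f: s.get(f, "") for f in fields})
    return buf.getvalue()

# Hypothetical parsed headers; real code would read them, e.g., with pydicom
slices = [
    {"SliceLocation": -12.5, "KVP": 120, "ExposureTime": 500},
    {"SliceLocation": -10.0, "KVP": 120, "ExposureTime": 500},
]
print(flatten_metadata(slices, ["SliceLocation", "KVP", "ExposureTime"]))
```

    The resulting table plays the same role as the Excel export described above: a single queryable record of acquisition parameters across all slices.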

  13. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m³ of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km² of bathymetry collected in the Belfast Bay, Maine (USA) pockmark field. Our model extracted 1767 pockmarks and found a linear depth-to-diameter relationship for pockmarks field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide, as well as of similar concave features such as impact craters, dolines, or salt pools.
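    Claims of non-random clustering in point fields like this are commonly tested with a nearest-neighbor statistic such as the Clark-Evans ratio: the observed mean nearest-neighbor distance divided by the expectation 1/(2√λ) under complete spatial randomness, where λ is point density. Values well below 1 indicate clustering. A sketch with hypothetical coordinates (not the Belfast Bay data or necessarily the authors' statistic):

```python
import math

def clark_evans(points, area):
    """Clark-Evans ratio: <1 clustered, ~1 random, >1 dispersed (no edge correction)."""
    n = len(points)
    nn_sum = 0.0
    for i, (xi, yi) in enumerate(points):
        nn_sum += min(math.hypot(xi - xj, yi - yj)
                      for j, (xj, yj) in enumerate(points) if j != i)
    observed = nn_sum / n
    expected = 1.0 / (2.0 * math.sqrt(n / area))  # CSR expectation
    return observed / expected

# Two tight clusters in a 100 m x 100 m window -> strongly clustered (R << 1)
pts = [(10, 10), (11, 10), (10, 11), (90, 90), (91, 90), (90, 91)]
print(round(clark_evans(pts, 100.0 * 100.0), 2))
```

    Chain-like arrangements such as those described above would additionally show up in direction-sensitive statistics, which a simple nearest-neighbor ratio does not capture.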

  14. A novel approach for automated shoreline extraction from remote sensing images using low level programming

    NASA Astrophysics Data System (ADS)

    Rigos, Anastasios; Vaiopoulos, Aristidis; Skianis, George; Tsekouras, George; Drakopoulos, Panos

    2015-04-01

    Tracking coastline changes is a crucial task in the context of coastal management, and synoptic remotely sensed data have become an essential tool for this purpose. In this work, within the framework of the BeachTour project, we introduce a new method for shoreline extraction from high-resolution satellite images. It was applied to two images taken by the WorldView-2 satellite (7 channels, 2 m resolution) during July 2011 and August 2014. The location is the well-known tourist destination of Laganas beach, spanning 5 km along the southern part of Zakynthos Island, Greece. The atmospheric correction was performed with the ENVI FLAASH procedure and the final images were validated against hyperspectral field measurements. Using three channels (CH2 = blue, CH3 = green and CH7 = near infrared), the Modified Redness Index image was calculated according to: MRI = CH7² / (CH2 × CH3³). MRI has the property that its value keeps increasing as the water becomes shallower. This is followed by an abrupt reduction at the location of the wet sand, up to the point where the dry shore face begins. After that it remains low-valued throughout the beach zone. Images based on this index were used for the shoreline extraction process, which included the following steps: a) On the MRI-based image, only an area near the shoreline was kept (a process known as image masking). b) On the masked image the Canny edge detection operator was applied. c) Of all edges discovered in step (b), only the biggest was kept. d) If the line revealed in step (c) was unacceptable, i.e. not defining the shoreline or defining only part of it, then either more than one edge from step (c) was kept, or the pixel values of the MRI image were restricted to a particular interval [B_low, B_high] and only those belonging to this interval were kept. Then, steps (a)-(d) were repeated.
Using this method, which is still under development, we were able to extract the shoreline position and reveal its changes during the 3-year period.
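    The band-ratio step above is simple per-pixel arithmetic, and the interval restriction in step (d) is an ordinary mask. A stdlib-only sketch on a toy transect (the channel reflectance values are hypothetical; real code would operate on the WorldView-2 raster bands):

```python
def modified_redness_index(ch2, ch3, ch7):
    """MRI = CH7^2 / (CH2 * CH3^3), computed per pixel."""
    return (ch7 ** 2) / (ch2 * ch3 ** 3)

def keep_interval(values, low, high):
    """Step (d): keep only pixels whose MRI lies in [low, high]; zero the rest."""
    return [v if low <= v <= high else 0.0 for v in values]

# Toy 1 x 3 transect: deep water, shallow water, dry sand (values hypothetical)
ch2 = [0.08, 0.06, 0.20]   # blue
ch3 = [0.07, 0.05, 0.25]   # green
ch7 = [0.01, 0.04, 0.30]   # near infrared
mri = [modified_redness_index(b, g, n) for b, g, n in zip(ch2, ch3, ch7)]
print([round(v, 1) for v in mri])        # peaks over the shallow-water pixel
print(keep_interval(mri, 100.0, 300.0))  # interval mask isolates that pixel
```

    The peak over shallow water followed by the drop at dry land is exactly the profile the Canny step exploits to localize the shoreline edge.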

  15. Automated on-line renewable solid-phase extraction-liquid chromatography exploiting multisyringe flow injection-bead injection lab-on-valve analysis.

    PubMed

    Quintana, José Benito; Miró, Manuel; Estela, José Manuel; Cerdà, Víctor

    2006-04-15

    In this paper, the third generation of flow injection analysis, also named the lab-on-valve (LOV) approach, is proposed for the first time as a front end to high-performance liquid chromatography (HPLC) for on-line solid-phase extraction (SPE) sample processing by exploiting the bead injection (BI) concept. The proposed microanalytical system based on discontinuous programmable flow features automated packing (and withdrawal after single use) of a small amount of sorbent (<5 mg) into the microconduits of the flow network and quantitative elution of sorbed species into a narrow band (150 microL of 95% MeOH). The hyphenation of multisyringe flow injection analysis (MSFIA) with BI-LOV prior to HPLC analysis is utilized for on-line postextraction treatment to ensure chemical compatibility between the eluate medium and the initial HPLC gradient conditions. This circumvents the band-broadening effect commonly observed in conventional on-line SPE-based sample processors due to the low eluting strength of the mobile phase. The potential of the novel MSFI-BI-LOV hyphenation for on-line handling of complex environmental and biological samples prior to reversed-phase chromatographic separations was assessed for the expeditious determination of five acidic pharmaceutical residues (viz., ketoprofen, naproxen, bezafibrate, diclofenac, and ibuprofen) and one metabolite (viz., salicylic acid) in surface water, urban wastewater, and urine. To this end, the copolymeric divinylbenzene-co-n-vinylpyrrolidone beads (Oasis HLB) were utilized as renewable sorptive entities in the micromachined unit. The automated analytical method features relative recovery percentages of >88%, limits of detection within the range 0.02-0.67 ng mL(-1), and coefficients of variation <11% for the column renewable mode and gives rise to a drastic reduction in operation costs (approximately 25-fold) as compared to on-line column switching systems. PMID:16615800

  16. Automated structure extraction and XML conversion of life science database flat files.

    PubMed

    Philippi, Stephan; Köhler, Jacob

    2006-10-01

    In the light of the increasing number of biological databases, their integration is a fundamental prerequisite for answering complex biological questions. Database integration, therefore, is an important area of research in bioinformatics. Since most of the publicly available life science databases are still exclusively exchanged by means of proprietary flat files, database integration requires parsers for very different flat file formats. Unfortunately, the development and maintenance of database specific flat file parsers is a nontrivial and time-consuming task, which takes considerable effort in large-scale integration scenarios. This paper introduces heuristically based concepts for automatic structure extraction from life science database flat files. On the basis of these concepts the FlatEx prototype is developed for the automatic conversion of flat files into XML representations. PMID:17044405
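
    As an illustration of the kind of conversion FlatEx automates, the following sketch turns a hypothetical EMBL-style flat-file record into XML. Unlike FlatEx, it assumes the line-type convention (a two-letter code followed by the field content) is known in advance rather than inferred heuristically.

```python
import xml.etree.ElementTree as ET

# Hypothetical EMBL-style record: two-letter line-type code, then field content
FLAT = """\
ID   AB000001
DE   Example entry for demonstration
OS   Homo sapiens
SQ   atgcgtacgt
"""

def flat_to_xml(text):
    """Map each tagged flat-file line to an XML element named by its code."""
    entry = ET.Element("entry")
    for line in text.splitlines():
        if len(line) > 5 and line[:2].isalpha():
            field = ET.SubElement(entry, line[:2].lower())
            field.text = line[5:].strip()
    return entry

xml_str = ET.tostring(flat_to_xml(FLAT), encoding="unicode")
```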

  17. Robust semi-automated path extraction for visualising stenosis of the coronary arteries.

    PubMed

    Mueller, Daniel; Maeder, Anthony

    2008-09-01

    Computed tomography angiography (CTA) is useful for diagnosing and planning treatment of heart disease. However, contrast agent in surrounding structures (such as the aorta and left ventricle) makes 3D visualisation of the coronary arteries difficult. This paper presents a composite method employing segmentation and volume rendering to overcome this issue. A key contribution is a novel Fast Marching minimal path cost function for vessel centreline extraction. The resultant centreline is used to compute a measure of vessel lumen, which indicates the degree of stenosis (narrowing of a vessel). Two volume visualisation techniques are presented which utilise the segmented arteries and lumen measure. The system is evaluated and demonstrated using synthetic and clinically obtained datasets. PMID:18603408
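
    The minimal-path idea behind the centreline extraction can be illustrated with a discrete stand-in: Dijkstra's algorithm on a 2-D cost grid in which vessel pixels carry low cost. The paper's method uses a Fast Marching cost function instead, and the toy grid below is an assumption for illustration only.

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra shortest path on a 2-D cost grid: a discrete stand-in for
    Fast Marching minimal-path centreline extraction (low cost = vessel)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Backtrack from goal to start to recover the centreline
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy image: a low-cost "vessel" along row 1 embedded in high-cost background
cost = [[9, 9, 9, 9, 9],
        [9, 1, 1, 1, 9],
        [9, 9, 9, 9, 9]]
path = minimal_path(cost, (1, 1), (1, 3))
```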

  18. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This research is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.
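
    The PCA feature extraction and face synthesis steps can be sketched in a few lines; the random matrix below is a hypothetical stand-in for aligned, vectorised face images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 "face" vectors of 50 pixels each; real input would be
# aligned, vectorised face images of the two groups
faces = rng.normal(size=(20, 50))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD: rows of Vt are the principal components ("eigenfaces")
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_components = 5
features = centered @ Vt[:n_components].T   # each face as 5 PCA coefficients

# A face can be approximately reconstructed (synthesized) from its features
reconstruction = mean_face + features @ Vt[:n_components]
```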

  19. Dispersive liquid-liquid microextraction combined with semi-automated in-syringe back extraction as a new approach for the sample preparation of ionizable organic compounds prior to liquid chromatography.

    PubMed

    Melwanki, Mahaveer B; Fuh, Ming-Ren

    2008-07-11

    Dispersive liquid-liquid microextraction (DLLME) followed by a newly designed semi-automated in-syringe back extraction technique has been developed as an extraction methodology for the extraction of polar organic compounds prior to liquid chromatography (LC) measurement. The method is based on the formation of tiny droplets of the extractant in the sample solution using a water-immiscible organic solvent (extractant) dissolved in a water-miscible organic dispersive solvent. Extraction of the analytes from the aqueous sample into the dispersed organic droplets then took place. The extracting organic phase was separated by centrifuging and the sedimented phase was withdrawn into a syringe. Then in-syringe back extraction was utilized to extract the analytes into an aqueous solution prior to LC analysis. Clenbuterol (CB), a basic organic compound used as a model, was extracted from a basified aqueous sample using 25 microL tetrachloroethylene (TCE, extraction solvent) dissolved in 500 microL acetone (as a dispersive solvent). After separation of the organic extracting phase by centrifuging, CB enriched in the TCE phase was back extracted into 10 microL of 1% aqueous formic acid (FA) within the syringe. Back extraction was facilitated by repeatedly moving the plunger back and forth within the barrel of the syringe, assisted by a syringe pump. Due to the plunger movement, a thin organic film is formed on the inner layer of the syringe that comes in contact with the acidic aqueous phase. Here, CB, a basic analyte, is protonated and back extracted into the FA. Various parameters affecting the extraction efficiency, viz., choice of extraction and dispersive solvent, salt effect, speed of syringe pump, back extraction time period, and effect of concentration of base and acid, were evaluated.
Under optimum conditions, precision, linearity (correlation coefficient, r(2)=0.9966 over the concentration range of 10-1000 ng mL(-1) CB), detection limit (4.9 ng mL(-1)), enrichment factor (175), relative

  20. Superheated liquid extraction of oleuropein and related biophenols from olive leaves.

    PubMed

    Japón-Luján, R; Luque de Castro, M D

    2006-12-15

    Oleuropein and other healthy olive biophenols (OBPs) such as verbacoside, apigenin-7-glucoside and luteolin-7-glucoside have been extracted from olive leaves by using superheated liquids and a static-dynamic approach. Multivariate methodology has been used to carry out a detailed optimisation of the extraction. Under the optimal working conditions, complete removal without degradation of the target analytes was achieved in 13 min. The extract was injected into a chromatograph-photodiode array detector assembly for individual separation-quantification. The proposed approach - which provides more concentrated extracts than previous alternatives - is very useful for studying the partition of analytes between the matrix and the extractant. In addition, the efficacy of superheated liquids to extract OBPs, the simplicity of the experimental setup, its easy automation and low acquisition and maintenance costs make the industrial implementation of the proposed method advisable. PMID:17045596

  1. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.
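
    A common building block of such absorption-feature extraction is continuum removal: dividing each spectrum by its upper convex hull so that band position and depth can be measured. Below is a minimal sketch on a synthetic spectrum; the band at 2.2 μm is an assumed example, not data from the paper.

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull (the continuum);
    absorption features then appear as dips below 1.0."""
    # Build the upper convex hull with a monotone-chain scan (wl must be sorted)
    hull = []
    for x, y in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop hull points that fall below the line to the new point
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)
    return refl / continuum

wl = np.linspace(2.0, 2.4, 81)                        # wavelength in microns
refl = 0.6 + 0.25 * (wl - 2.0)                        # sloping background
refl = refl - 0.15 * np.exp(-((wl - 2.2) / 0.03)**2)  # absorption band at 2.2 um

cr = continuum_removed(wl, refl)
band_center = wl[np.argmin(cr)]   # extracted band position
band_depth = 1.0 - cr.min()       # extracted band depth
```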

  2. Quantification of rosuvastatin in human plasma by automated solid-phase extraction using tandem mass spectrometric detection.

    PubMed

    Hull, C K; Penman, A D; Smith, C K; Martin, P D

    2002-06-01

    An assay employing automated solid-phase extraction (SPE) followed by high-performance liquid chromatography with positive ion TurboIonspray tandem mass spectrometry (LC-MS-MS) was developed and validated for the quantification of rosuvastatin (Crestor) in human plasma. Rosuvastatin is a hydroxy-methyl glutaryl coenzyme A reductase inhibitor currently under development by AstraZeneca. The standard curve range in human plasma was 0.1-30 ng/ml with a lower limit of quantification (LLOQ) verified at 0.1 ng/ml. Inaccuracy was less than 8% and imprecision less than +/-15% at all concentration levels. There was no interference from endogenous substances. The analyte was stable in human plasma following three freeze/thaw cycles and for up to 6 months following storage at both -20 and -70 degrees C. The assay was successfully applied to the analysis of rosuvastatin in human plasma samples derived from clinical trials, allowing the pharmacokinetics of the compound to be determined. PMID:12007766

  3. Automated and portable solid phase extraction platform for immuno-detection of 17β-estradiol in water.

    PubMed

    Heub, Sarah; Tscharner, Noe; Monnier, Véronique; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2015-02-13

    A fully automated and portable system for solid phase extraction (SPE) has been developed for the analysis of the natural hormone 17β-estradiol (E2) in environmental water by enzyme linked immuno-sorbent assay (ELISA). The system has been validated with de-ionized and artificial sea water as model samples and allowed for pre-concentration of E2 at levels of 1, 10 and 100 ng/L with only 100 ml of sample. Recoveries ranged from 24±3% to 107±6% depending on the concentration and sample matrix. The method successfully allowed us to determine the concentration of two seawater samples. A concentration of 15.1±0.3 ng/L of E2 was measured in a sample obtained from a food production process, and 8.8±0.7 ng/L in a sample from the Adriatic Sea. The system would be suitable for continuous monitoring of water quality as it is user friendly, and as the method is reproducible and totally compatible with the analysis of water sample by simple immunoassays and other detection methods such as biosensors. PMID:25604269

  4. Dried Blood Spot Proteomics: Surface Extraction of Endogenous Proteins Coupled with Automated Sample Preparation and Mass Spectrometry Analysis

    NASA Astrophysics Data System (ADS)

    Martin, Nicholas J.; Bunch, Josephine; Cooper, Helen J.

    2013-08-01

    Dried blood spots offer many advantages as a sample format including ease and safety of transport and handling. To date, the majority of mass spectrometry analyses of dried blood spots have focused on small molecules or hemoglobin. However, dried blood spots are a potentially rich source of protein biomarkers, an area that has been overlooked. To address this issue, we have applied an untargeted bottom-up proteomics approach to the analysis of dried blood spots. We present an automated and integrated method for extraction of endogenous proteins from the surface of dried blood spots and sample preparation via trypsin digestion by use of the Advion Biosciences Triversa Nanomate robotic platform. Liquid chromatography tandem mass spectrometry of the resulting digests enabled identification of 120 proteins from a single dried blood spot. The proteins identified cross a concentration range of four orders of magnitude. The method is evaluated and the results discussed in terms of the proteins identified and their potential use as biomarkers in screening programs.

  5. Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models

    PubMed Central

    Chaddad, Ahmad

    2015-01-01

    This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of using the new features to identify GBM in T1 and T2 weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations of MR images. Multiclassifier techniques were considered to evaluate the performance of the feature based scheme in terms of its capability to discriminate GBM and normal tissue. GMM features demonstrated the best performance in a comparative study against principal component analysis (PCA) and wavelet based features. For the T1-WI, the accuracy performance was 97.05% (AUC = 92.73%) with 0.00% missed detection and 2.95% false alarm. In the T2-WI, the same accuracy (97.05%, AUC = 91.70%) value was achieved with 2.95% missed detection and 0.00% false alarm. In FLAIR mode the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detection and 5.89% false alarm. These experimental results are promising for characterizing tumor heterogeneity and hence for early treatment of GBM. PMID:26136774
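
    The core of the approach, fitting a Gaussian mixture to image intensities and using its parameters as features, can be sketched with a small EM loop. The two-component 1-D data below are a hypothetical stand-in for MR voxel intensities, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical intensities: "normal tissue" ~ N(40, 5), "pathologic" ~ N(80, 8)
x = np.concatenate([rng.normal(40, 5, 300), rng.normal(80, 8, 200)])

# EM for a two-component 1-D Gaussian mixture; the fitted parameters
# (weights, means, standard deviations) serve as GMM features
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
var = np.array([x.var(), x.var()])

for _ in range(100):
    # E-step: responsibility of each component for each sample
    pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

gmm_features = np.concatenate([w, mu, np.sqrt(var)])
```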

  6. Towards the automated geomorphometric extraction of talus slopes in Martian landscapes

    NASA Astrophysics Data System (ADS)

    Podobnikar, Tomaž; Székely, Balázs

    2015-01-01

    Terrestrial talus slopes are a common feature of mountainous environments. Their geomorphic form is determined by their being constituted of scree, or similar loose and often poorly sorted material. Martian talus slopes are governed by the different nature of the Martian environment, namely: weaker gravity, the wide availability of loose material, the lack of fluvial erosion and the typicality of large escarpments; all these factors make talus slopes a more striking areomorphic feature on Mars than on Earth. This paper concerns the development of a method for the numerical geomorphometric analysis, parameterization and detection of talus slopes. We design two novel variables, a multidirectional visibility index (MVI) and a relief above (RA), and propose two techniques of talus slope extraction: ISOcluster and progressive Boolean overlay. Our Martian digital terrain model (DTM) was derived from the ESA Mars Express HRSC imagery, with a resolution of 50 m. The method was tested in the study areas of Nanedi Valles and West Candor Chasma. The major challenge concerned the quality of the DTM. The selection of robust variables was therefore crucial. Our final model is to a certain degree DTM-error tolerant. The results show that the method is selective for those slopes that can be considered to constitute a talus slope area, according to the visual interpretation of HRSC images. Based on an analysis of the DTM, it is possible to infer various geological properties and geophysical processes of the Martian and terrestrial environments; this has a range of applications, such as natural hazard risk management.
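
    A simplified reading of the RA variable (relief above a cell within a local window), together with a basic slope computation, can be sketched as follows. The windowed-maximum definition and the toy DEM are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def relief_above(dem, radius=1):
    """For each cell, the maximum elevation within a (2r+1)^2 window minus the
    cell's own elevation -- a simplified reading of the RA variable."""
    rows, cols = dem.shape
    out = np.zeros_like(dem, dtype=float)
    for r in range(rows):
        for c in range(cols):
            window = dem[max(r - radius, 0):r + radius + 1,
                         max(c - radius, 0):c + radius + 1]
            out[r, c] = window.max() - dem[r, c]
    return out

# Toy DEM: an escarpment dropping from 100 m to 0 m across a talus-like ramp
dem = np.array([[100, 100, 100, 100],
                [ 60,  60,  60,  60],
                [ 20,  20,  20,  20],
                [  0,   0,   0,   0]], dtype=float)

ra = relief_above(dem)
dz_dy, dz_dx = np.gradient(dem)     # elevation gradients (cell size = 1)
slope = np.hypot(dz_dx, dz_dy)      # slope magnitude per cell
```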

  7. Development of an automated method for Folin-Ciocalteu total phenolic assay in artichoke extracts.

    PubMed

    Yoo, Kil Sun; Lee, Eun Jin; Leskovar, Daniel; Patil, Bhimanagouda S

    2012-12-01

    We developed a system to run the Folin-Ciocalteu (F-C) total phenolic assay, in artichoke extract samples, which is fully automatic, consistent, and fast. The system uses 2 high performance liquid chromatography (HPLC) pumps, an autosampler, a column heater, a UV/Vis detector, and a data collection system. To test the system, a pump delivered 10-fold diluted F-C reagent solution at a rate of 0.7 mL/min, and 0.4 g/mL sodium carbonate at a rate of 2.1 mL/min. The autosampler injected 10 μL per 1.2 min, which was mixed with the F-C reagent and heated to 65 °C while it passed through the column heater. The heated reactant was mixed with sodium carbonate and color intensity was measured by the detector at 600 nm. The data collection system recorded the color intensity, and the peak area of each sample was converted to the total phenolic content, expressed in μg/mL as either chlorogenic acid or gallic acid. This new method had superb repeatability (0.7% CV) and a high correlation with both the manual method (r(2) = 0.93) and the HPLC method (r(2) = 0.78). Ascorbic acid and quercetin showed variable antioxidant activity, but sugars did not. This method can be efficiently applied to research that needs to test large numbers of samples for antioxidant capacity with speed and accuracy. PMID:23163965
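
    The conversion from peak area to total phenolic concentration relies on a standard linear calibration, which can be sketched as follows. The standard concentrations and peak areas below are hypothetical, not values from the paper.

```python
import numpy as np

# Hypothetical calibration: peak areas recorded for gallic acid standards
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # ug/mL
std_area = np.array([1.2, 58.0, 112.0, 230.0, 455.0])  # detector peak area

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_concentration(area):
    """Convert a sample's F-C peak area to total phenolics (gallic acid eq.)."""
    return (area - intercept) / slope

sample_conc = area_to_concentration(150.0)
```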

  8. Bisphosphonate-Related Osteonecrosis of the Jaw After Tooth Extraction.

    PubMed

    Ribeiro, Ney Robson Bezerra; Silva, Leonardo de Freitas; Santana, Diego Matos; Nogueira, Renato Luiz Maia

    2015-10-01

    Bisphosphonates are widely used for treatment or prevention of bone diseases characterized by high osteoclastic activity. Among the oral medicines used to treat osteoporosis, alendronate has often been used. Despite the low rate of complications associated with its use, cases of osteonecrosis of the jaw after tooth extractions have been reported in the literature. The main symptoms include pain, tooth mobility, swelling, erythema, and ulceration. The risk factors related to osteonecrosis of the jaw associated with bisphosphonates are exposure time to the medicine, route of administration, and the oral surgical procedures performed. The aim of this work is to report a case of a patient showing osteonecrosis of the jaw associated with the use of oral bisphosphonates after tooth extractions. The patient was treated through the suspension of the alendronate with the removal of the necrotic tissue and the foci of infection. After a year's follow-up, the patient showed no signs of recurrence. From the foregoing, the interruption of alendronate use and the surgical treatment associated with antibiotic therapy proved effective in the patient's treatment. PMID:26468839

  9. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  10. Path duplication using GPS carrier based relative position for automated ground vehicle convoys

    NASA Astrophysics Data System (ADS)

    Travis, William E., III

    A GPS based automated convoy strategy to duplicate the path of a lead vehicle is presented in this dissertation. Laser scanners and cameras are not used; all information available comes from GPS or inertial systems. An algorithm is detailed that uses GPS carrier phase measurements to determine relative position between two moving ground vehicles. Error analysis shows the accuracy is centimeter level. It is shown that the time to the first solution fix is dependent upon initial relative position accuracy, and that near instantaneous fixes can be realized if that accuracy is less than 20 centimeters. The relative positioning algorithm is then augmented with inertial measurement units to dead reckon through brief outages. Performance analysis of automotive and tactical grade units shows the twenty centimeter threshold can be maintained for only a few seconds with the automotive grade unit and for 14 seconds with the tactical unit. Next, techniques to determine odometry information in vector form are discussed. Three methods are outlined: dead reckoning of inertial sensors, time differencing GPS carrier measurements to determine change in platform position, and aiding the time differenced carrier measurements with inertial measurements. Partial integration of a tactical grade inertial measurement unit provided the lowest error drift for the scenarios investigated, but the time differenced carrier phase approach provided the most cost feasible approach with similar accuracy. Finally, the relative position and odometry algorithms are used to generate a reference by which an automated following vehicle can replicate a lead vehicle's path of travel. The first method presented uses only the relative position information to determine a relative angle to the leader. Using the relative angle as a heading reference for a steering control causes the follower to drive at the lead vehicle, thereby creating a towing effect on the follower when both vehicles are in motion. 
Effective

  11. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 1: Approaches based on extractant drop-, plug-, film- and microflow-formation.

    PubMed

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-01

    Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state of the art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively and in two parts the developments of automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operation advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME and their static and dynamic operation modes as well as their automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent such as membranes and porous media are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. PMID:26772123

  12. Automated liquid-liquid extraction workstation for library synthesis and its use in the parallel and chromatography-free synthesis of 2-alkyl-3-alkyl-4-(3H)-quinazolinones.

    PubMed

    Carpintero, Mercedes; Cifuentes, Marta; Ferritto, Rafael; Haro, Rubén; Toledo, Miguel A

    2007-01-01

    An automated liquid-liquid extraction workstation has been developed. This module processes up to 96 samples in an automated and parallel mode avoiding the time-consuming and intensive sample manipulation during the workup process. To validate the workstation, a highly automated and chromatography-free synthesis of differentially substituted quinazolin-4(3H)-ones with two diversity points has been carried out using isatoic anhydride as starting material. PMID:17645313

  13. A Logic-Based Approach to Relation Extraction from Texts

    NASA Astrophysics Data System (ADS)

    Horváth, Tamás; Paass, Gerhard; Reichartz, Frank; Wrobel, Stefan

    In recent years, text mining has moved far beyond the classical problem of text classification with an increased interest in more sophisticated processing of large text corpora, such as, for example, evaluations of complex queries. This and several other tasks are based on the essential step of relation extraction. This problem becomes a typical application of learning logic programs by considering the dependency trees of sentences as relational structures and examples of the target relation as ground atoms of a target predicate. In this way, each example is represented by a definite first-order Horn-clause. We show that an adaptation of Plotkin's least general generalization (LGG) operator can effectively be applied to such clauses and propose a simple and effective divide-and-conquer algorithm for listing a certain set of LGGs. We use these LGGs to generate binary features and compute the hypothesis by applying SVM to the feature vectors obtained. Empirical results on the ACE-2003 benchmark dataset indicate that the performance of our approach is comparable to state-of-the-art kernel methods.
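
    Plotkin's LGG operator on terms, the building block applied here to clause literals, can be sketched as follows. Constants and functors are represented as strings, and each distinct pair of differing subterms maps to one fresh variable.

```python
def lgg(t1, t2, subst):
    """Least general generalization of two first-order terms (Plotkin).
    Constants are strings; compound terms are tuples (functor, arg1, ...)."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    # Differing terms: each distinct pair maps to the same fresh variable
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{len(subst) + 1}"
    return subst[(t1, t2)]

# lgg(parent(tom, bob), parent(ann, bob)) = parent(X1, bob)
subst = {}
g = lgg(("parent", "tom", "bob"), ("parent", "ann", "bob"), subst)
```

    Note that reusing the same variable for a repeated pair of subterms is what makes the result *least* general: lgg(p(a, a), p(b, b)) is p(X1, X1), not p(X1, X2).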

  14. Automated extraction of pressure ridges from SAR images of sea ice - Comparison with surface truth

    NASA Technical Reports Server (NTRS)

    Vesecky, J. F.; Smith, M. P.; Samadani, R.; Daida, J. M.; Comiso, J. C.

    1991-01-01

    The authors estimate the characteristics of ridges and leads in sea ice from SAR (synthetic aperture radar) images. Such estimates are based on the hypothesis that bright filamentary features in SAR sea ice images correspond with pressure ridges. A data set collected in the Greenland Sea in 1987 allows this hypothesis to be evaluated for X-band SAR images. A preliminary analysis of data collected from SAR images and ice elevation (from a laser altimeter) is presented. It is found that SAR image brightness and ice elevation are clearly related. However, the correlation, using the data and techniques applied, is not strong.

  15. Performance verification of the Maxwell 16 Instrument and DNA IQ Reference Sample Kit for automated DNA extraction of known reference samples.

    PubMed

    Krnajski, Z; Geering, S; Steadman, S

    2007-12-01

    Advances in automation have been made for a number of processes conducted in the forensic DNA laboratory. However, because most robotic systems are designed for high-throughput laboratories batching large numbers of samples, smaller laboratories are left with a limited number of cost-effective options for employing automation. The Maxwell 16 Instrument and DNA IQ Reference Sample Kit marketed by Promega are designed for rapid, automated purification of DNA extracts from sample sets consisting of sixteen or fewer samples. Because the system is based on DNA capture by paramagnetic particles with maximum binding capacity, it is designed to generate extracts with yield consistency. The studies herein enabled evaluation of STR profile concordance, consistency of yield, and cross-contamination performance for the Maxwell 16 Instrument. Results indicate that the system performs suitably for streamlining the process of extracting known reference samples generally used for forensic DNA analysis and has many advantages in a small or moderate-sized laboratory environment. PMID:25869266

  16. Medication Incidents Related to Automated Dose Dispensing in Community Pharmacies and Hospitals - A Reporting System Study

    PubMed Central

    Cheung, Ka-Chun; van den Bemt, Patricia M. L. A.; Bouvy, Marcel L.; Wensing, Michel; De Smet, Peter A. G. M.

    2014-01-01

    Introduction Automated dose dispensing (ADD) is being introduced in several countries and the use of this technology is expected to increase as a growing number of elderly people need to manage their medication at home. ADD aims to improve medication safety and treatment adherence, but it may introduce new safety issues. This descriptive study provides insight into the nature and consequences of medication incidents related to ADD, as reported by healthcare professionals in community pharmacies and hospitals. Methods The medication incidents that were submitted to the Dutch Central Medication incidents Registration (CMR) reporting system were selected and characterized independently by two researchers. Main Outcome Measures Person discovering the incident, phase of the medication process in which the incident occurred, immediate cause of the incident, nature of incident from the healthcare provider's perspective, nature of incident from the patient's perspective, and consequent harm to the patient caused by the incident. Results From January 2012 to February 2013 the CMR received 15,113 incidents: 3,685 (24.4%) incidents from community pharmacies and 11,428 (75.6%) incidents from hospitals. Overall, 1 in 50 reported incidents (268/15,113 = 1.8%) was related to ADD; in community pharmacies more incidents (227/3,685 = 6.2%) were related to ADD than in hospitals (41/11,428 = 0.4%). The immediate cause of an incident was often a change in the patient's medicine regimen or relocation. Most reported incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. Conclusion A proportion of incidents was related to ADD; such incidents are reported regularly, especially by community pharmacies. Most incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. A change in the patient's medicine regimen or relocation was often the immediate cause of an incident.

  17. Evaluation of Three Automated Nucleic Acid Extraction Systems for Identification of Respiratory Viruses in Clinical Specimens by Multiplex Real-Time PCR

    PubMed Central

    Kwon, Aerin; Lee, Kyung-A

    2014-01-01

    A total of 84 nasopharyngeal swab specimens were collected from 84 patients. Viral nucleic acid was extracted by three automated extraction systems: QIAcube (Qiagen, Germany), EZ1 Advanced XL (Qiagen), and MICROLAB Nimbus IVD (Hamilton, USA). Fourteen RNA viruses and two DNA viruses were detected using the Anyplex II RV16 Detection kit (Seegene, Republic of Korea). The EZ1 Advanced XL system demonstrated the best analytical sensitivity for all three viral strains, and the nucleic acids it extracted showed higher positive rates for virus detection than the others. Meanwhile, the MICROLAB Nimbus IVD system comprises fully automated steps from nucleic acid extraction to PCR setup, which could reduce human errors. For the nucleic acids recovered from nasopharyngeal swab specimens, the EZ1 Advanced XL system showed the fewest false negative results and the best concordance rate, and it may be more suitable for detecting various viruses, including RNA and DNA virus strains. Each system showed different sensitivity and specificity for detection of certain viral pathogens and demonstrated different characteristics such as turnaround time and sample capacity. Therefore, these factors should be considered when new nucleic acid extraction systems are introduced to the laboratory. PMID:24868527

  18. The ValleyMorph Tool: An automated extraction tool for transverse topographic symmetry (T-) factor and valley width to valley height (Vf-) ratio

    NASA Astrophysics Data System (ADS)

    Daxberger, Heidi; Dalumpines, Ron; Scott, Darren M.; Riller, Ulrich

    2014-09-01

    In tectonically active regions on Earth, shallow-crustal deformation associated with seismic hazards may pose a threat to human life and property. The study of landform development, such as analysis of the valley width to valley height ratio (Vf-ratio) and the Transverse Topographic Symmetry Factor (T-factor), which delineates drainage basin symmetry, can be used as a relative measure of tectonic activity along fault-bound mountain fronts. The fast evolution of digital elevation models (DEMs) provides an ideal base for remotely sensed tectonomorphic studies of large areas using Geographical Information Systems (GIS). However, manual extraction of the above-mentioned morphologic parameters is tedious and very time-consuming, and basic GIS software suites do not provide the necessary built-in functions. Therefore, we present a newly developed, Python-based, ESRI ArcGIS-compatible tool and stand-alone script, the ValleyMorph Tool. This tool facilitates automated extraction of the Vf-ratio and T-factor data for large regions. Using a digital elevation raster and watershed polygon files as input, the tool provides output in the form of several ArcGIS data tables and shapefiles, ideal for further data manipulation and computation. The coding enables easy application among the ArcGIS user community and code conversion to earlier ArcGIS versions. The ValleyMorph Tool is easy to use due to a simple graphical user interface. The tool is tested for the southern Central Andes using a total of 3,366 watersheds.
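    The two indices the tool automates have simple closed forms. As a hedged sketch (these are the textbook definitions of the Vf-ratio and T-factor, not the ValleyMorph source code; the function names are illustrative):

```python
def vf_ratio(vfw, eld, erd, esc):
    """Valley floor width to valley height ratio:
    Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc)),
    where vfw is the valley floor width, eld/erd are the elevations of
    the left/right valley divides, and esc is the valley floor elevation
    (all in the same length units). Low Vf suggests active incision."""
    return 2.0 * vfw / ((eld - esc) + (erd - esc))

def t_factor(da, dd):
    """Transverse Topographic Symmetry Factor: T = Da / Dd, where da is
    the distance from the basin midline to the active channel and dd the
    distance from the midline to the basin divide. T = 0 for a fully
    symmetric basin; T approaches 1 as the channel hugs the divide."""
    return da / dd
```

    Per watershed, such values would be evaluated along many cross-valley profiles; the tool's actual sampling strategy is described in the paper.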

  19. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  20. Submicrometric Magnetic Nanoporous Carbons Derived from Metal-Organic Frameworks Enabling Automated Electromagnet-Assisted Online Solid-Phase Extraction.

    PubMed

    Frizzarin, Rejane M; Palomino Cabello, Carlos; Bauzà, Maria Del Mar; Portugal, Lindomar A; Maya, Fernando; Cerdà, Víctor; Estela, José M; Turnes Palomino, Gemma

    2016-07-19

    We present the first application of submicrometric magnetic nanoporous carbons (μMNPCs) as sorbents for automated solid-phase extraction (SPE). Small zeolitic imidazolate framework-67 crystals are obtained at room temperature and directly carbonized under an inert atmosphere to obtain submicrometric nanoporous carbons containing magnetic cobalt nanoparticles. The μMNPCs have a high contact area and high stability, and their preparation is simple and cost-effective. The prepared μMNPCs are exploited as sorbents in a microcolumn format in a sequential injection analysis (SIA) system with online spectrophotometric detection, which includes a specially designed three-dimensional (3D)-printed holder containing an automatically actuated electromagnet. The combined action of permanent magnets and the automatically actuated electromagnet enabled the movement of the solid bed of particles inside the microcolumn, preventing their aggregation, increasing the versatility of the system, and increasing the preconcentration efficiency. The method was optimized using a full factorial design and a Doehlert matrix. The developed system was applied to the determination of anionic surfactants, exploiting the retention of the ion pairs formed with Methylene Blue on the μMNPC. Using sodium dodecyl sulfate as a model analyte, quantification was linear from 50 to 1000 μg L⁻¹, the detection limit was 17.5 μg L⁻¹, the coefficient of variation (n = 8; 100 μg L⁻¹) was 2.7%, and the analysis throughput was 13 h⁻¹. The developed approach was applied to the determination of anionic surfactants in water samples (natural water, groundwater, and wastewater), yielding recoveries of 93% to 110% (95% confidence level). PMID:27336802
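    The quantification step above (a linear calibration range with concentrations read back from the fitted line) can be sketched generically; the least-squares fit and function names here are illustrative assumptions, not the authors' procedure or data:

```python
def fit_calibration(conc, signal):
    """Ordinary least-squares calibration line signal = slope*conc + intercept,
    fitted over the linear working range (e.g., standards spanning the
    50-1000 ug/L range mentioned in the abstract)."""
    n = len(conc)
    mean_x = sum(conc) / n
    mean_y = sum(signal) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal))
             / sum((x - mean_x) ** 2 for x in conc))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def quantify(sample_signal, slope, intercept):
    """Invert the calibration line to estimate a sample concentration."""
    return (sample_signal - intercept) / slope
```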

  1. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. By using state-of-the-art digital imaging technology, bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is an important benchmark for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and a Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to the segmented images to extract a vector representation of coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. In the cases of cloud cover and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental-scale changes in the Antarctic coast. Computing the area of
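    The segmentation stage builds on the standard fuzzy c-means update equations. The following is a minimal 1-D sketch of plain FCM only, under the usual weighting exponent m = 2; the dissertation additionally couples FCM with a GRF prior and Gaussian-mixture initialization, which are not reproduced here:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D pixel intensities x of shape (n,).
    Alternates the two classic updates: weighted cluster means from
    memberships raised to m, then memberships from inverse distances.
    Returns the cluster centers and the (c, n) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # fuzzy-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))  # inverse-distance memberships
        u /= u.sum(axis=0)                  # renormalize columns
    return centers, u
```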

  2. MG-Digger: An Automated Pipeline to Search for Giant Virus-Related Sequences in Metagenomes.

    PubMed

    Verneau, Jonathan; Levasseur, Anthony; Raoult, Didier; La Scola, Bernard; Colson, Philippe

    2016-01-01

    The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a 'dark matter.' We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈ 1 Terabase) were collected, processed, and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoeba, and 5 virophages were collected. The pipeline was generated by scripts written in the Python language and entitled MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate hundreds of metagenome sequences as best matching those of giant viruses. These sequences were most often found to be similar to phycodnavirus or mimivirus sequences, but included reads related to recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combined stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are effective in improving knowledge about the

  3. MG-Digger: An Automated Pipeline to Search for Giant Virus-Related Sequences in Metagenomes

    PubMed Central

    Verneau, Jonathan; Levasseur, Anthony; Raoult, Didier; La Scola, Bernard; Colson, Philippe

    2016-01-01

    The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a ‘dark matter.’ We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈ 1 Terabase) were collected, processed, and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoeba, and 5 virophages were collected. The pipeline was generated by scripts written in the Python language and entitled MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate hundreds of metagenome sequences as best matching those of giant viruses. These sequences were most often found to be similar to phycodnavirus or mimivirus sequences, but included reads related to recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combined stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are effective in improving knowledge about

  4. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, Bruce A.; McDowell, W. J.

    1990-01-01

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  5. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, B.A.; McDowell, W.J.

    1987-10-23

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  6. Method for extracting copper, silver and related metals

    SciTech Connect

    Moyer, B.A.; McDowell, W.J.

    1990-05-22

    This patent describes a process for selectively extracting precious metal such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  7. Extraction of a group-pair relation: problem-solving relation from web-board documents.

    PubMed

    Pechsiri, Chaveevan; Piriyakul, Rapepun

    2016-01-01

    This paper aims to extract a group-pair relation as a Problem-Solving relation, for example a DiseaseSymptom-Treatment relation or a CarProblem-Repair relation, between two event-explanation groups: a problem-concept group (a symptom/CarProblem-concept group) and a solving-concept group (a treatment-concept/repair-concept group), from hospital-web-board and car-repair-guru-web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), including its graphical representation, benefits non-professional readers by supporting basic problem-solving knowledge. The research addresses three problems: how to identify an EDU (an Elementary Discourse Unit, which is a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. Therefore, we apply word co-occurrence to identify a problem-concept EDU and a solving-concept EDU, and machine-learning techniques to determine a problem-concept EDU boundary and a solving-concept EDU boundary. We propose using k-means and Naïve Bayes to determine the Problem-Solving relation between the two event-explanation groups based on clustering features. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy. PMID:27540498
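    The relation-determination step uses Naïve Bayes. As an illustrative sketch only, here is a minimal multinomial Naïve Bayes classifier over bag-of-words features; the authors' actual clustering features and class set are not reproduced, and all names and the toy vocabulary below are assumptions:

```python
import math
from collections import Counter

class TinyNB:
    """Minimal multinomial Naive Bayes with add-one smoothing,
    distinguishing problem-concept from solving-concept word bags."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)                 # class frequencies
        self.counts = {c: Counter() for c in self.classes}
        for words, c in zip(docs, labels):
            self.counts[c].update(words)
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        return self

    def predict(self, words):
        def log_score(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return math.log(self.prior[c]) + sum(
                math.log((self.counts[c][w] + 1) / total) for w in words)
        return max(self.classes, key=log_score)
```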

  8. Effects of a psychophysiological system for adaptive automation on performance, workload, and the event-related potential P300 component

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J 3rd; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    The present study examined the effects of an electroencephalographic- (EEG-) based system for adaptive automation on tracking performance and workload. In addition, event-related potentials (ERPs) to a secondary task were derived to determine whether they would provide an additional degree of workload specificity. Participants were run in an adaptive automation condition, in which the system switched between manual and automatic task modes based on the value of each individual's own EEG engagement index; a yoked control condition; or another control group, in which task mode switches followed a random pattern. Adaptive automation improved performance and resulted in lower levels of workload. Further, the P300 component of the ERP paralleled the sensitivity to task demands of the performance and subjective measures across conditions. These results indicate that it is possible to improve performance with a psychophysiological adaptive automation system and that ERPs may provide an alternative means for distinguishing among levels of cognitive task demand in such systems. Actual or potential applications of this research include improved methods for assessing operator workload and performance.
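    The mode switching above is driven by each participant's EEG engagement index; in this line of adaptive-automation research (Pope et al.), the index is the ratio of beta band power to the sum of alpha and theta band power. A minimal sketch follows; the hysteresis thresholds and the switching rule in `switch_mode` are purely hypothetical illustrations, not the study's actual logic:

```python
def engagement_index(beta, alpha, theta):
    """EEG engagement index: beta / (alpha + theta), computed from
    band powers of the ongoing EEG (Pope et al. formulation)."""
    return beta / (alpha + theta)

def switch_mode(current_mode, index, low=0.4, high=0.7):
    """Hypothetical hysteresis rule for an adaptive system: hand the
    task back to the operator when engagement drops, automate when
    engagement is high. Thresholds are illustrative only."""
    if index < low:
        return "manual"        # re-engage the operator
    if index > high:
        return "automatic"
    return current_mode        # inside the band: keep current mode
```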

  9. Determination of Low Concentrations of Acetochlor in Water by Automated Solid-Phase Extraction and Gas Chromatography with Mass-Selective Detection

    USGS Publications Warehouse

    Lindley, C.E.; Stewart, J.T.; Sandstrom, M.W.

    1996-01-01

    A sensitive and reliable gas chromatographic/mass spectrometric (GC/MS) method for determining acetochlor in environmental water samples was developed. The method involves automated extraction of the herbicide from a filtered 1 L water sample through a C18 solid-phase extraction column, elution from the column with hexane-isopropyl alcohol (3 + 1), and concentration of the extract with nitrogen gas. The herbicide is quantitated by capillary-column GC/MS with selected-ion monitoring of 3 characteristic ions. The single-operator method detection limit for reagent water samples is 0.0015 μg/L. Mean recoveries ranged from about 92 to 115% for 3 water matrixes fortified at 0.05 and 0.5 μg/L. Average single-operator precision, over the course of 1 week, was better than 5%.

  10. On the Relation between Automated Essay Scoring and Modern Views of the Writing Construct

    ERIC Educational Resources Information Center

    Deane, Paul

    2013-01-01

    This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…

  11. Automated Feature Extraction and Hydrocode Modeling of Impact Related Structures on Mars: Preliminary Report

    NASA Astrophysics Data System (ADS)

    Plesko, C. S.; Asphaug, E.; Brumby, S. P.; Gisler, G. R.

    2003-07-01

    A systematic, combined modeling and observation effort to correlate Martian impact structures (craters and their regional aftermaths) with the impactors, impact processes, and target geologies responsible for them.

  12. Time-resolved Characterization of Particle Associated Polycyclic Aromatic Hydrocarbons using a newly-developed Sequential Spot Sampler with Automated Extraction and Analysis

    PubMed Central

    Lewis, Gregory S.; Spielman, Steven R.; Hering, Susanne V.

    2014-01-01

    A versatile and compact sampling system, the Sequential Spot Sampler (S3), has been developed for pre-concentrated, time-resolved, dry collection of fine and ultrafine particles. Using a temperature-moderated laminar flow water condensation method, ambient particles as small as 6 nm are deposited within a dry, 1-mm diameter spot. Sequential samples are collected on a multiwell plate. Chemical analyses are laboratory-based, but automated. The sample preparation, extraction and chemical analysis steps are all handled through a commercially-available, needle-based autosampler coupled to a liquid chromatography system. This automation is enabled by the small deposition area of the collection. The entire sample is extracted into a 50–100 μL volume of solvent, providing quantifiable samples with small collected air volumes. A pair of S3 units was deployed in Stockton (CA) from November 2011 to February 2012. PM2.5 samples were collected every 12 h and analyzed for polycyclic aromatic hydrocarbons (PAHs). In parallel, conventional filter samples were collected for 48 h and used to assess the new system’s performance. An automated sample preparation and extraction procedure was developed for samples collected using the S3. Collocated data from the two sequential spot samplers were highly correlated for all measured compounds, with a regression slope of 1.1 and r2 = 0.9 for all measured concentrations. S3/filter ratios for the mean concentration of each individual PAH vary between 0.82 and 1.33, with the larger variability observed for the semivolatile components. The ratio for total PAH concentrations was 1.08. Total PAH concentrations showed a temporal trend similar to that of ambient PM2.5 concentrations. Source apportionment analysis estimated a significant contribution of biomass burning to ambient PAH concentrations during winter. PMID:25574151

  13. Time-resolved characterization of particle associated polycyclic aromatic hydrocarbons using a newly-developed sequential spot sampler with automated extraction and analysis

    NASA Astrophysics Data System (ADS)

    Eiguren-Fernandez, Arantzazu; Lewis, Gregory S.; Spielman, Steven R.; Hering, Susanne V.

    2014-10-01

    A versatile and compact sampling system, the Sequential Spot Sampler (S3), has been developed for pre-concentrated, time-resolved, dry collection of fine and ultrafine particles. Using a temperature-moderated laminar flow water condensation method, ambient particles as small as 6 nm are deposited within a dry, 1-mm diameter spot. Sequential samples are collected on a multiwell plate. Chemical analyses are laboratory-based, but automated. The sample preparation, extraction and chemical analysis steps are all handled through a commercially-available, needle-based autosampler coupled to a liquid chromatography system. This automation is enabled by the small deposition area of the collection. The entire sample is extracted into a 50-100 μL volume of solvent, providing quantifiable samples with small collected air volumes. A pair of S3 units was deployed in Stockton (CA) from November 2011 to February 2012. PM2.5 samples were collected every 12 h and analyzed for polycyclic aromatic hydrocarbons (PAHs). In parallel, conventional filter samples were collected for 48 h and used to assess the new system's performance. An automated sample preparation and extraction procedure was developed for samples collected using the S3. Collocated data from the two sequential spot samplers were highly correlated for all measured compounds, with a regression slope of 1.1 and r2 = 0.9 for all measured concentrations. S3/filter ratios for the mean concentration of each individual PAH vary between 0.82 and 1.33, with the larger variability observed for the semivolatile components. The ratio for total PAH concentrations was 1.08. Total PAH concentrations showed a temporal trend similar to that of ambient PM2.5 concentrations. Source apportionment analysis estimated a significant contribution of biomass burning to ambient PAH concentrations during winter.

  14. Automated mini-column solid-phase extraction cleanup for high-throughput analysis of chemical contaminants in foods by low-pressure gas chromatography – tandem mass spectrometry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...

  15. Toward automated classification of consumers' cancer-related questions with a new taxonomy of expected answer types.

    PubMed

    McRoy, Susan; Jones, Sean; Kurmally, Adam

    2016-09-01

    This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements to classifier accuracy. The best performance was achieved using weighted classification and resampling methods, the latter yielding an F1 score of 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. PMID:25759063
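    The resampling idea, smoothing a skewed natural class distribution before training, can be sketched as simple random oversampling. This is an illustrative stand-in, not necessarily the paper's exact resampling scheme, and the class labels below are invented:

```python
import random

def oversample(examples, seed=0):
    """Random oversampling: duplicate minority-class examples until every
    class matches the majority-class count, flattening the natural class
    distribution a classifier would otherwise be trained on.
    `examples` is a list of (text, label) pairs."""
    rng = random.Random(seed)
    by_class = {}
    for text, label in examples:
        by_class.setdefault(label, []).append((text, label))
    target = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)                            # originals
        balanced.extend(rng.choices(items, k=target - len(items)))  # duplicates
    return balanced
```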

  16. Extraction of gene-disease relations from Medline using domain dictionaries and machine learning.

    PubMed

    Chun, Hong-Woo; Tsuruoka, Yoshimasa; Kim, Jin-Dong; Shiba, Rie; Nagata, Naoki; Hishiki, Teruyoshi; Tsujii, Jun'ichi

    2006-01-01

    We describe a system that extracts disease-gene relations from Medline. We constructed a dictionary for disease and gene names from six public databases and extracted relation candidates by dictionary matching. Since dictionary matching produces a large number of false positives, we developed a method of machine learning-based named entity recognition (NER) to filter out false recognitions of disease/gene names. We found that the performance of relation extraction is heavily dependent upon the performance of NER filtering and that the filtering improves the precision of relation extraction by 26.7% at the cost of a small reduction in recall. PMID:17094223
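    The filtering idea, dictionary matching to propose candidates and an NER verdict to discard false name recognitions, can be sketched as follows. The function and the `ner_ok` callable are placeholders standing in for the authors' dictionaries and trained NER model:

```python
def filtered_candidates(sentence, disease_dict, gene_dict, ner_ok):
    """Propose gene-disease relation candidates by dictionary matching,
    keeping only tokens the NER filter accepts as genuine entity names.
    `ner_ok` stands in for the machine-learning NER filter."""
    tokens = sentence.split()
    diseases = [t for t in tokens if t in disease_dict and ner_ok(t)]
    genes = [t for t in tokens if t in gene_dict and ner_ok(t)]
    return [(d, g) for d in diseases for g in genes]
```

    In the paper's terms, a permissive `ner_ok` maximizes recall while a strict one trades a little recall for a large precision gain.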

  17. Automated solid-phase extraction for the determination of polybrominated diphenyl ethers and polychlorinated biphenyls in serum--application on archived Norwegian samples from 1977 to 2003.

    PubMed

    Thomsen, Cathrine; Liane, Veronica Horpestad; Becher, Georg

    2007-02-01

    An analytical method comprising automated solid-phase extraction and determination by gas chromatography-mass spectrometry (single quadrupole) has been developed for 12 polybrominated diphenyl ethers (PBDEs), 26 polychlorinated biphenyls (PCBs), two organochlorine compounds (OCs) (hexachlorobenzene and octachlorostyrene) and two brominated phenols (pentabromophenol and tetrabromobisphenol-A (TBBP-A)). The analytes were extracted using a sorbent of polystyrene-divinylbenzene, and an additional clean-up was performed on a sulphuric acid-silica column to remove lipids. The method has been validated by spiking horse serum at five levels. The mean accuracy, given as recovery relative to internal standards, was 95%, 99%, 93% and 109% for the PBDEs, PCBs, OCs and brominated phenols, respectively. The mean repeatability, given as RSD, was 6.9%, 8.7%, 7.5% and 15%, respectively. Estimated limits of detection (S/N = 3) were in the range 0.2-1.8 pg/g serum for the PBDEs and phenols, and from 0.1 pg/g to 56 pg/g serum for the PCBs and OCs. The validated method has been used to investigate the levels of PBDEs and PCBs in 21 pooled serum samples from the general Norwegian population. In serum from men (age 40-50 years) the sum of seven PBDE congeners (IUPAC No. 28, 47, 99, 100, 153, 154 and 183) increased from 1977 (0.5 ng/g lipids) to 1998 (4.8 ng/g lipids). From 1999 to 2003 the concentration of PBDEs seems to have stabilised. On the other hand, the sum of five PCBs (IUPAC No. 101, 118, 138, 153 and 180) in these samples decreased steadily from 1977 (666 ng/g lipids) to 2003 (176 ng/g lipids). Tetrabromobisphenol-A and BDE-209 were detected in almost all samples, but no temporal trends similar to those seen for the PBDEs were observed for these compounds, which might be due to the short half-lives of these brominated flame retardants in humans. PMID:17023223

  18. Automated position control of a surface array relative to a liquid microjunction surface sampler

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James

    2007-11-13

    A system and method utilize an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables hands-free formation of the liquid microjunction used to sample solution composition from the surface and re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.

  19. A fully automated method for simultaneous determination of aflatoxins and ochratoxin A in dried fruits by pressurized liquid extraction and online solid-phase extraction cleanup coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca

    2015-04-01

    According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of aflatoxins (AFs) and ochratoxin A (OTA), mycotoxins of high toxicity and widespread occurrence, in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AF and OTA maximum levels imposed by EU regulation for dried fruit intended for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8%, n = 3) meet the performance criteria required by EU regulation for the determination of mycotoxin levels in foodstuffs. The main advantage of the proposed method is the full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results. PMID:25694147

  20. Automated age-related macular degeneration classification in OCT using unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Venhuizen, Freerk G.; van Ginneken, Bram; Bloemen, Bart; van Grinsven, Mark J. J. P.; Philipsen, Rick; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2015-03-01

    Age-related Macular Degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina and could ultimately lead to permanent vision loss. Optical Coherence Tomography (OCT) is becoming the standard imaging modality in the diagnosis of AMD and the assessment of its progression. However, the evaluation of the obtained volumetric scan is time-consuming and expensive, and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided into two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch occurrence histogram for every image, on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the Receiver Operating Characteristic curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
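    The patch-occurrence histogram that feeds the random forest can be sketched as follows; this is an illustrative bag-of-patches descriptor only, and the paper's patch size, clustering method, and distance metric may differ:

```python
import numpy as np

def patch_histogram(image, centers, patch=3):
    """Bag-of-patches descriptor: assign every patch x patch window of
    `image` to its nearest learned cluster center (rows of `centers`,
    each of length patch*patch) and count occurrences, yielding the
    normalized per-image histogram a classifier is trained on."""
    h, w = image.shape
    hist = np.zeros(len(centers))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            p = image[i:i + patch, j:j + patch].ravel()
            hist[np.argmin(((centers - p) ** 2).sum(axis=1))] += 1
    return hist / hist.sum()    # occurrence frequencies summing to 1
```

    In the full pipeline, the centers would come from an unsupervised clustering of training patches, and the resulting histograms would train the supervised classifier.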

  1. Automated flow-based anion-exchange method for high-throughput isolation and real-time monitoring of RuBisCO in plant extracts.

    PubMed

    Suárez, Ruth; Miró, Manuel; Cerdà, Víctor; Perdomo, Juan Alejandro; Galmés, Jeroni

    2011-06-15

    In this work, a miniaturized, completely enclosed multisyringe-flow system is proposed for high-throughput purification of RuBisCO from Triticum aestivum extracts. The automated method capitalizes on the uptake of the target protein at 4°C onto a Q-Sepharose Fast Flow strong anion-exchanger packed in a cylindrical microcolumn (105 × 4 mm), followed by a stepwise ionic-strength gradient elution (0-0.8 mol/L NaCl) to eliminate concomitant extract components and retrieve highly purified RuBisCO. The manifold is furnished downstream with a flow-through diode-array UV/vis spectrophotometer for real-time monitoring of the column effluent at the protein-specific wavelength of 280 nm to detect the elution of RuBisCO. Quantitation of RuBisCO and total soluble proteins in the eluate fractions was undertaken using polyacrylamide gel electrophoresis (PAGE) and the spectrophotometric Bradford assay, respectively. A comprehensive investigation was carried out of the effect of distinct concentration gradients on the isolation of RuBisCO and of the experimental conditions (namely, type of resin, column dimensions and mobile-phase flow rate) upon column capacity and analyte breakthrough. The assembled set-up was used to critically ascertain the efficiency of preliminary batchwise pre-treatments of crude plant extracts (viz., polyethylene glycol (PEG) precipitation, ammonium sulphate precipitation and sucrose gradient centrifugation) in terms of RuBisCO purification and absolute recovery prior to automated anion-exchange column separation. Under the optimum physical and chemical conditions, the flow-through column system is able to admit crude plant extracts and gives rise to RuBisCO purification yields better than 75%, which might be increased up to 96 ± 9% with a prior PEG fractionation followed by a sucrose gradient step. PMID:21641435

  2. Automated Retinal Image Analysis for Evaluation of Focal Hyperpigmentary Changes in Intermediate Age-Related Macular Degeneration

    PubMed Central

    Schmitz-Valckenberg, Steffen; Göbel, Arno P.; Saur, Stefan C.; Steinberg, Julia S.; Thiele, Sarah; Wojek, Christian; Russmann, Christoph; Holz, Frank G.; for the MODIAMD-Study Group

    2016-01-01

    Purpose To develop and evaluate a software tool for automated detection of focal hyperpigmentary changes (FHC) in eyes with intermediate age-related macular degeneration (AMD). Methods Color fundus (CFP) and autofluorescence (AF) photographs of 33 eyes with FHC of 28 AMD patients (mean age 71 years) from the prospective longitudinal natural history MODIAMD-study were included. Fully automated to semiautomated registration of baseline to corresponding follow-up images was evaluated. Following the manual circumscription of individual FHC (four different readings by two readers), a machine-learning algorithm was evaluated for automatic FHC detection. Results The overall pixel distance error for the semiautomated registration (CFP follow-up to CFP baseline: median 5.7; CFP to AF images from the same visit: median 6.5) was larger than that for the automated image registration (4.5 and 5.7; P < 0.001 and P < 0.001). The total number of manually circumscribed objects and the corresponding total size varied between 637 and 1163 objects and between 520,848 and 924,860 pixels, respectively. Performance of the learning algorithms showed a sensitivity of 96% at a specificity level of 98% using information from both CFP and AF images and defining small areas of FHC (“speckle appearance”) as “neutral.” Conclusions FHC as a high-risk feature for progression of AMD to late stages can be automatically assessed at different time points with similar sensitivity and specificity as compared to manual outlining. Upon further development of the research prototype, this approach may be useful both in natural history and interventional large-scale studies for a more refined classification and risk assessment of eyes with intermediate AMD. Translational Relevance Automated FHC detection opens the door for a more refined and detailed classification and risk assessment of eyes with intermediate AMD in both natural history and future interventional studies. PMID:26966639

  3. Automated extraction method for the center line of spinal canal and its application to the spinal curvature quantification in torso X-ray CT images

    NASA Astrophysics Data System (ADS)

    Hayashi, Tatsuro; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Miyamoto, Kei; Kobayashi, Tatsunori; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2010-03-01

    X-ray CT images have been widely used in clinical routine in recent years. CT images scanned by a modern CT scanner can show the details of various organs and tissues, which means various organs and tissues can be interpreted simultaneously on CT images. However, CT image interpretation requires a lot of time and energy. Therefore, support for interpreting CT images based on image-processing techniques is expected. The interpretation of spinal curvature is important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification scheme for the spinal curvature based on the center line of the spinal canal on CT images. The proposed scheme consists of four steps: (1) automated extraction of the skeletal region based on CT number thresholding; (2) automated extraction of the center line of the spinal canal; (3) generation of the median plane image of the spine, reformatted based on the spinal canal; and (4) quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence interval for lumbar lordosis: 0.81-0.99) between values obtained by the proposed (vector) method and the Cobb angle. Also, the proposed method provides reproducible results (inter- and intra-observer variability: within 2°). These experimental results suggest that the proposed method is effective for quantifying the spinal curvature on CT images.
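    Step (1) of the scheme, skeletal extraction by CT number thresholding, reduces to a comparison against a Hounsfield-unit cutoff. The sketch below uses a synthetic slice and an illustrative 200 HU bone threshold (not a value from the paper), with a per-slice centroid as a crude stand-in for the center-line step.

```python
import numpy as np

# Hypothetical CT slice in Hounsfield units (HU): air background,
# a soft-tissue block, and a bone-like core.
ct_slice = np.full((8, 8), -1000.0)
ct_slice[2:6, 2:6] = 60.0        # soft tissue
ct_slice[3:5, 3:5] = 400.0       # bone-like values

def extract_skeletal_mask(ct, threshold_hu=200.0):
    """Step (1): binary skeletal region by CT-number thresholding.
    The 200 HU cutoff is illustrative, not the paper's value."""
    return ct >= threshold_hu

mask = extract_skeletal_mask(ct_slice)

# Crude per-slice stand-in for step (2): centroid of the extracted region.
ys, xs = np.nonzero(mask)
center = (ys.mean(), xs.mean())
```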

  4. Determination of amlodipine in human plasma using automated online solid-phase extraction HPLC-tandem mass spectrometry: application to a bioequivalence study of Chinese volunteers.

    PubMed

    Shentu, Jianzhong; Fu, Lizhi; Zhou, Huili; Hu, Xing Jiang; Liu, Jian; Chen, Junchun; Wu, Guolan

    2012-11-01

    An automated method (XLC-MS/MS) that uses online solid-phase extraction coupled with HPLC-tandem mass spectrometry was reported here for the first time to quantify amlodipine in human plasma. Automated pre-purification of plasma was performed using 10 mm × 2 mm HySphere C8 EC-SE online solid-phase extraction cartridges. After being eluted from the cartridge, the analyte and the internal standard were separated by HPLC and detected by tandem mass spectrometry. Mass spectrometric detection was achieved in the multiple reaction monitoring mode using a quadrupole tandem mass spectrometer in the positive electrospray ionization mode. The XLC-MS/MS method was validated and yielded excellent specificity. The calibration curve ranged from 0.10 to 10.22 ng/mL, and both the intra- and inter-day precision and accuracy values were within 8%. This method proved to be less laborious and was faster per analysis (high-throughput) than offline sample preparation methods. This method has been successfully applied in clinical pharmacokinetic and bioequivalence analyses. PMID:22770846

  5. High-throughput method of dioxin analysis in aqueous samples using consecutive solid phase extraction steps with the new C18 Ultraflow™ pressurized liquid extraction and automated clean-up.

    PubMed

    Youn, Yeu-Young; Park, Deok Hie; Lee, Yeon Hwa; Lim, Young Hee; Cho, Hye Sung

    2015-01-01

    A high-throughput analytical method has been developed for the determination of the seventeen 2,3,7,8-substituted congeners of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in aqueous samples. A recently introduced octadecyl (C18) disk for semi-automated solid-phase extraction of PCDD/Fs in water samples with a high level of particulate material has been tested for the analysis of dioxins. This new type of C18 disk was specially designed for the analysis of hexane extractable material (HEM) but had never previously been reported for use in PCDD/F analysis. This kind of disk allows a higher filtration flow, and therefore the time of analysis is reduced. The solid-phase extraction technique is used to change samples from liquid to solid, so that pressurized liquid extraction (PLE) can be used in the pre-treatment. In order to achieve efficient purification, extracts from the PLE are purified using an automated Power-prep system with disposable silica, alumina, and carbon columns. Quantitative analyses of PCDD/Fs were performed by GC-HRMS using multi-ion detection (MID) mode. The method was successfully applied to the analysis of water samples from the wastewater treatment system of a vinyl chloride monomer plant. The entire procedure is in agreement with EPA 1613 recommendations regarding the blank control, MDLs (method detection limits), accuracy, and precision. The high-throughput method not only meets the requirements of international standards, but also shortens the required analysis time from 2 weeks to 3 days. PMID:25112208

  6. Three Experiments Examining the Use of Electroencephalogram,Event-Related Potentials, and Heart-Rate Variability for Real-Time Human-Centered Adaptive Automation Design

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Parasuraman, Raja; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    Adaptive automation represents an advanced form of human-centered automation design. The approach provides for real-time and model-based assessments of human-automation interaction, determines whether the human has entered into a hazardous state of awareness, and then modulates the task environment to keep the operator in the loop while maintaining an optimal state of task engagement and mental alertness. Because adaptive automation has not matured, numerous challenges remain, including establishing the criteria for determining when adaptive aiding and adaptive function allocation should take place. Human factors experts in the area have suggested a number of measures, including the use of psychophysiology. This NASA Technical Paper reports on three experiments that examined the psychophysiological measures of event-related potentials, electroencephalogram, and heart-rate variability for real-time adaptive automation. The results of the experiments confirm the efficacy of these measures for use in both a developmental and an operational role for adaptive automation design. The implications of these results and future directions for psychophysiology and human-centered automation design are discussed.

  7. Automated extraction of aorta and pulmonary artery in mediastinum from 3D chest x-ray CT images without contrast medium

    NASA Astrophysics Data System (ADS)

    Kitasaka, Takayuki; Mori, Kensaku; Hasegawa, Jun-ichi; Toriwaki, Jun-ichiro; Katada, Kazuhiro

    2002-05-01

    This paper proposes a method for automated extraction of the aorta and pulmonary artery (PA) in the mediastinum of the chest from uncontrasted chest X-ray CT images. The proposed method employs a model fitting technique that uses shape features of blood vessels for extraction. First, edge voxels are detected based on the standard deviation of CT values. A likelihood image, which shows the degree of likelihood of the medial axes of vessels, is calculated by applying the Euclidean distance transformation to non-edge voxels. Second, the medial axis of each vessel is obtained by fitting the model, referring to the likelihood image. Finally, the aorta and PA areas are recovered from the medial axes by executing the reverse Euclidean distance transformation. We applied the proposed method to seven cases of uncontrasted chest X-ray CT images and evaluated the results by calculating the coincidence index between the extracted regions and the manually traced regions. Experimental results showed that the extracted aorta and PA areas coincided with the manually input regions, with coincidence index values of 90% and 80-90%, respectively.
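    The likelihood image in the second step is a Euclidean distance map over non-edge voxels: voxels deep inside a vessel score highest, so the medial axis lies along the ridge of the map. A brute-force 2D sketch on a synthetic vessel cross-section (illustrative only; real implementations use fast distance-transform algorithms):

```python
import numpy as np

def euclidean_distance_transform(mask):
    """Brute force: distance from each True cell to the nearest False cell."""
    h, w = mask.shape
    bg = np.argwhere(~mask)
    dist = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                dist[i, j] = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1)).min()
    return dist

# Hypothetical 2D cross-section: a horizontal vessel interior (non-edge
# voxels) of width 5, bounded by edge rows above and below.
vessel = np.zeros((7, 9), dtype=bool)
vessel[1:6, :] = True
likelihood = euclidean_distance_transform(vessel)
medial_row = likelihood.sum(axis=1).argmax()   # ridge row ~ medial axis
```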

  8. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed appropriate resolution data have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S states by harvested cropland area. Implications and recommendations for systematic application to global coverage Landsat data are discussed.

  9. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To capture them in an explicit, computer-readable format, these relations were at first curated manually in databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. 
Analysis of distances between different mention types in the text shows that our choice
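    The skip-mention transformation mentioned in the abstract can be illustrated with a toy example: consecutive elements of each generated sequence lie a fixed number of mentions apart, so a first-order (linear-chain) model can still relate distant mention pairs. The mention names below are hypothetical, not taken from the paper's corpus.

```python
def skip_mention_sequences(mentions, skip):
    """Build (skip + 1) sequences whose consecutive elements lie `skip`
    mentions apart in the original order, so a first-order model sees
    distant mention pairs as immediate neighbours."""
    return [mentions[start::skip + 1] for start in range(skip + 1)]

mentions = ["SigK", "gerE", "spoIVA", "cotD", "sigF"]   # hypothetical mentions
zero_skip = skip_mention_sequences(mentions, skip=0)    # the plain sequence
one_skip = skip_mention_sequences(mentions, skip=1)     # pairs 2 positions apart
```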

  10. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
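    The planar-face segmentation at the core of such a workflow can be approximated generically with RANSAC plane fitting: repeatedly fit a plane to three random points and keep the plane with the most inliers. This is a stand-in sketch on synthetic points, not the authors' segmentation algorithm; the tolerance and iteration count are illustrative.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Keep the plane (through 3 random points) with the most inliers.
    A generic planar-segmentation stand-in; tol and iters are illustrative."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n /= norm
        inliers = np.abs((points - p0) @ n) < tol   # point-to-plane distance
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(42)
roof = np.column_stack([rng.random(100), rng.random(100), np.zeros(100)])  # flat face at z = 0
clutter = rng.random((20, 3)) + 2.0                                        # off-plane points
points = np.vstack([roof, clutter])
inliers = ransac_plane(points)
```

In a full pipeline, the inlier set would be removed and RANSAC repeated to peel off one planar face after another.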

  11. Simultaneous analysis of organochlorinated pesticides (OCPs) and polychlorinated biphenyls (PCBs) from marine samples using automated pressurized liquid extraction (PLE) and Power Prep™ clean-up.

    PubMed

    Helaleh, Murad I H; Al-Rashdan, Amal; Ibtisam, A

    2012-05-30

    An automated pressurized liquid extraction (PLE) method followed by Power Prep™ clean-up was developed for organochlorinated pesticide (OCP) and polychlorinated biphenyl (PCB) analysis in environmental marine samples of fish, squid, bivalves, shells, octopus and shrimp. OCPs and PCBs were simultaneously determined in a single chromatographic run using gas chromatography-mass spectrometry-negative chemical ionization (GC-MS-NCI). About 5 g of each biological marine sample was mixed with anhydrous sodium sulphate and placed in the extraction cell of the PLE system. PLE is controlled by means of a PC using DMS 6000 software. Purification of the extract was accomplished using automated Power Prep™ clean-up with a pre-packed disposable silica column (6 g) supplied by Fluid Management Systems (FMS). All OCPs and PCBs were eluted from the silica column using two types of solvent: 80 mL of hexane and a 50 mL mixture of hexane and dichloromethane (1:1). A wide variety of fish and shellfish were collected from the fish market and analyzed using this method. The total PCB concentrations were 2.53, 0.25, 0.24, 0.24, 0.17 and 1.38 ng g(-1) (w/w) for fish, squid, bivalves, shells, octopus and shrimp, respectively, and the corresponding total OCP concentrations were 30.47, 2.86, 0.92, 10.72, 5.13 and 18.39 ng g(-1) (w/w). Lipids were removed using an SX-3 Bio-Beads gel permeation chromatography (GPC) column. Analytical criteria such as recovery, reproducibility and repeatability were evaluated through a range of biological matrices. PMID:22608412

  12. Quantitative radiology: automated measurement of polyp volume in computed tomography colonography using Hessian matrix-based shape extraction and volume growing

    PubMed Central

    Epstein, Mark L.; Obara, Piotr R.; Chen, Yisong; Liu, Junchi; Zarshenas, Amin; Makkinejad, Nazanin; Dachman, Abraham H.

    2015-01-01

    Background Current measurement of the single longest dimension of a polyp is subjective and has variations among radiologists. Our purpose was to develop a computerized measurement of polyp volume in computed tomography colonography (CTC). Methods We developed a 3D automated scheme for measuring polyp volume at CTC. Our scheme consisted of segmentation of the colon wall to confine polyp segmentation to the colon wall, extraction of a highly polyp-like seed region based on the Hessian matrix, a 3D volume growing technique under the minimum surface expansion criterion for segmentation of polyps, and sub-voxel refinement and surface smoothing for obtaining a smooth polyp surface. Our database consisted of 30 polyp views (15 polyps) in CTC scans from 13 patients. Each patient was scanned in the supine and prone positions. Polyp sizes measured in optical colonoscopy (OC) ranged from 6 to 18 mm with a mean of 10 mm. A radiologist outlined polyps in each slice and calculated volumes by summation of volumes in each slice. The measurement study was repeated 3 times at least 1 week apart for minimizing a memory effect bias. We used the mean volume of the three studies as “gold standard”. Results Our measurement scheme yielded a mean polyp volume of 0.38 cc (range, 0.15-1.24 cc), whereas the mean “gold standard” manual volume was 0.40 cc (range, 0.15-1.08 cc). The “gold-standard” manual and computer volumetry reached excellent agreement (intra-class correlation coefficient = 0.80), with no statistically significant difference [P(F≤f) = 0.42]. Conclusions We developed an automated scheme for measuring polyp volume at CTC based on Hessian matrix-based shape extraction and volume growing. Polyp volumes obtained by our automated scheme agreed excellently with “gold standard” manual volumes. Our fully automated scheme can efficiently provide accurate polyp volumes for radiologists; thus, it would help radiologists improve the accuracy and efficiency of polyp volume
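    The volume-growing idea can be sketched as a seeded 3D region growing followed by voxel counting. The intensity-window criterion below is a simplified stand-in for the paper's minimum-surface-expansion criterion, and the voxel size is hypothetical.

```python
import numpy as np
from collections import deque

def volume_grow(vol, seed, low, high):
    """6-connected region growing from `seed`, accepting voxels whose
    intensity lies in [low, high]. A simplified stand-in for the paper's
    minimum-surface-expansion criterion."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, vol.shape)) \
               and not grown[n] and low <= vol[n] <= high:
                grown[n] = True
                queue.append(n)
    return grown

vol = np.zeros((10, 10, 10))
vol[3:6, 3:6, 3:6] = 100.0                  # a 3x3x3 "polyp" of 27 voxels
mask = volume_grow(vol, seed=(4, 4, 4), low=50.0, high=150.0)
voxel_cc = 0.05 ** 3                        # hypothetical 0.5 mm isotropic voxels, in cc
polyp_volume_cc = mask.sum() * voxel_cc     # volume = voxel count x voxel size
```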

  13. Automated Determination of Publications Related to Adverse Drug Reactions in PubMed

    PubMed Central

    Adams, Hayden; Friedman, Carol; Finkelstein, Joseph

    2015-01-01

    Timely dissemination of up-to-date information concerning adverse drug reactions (ADRs) at the point of care can significantly improve medication safety and prevent ADRs. Automated methods for finding relevant articles in MEDLINE which discuss ADRs for specific medications can facilitate decision making at the point of care. Previous work has focused on other types of clinical queries and on retrieval for specific ADRs or drug-ADR pairs, but little work has been published on finding ADR articles for a specific medication. We have developed a method to generate a PubMed query based on MeSH terms, supplementary concepts, and textual terms for a particular medication. Evaluation was performed on a limited sample, resulting in a sensitivity of 90% and a precision of 93%. The results demonstrated that this method is highly effective. Future work will integrate this method within an interface aimed at facilitating access to ADR information for specified drugs at the point of care. PMID:26306227
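    Assembling such a drug-specific query amounts to combining MeSH and free-text clauses with boolean operators. The sketch below uses real PubMed field tags but illustrative subheadings and text terms; it is not the authors' exact query template, and "metformin" is just an example drug.

```python
def build_adr_query(drug, subheadings=("chemically induced", "adverse effects")):
    """Assemble a PubMed-style boolean query for ADR articles about `drug`.
    The subheadings and free-text terms are illustrative choices."""
    mesh_part = f'("{drug}"[MeSH Terms] OR "{drug}"[Title/Abstract])'
    sub_part = " OR ".join(f'"{s}"[Subheading]' for s in subheadings)
    text_part = ('"adverse drug reaction"[Title/Abstract]'
                 ' OR "side effect"[Title/Abstract]')
    return f'{mesh_part} AND ({sub_part} OR {text_part})'

query = build_adr_query("metformin")
```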

  14. Automation and robotics and related technology issues for Space Station customer servicing

    NASA Technical Reports Server (NTRS)

    Cline, Helmut P.

    1987-01-01

    Several flight servicing support elements are discussed within the context of the Space Station. Particular attention is given to the servicing facility, the mobile servicing center, and the flight telerobotic servicer (FTS). The role that automation and robotics can play in the design and operation of each of these elements is discussed. It is noted that the FTS, which is currently being developed by NASA, will evolve to increasing levels of autonomy to allow for the virtual elimination of routine EVA. Some of the features of the FTS will probably be: dual manipulator arms having reach and dexterity roughly equivalent to that of an EVA-suited astronaut, force reflection capability allowing efficient teleoperation, and capability of operating from a variety of support systems.

  15. Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention

    NASA Astrophysics Data System (ADS)

    Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo

    Approaches for extracting related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related; however, they may not be related at all, sharing neither common meaning nor similar semantics. We address this problem by considering the page designer’s intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually make correlative hyperlinks appear in a close zone on the browser. We developed a browser-based crawler to collect “geographically” nearby hyperlinks; then, by clustering these hyperlinks based on their pixel coordinates, we extract related words which well reflect the designer’s intention. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.
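    Clustering hyperlinks by their rendered pixel coordinates can be sketched with a simple distance-threshold grouping. The coordinates, anchor texts, and 40-pixel gap below are made up for illustration; the paper's own clustering procedure may differ.

```python
def cluster_links(links, max_gap=40.0):
    """Greedy single-linkage grouping: a link joins a cluster if it lies
    within `max_gap` pixels of any member. Anchor texts in the same
    on-screen zone end up in the same cluster."""
    clusters = []
    for text, x, y in links:
        for cl in clusters:
            if any((x - cx) ** 2 + (y - cy) ** 2 <= max_gap ** 2
                   for _, cx, cy in cl):
                cl.append((text, x, y))
                break
        else:
            clusters.append([(text, x, y)])
    return [[t for t, _, _ in cl] for cl in clusters]

# Hypothetical (anchor text, x, y) triples from a rendered page.
links = [("python", 100, 200), ("java", 110, 230),      # navigation menu zone
         ("contact", 500, 900), ("about", 505, 930)]    # footer zone
groups = cluster_links(links)
```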

  16. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set.

    PubMed

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    The information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is hard to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has changed to hybrid approaches showing better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797
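    The classification stage can be illustrated with a tiny Naïve Bayes classifier over a hybrid feature set that mixes lexical tokens with a UMLS-style verb-rank feature. All features, labels, and training pairs below are invented for illustration; this is not the paper's feature set, ranking algorithm, or corpus.

```python
import math
from collections import Counter, defaultdict

# Toy training pairs: (feature list, relation label). Features mix lexical
# tokens with a hypothetical verb-rank feature, mimicking a hybrid set.
train = [
    (["inhibits", "verb_rank=high"], "treats"),
    (["suppresses", "verb_rank=high"], "treats"),
    (["causes", "verb_rank=low"], "causes"),
    (["induces", "verb_rank=low"], "causes"),
]

def fit_nb(data):
    """Collect label priors and per-label feature counts."""
    label_counts = Counter(label for _, label in data)
    feat_counts = defaultdict(Counter)
    vocab = set()
    for feats, label in data:
        feat_counts[label].update(feats)
        vocab.update(feats)
    return label_counts, feat_counts, vocab, len(data)

def predict(model, feats):
    """Multinomial Naive Bayes with add-one smoothing, in log space."""
    label_counts, feat_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for label, lc in label_counts.items():
        lp = math.log(lc / n)
        total = sum(feat_counts[label].values())
        for f in feats:
            lp += math.log((feat_counts[label][f] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit_nb(train)
label = predict(model, ["suppresses", "verb_rank=high"])
```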

  18. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that resulted in over 400,000 people being unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed to develop an automated method to identify subsurface tile locations from high-resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km² study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average

  19. Discovery of Predicate-Oriented Relations among Named Entities Extracted from Thai Texts

    NASA Astrophysics Data System (ADS)

    Tongtep, Nattapong; Theeramunkong, Thanaruk

    Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai-specific characteristics, including no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word orders. Unlike most previous works, which focused on NE relations of specific actions, such as work_for, live_in, located_in, and kill, this paper proposes more general types of NE relations, called predicate-oriented relations (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e. punctuation marks (such as token spaces), entity types and the number of entities, and then apply five alternative commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves F-measures of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relation (action-location, location-action, action-person and person-action) in crime-related news documents, using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class imbalance on the performance of relation extraction are explored.

  20. Development of an Automated Column Solid-Phase Extraction Cleanup of QuEChERS Extracts, Using a Zirconia-Based Sorbent, for Pesticide Residue Analyses by LC-MS/MS.

    PubMed

    Morris, Bruce D; Schriner, Richard B

    2015-06-01

    A new, automated, high-throughput, mini-column solid-phase extraction (c-SPE) cleanup method for QuEChERS extracts was developed, using a robotic X-Y-Z instrument autosampler, for analysis of pesticide residues in fruits and vegetables by LC-MS/MS. Removal of avocado matrix and recoveries of 263 pesticides and metabolites were studied, using various stationary phase mixtures, including zirconia-based sorbents, and elution with acetonitrile. These experiments allowed selection of a sorbent mixture consisting of zirconia, C18, and carbon-coated silica that effectively retained avocado matrix but also retained 53 pesticides with <70% recoveries. Addition of MeOH to the elution solvent improved pesticide recoveries from zirconia, as did citrate ions in CEN QuEChERS extracts. Finally, formate buffer in acetonitrile/MeOH (1:1) was required to give >70% recoveries of all 263 pesticides. Analysis of avocado extracts by LC-Q-Orbitrap-MS showed that the method developed removed >90% of di- and triacylglycerols. The method was validated for 269 pesticides (including homologues and metabolites) in avocado and citrus. Spike recoveries were within 70-120% and 20% RSD for 243 of these analytes in avocado and 254 in citrus, when calibrated against solvent-only standards, indicating effective matrix removal and minimal electrospray ionization suppression. PMID:25702899

  1. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. PMID:26994179

  2. Temporal Relation Extraction in Outcome Variances of Clinical Pathways.

    PubMed

    Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio

    2015-01-01

    Recently, the clinical pathway has progressed with digitalization and the analysis of activity. There are many previous studies on clinical pathways, but few feed directly into medical practice. We constructed a mind-map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization. PMID:26262376

  3. CD-REST: a system for extracting chemical-induced disease relation in literature.

    PubMed

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. In 2015, BioCreative V organized a Chemical Disease Relation (CDR) Track on chemical-induced disease relation extraction from biomedical literature. We participated in all subtasks of this challenge. In this article, we present our participating system, the Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations from biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our machine learning-based approaches for automatic extraction of chemical-induced disease relations from biomedical literature. The CD-REST system provides web services via HTTP POST requests. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html. PMID:27016700
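    The second component, classifying co-occurring chemical-disease candidate pairs with a supervised model, can be sketched as follows. The toy sentence, the two features, and the hand-set linear weights (a stand-in for the trained SVM) are illustrative assumptions, not the CD-REST implementation.

    ```python
    # Hedged sketch of the candidate-pair classification step: chemical and
    # disease mentions co-occurring in a sentence become candidate pairs, each
    # scored by a linear model. The paper uses SVMs; this toy linear scorer
    # with invented weights merely stands in for illustration.

    def candidate_pairs(sentence):
        """All (chemical, disease) mention pairs co-occurring in one sentence."""
        chems = [m for m, t in sentence if t == "CHEMICAL"]
        dis = [m for m, t in sentence if t == "DISEASE"]
        return [(c, d) for c in chems for d in dis]

    def features(sentence, pair):
        """Two toy features: token distance, and a causal cue word between."""
        tokens = [m for m, _ in sentence]
        i, j = tokens.index(pair[0]), tokens.index(pair[1])
        between = tokens[min(i, j) + 1 : max(i, j)]
        return {
            "dist": abs(i - j),
            "induce_between": int(any(w in ("induced", "caused") for w in between)),
        }

    def score(weights, feats):
        return sum(weights[k] * v for k, v in feats.items())

    # sentence as (token, entity-tag) pairs; "O" marks non-entity tokens
    sent = [("naloxone", "CHEMICAL"), ("induced", "O"), ("hypertension", "DISEASE")]
    w = {"dist": -0.1, "induce_between": 2.0}   # invented weights
    preds = {p: score(w, features(sent, p)) > 0 for p in candidate_pairs(sent)}
    ```
    
    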

  4. Extraction of Children's Friendship Relation from Activity Level

    NASA Astrophysics Data System (ADS)

    Kono, Aki; Shintani, Kimio; Katsuki, Takuya; Kihara, Shin'ya; Ueda, Mari; Kaneda, Shigeo; Haga, Hirohide

    Children learn to fit into society through living in a group, and this learning is greatly influenced by their friend relations. Although preschool teachers need to observe children to assist the growth of their social skills and support the development of each child's personality, only experienced teachers can watch over children while providing high-quality guidance. To address this problem, this paper proposes a mathematical, objective method that assists teachers with observation. It uses numerical activity-level data recorded by pedometers, from which we build a tree diagram called a dendrogram based on hierarchical clustering. We also calculate the ``breadth'' and ``depth'' of children's friend relations by using more than one dendrogram. We recorded children's activity levels in a kindergarten for two months and evaluated the proposed method; the results usually coincide with teachers' remarks about the children.
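    The dendrogram construction described above rests on agglomerative hierarchical clustering. A minimal single-linkage sketch on toy pedometer counts follows; the names, step counts, and choice of single linkage are invented for illustration and are not from the paper.

    ```python
    # Hedged sketch: agglomerative (hierarchical) clustering of children's
    # daily pedometer counts, the structure a dendrogram visualizes.

    def single_linkage(clusters, activity):
        """Greedily merge the two closest clusters until one remains;
        returns the merge history (the information a dendrogram draws)."""
        def gap(a, b):   # single linkage: distance of the closest pair of members
            return min(abs(activity[x] - activity[y]) for x in a for y in b)
        history = []
        while len(clusters) > 1:
            i, j = min(
                ((i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))),
                key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]),
            )
            merged = clusters[i] | clusters[j]
            history.append(frozenset(merged))
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return history

    # invented daily step counts for four children
    activity = {"Aki": 9800, "Mari": 9650, "Shin": 6200, "Taku": 6400}
    history = single_linkage([{n} for n in activity], activity)
    ```

    The merge order (similar children join first) is exactly what a dendrogram plots against merge distance.
    
    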

  5. Relative contribution of restorative treatment to tooth extraction in a teaching institution.

    PubMed

    Alomari, Q D; Khalaf, M E; Al-Shawaf, N M

    2013-06-01

    Teeth can be extracted due to multiple factors. The aim of this retrospective cross-sectional study was to identify the relative contribution of restorative treatments to tooth loss. The study reviewed records of 826 patients (1102 teeth). Patients' gender, age and education were obtained. In addition to the main reason for extraction (caries, periodontal disease, pre-prosthetic extraction, restorative failure and remaining root), the following information was collected about each extracted tooth: type, the status of caries if any (primary or secondary), pulpal status (normal or reversible pulpitis, irreversible pulpitis, necrotic or root canal treated) and type and size of restoration, if present. Following data collection, descriptive analysis was performed. A log-linear model was used to examine the association between restorative treatment and tooth loss and between reasons for tooth loss and type of tooth. Lower molars followed by upper molars were the most commonly extracted teeth. Teeth with no restorations or with crowns were less likely to be extracted (P < 0·001). Lower and upper molars and lower premolars were more likely to be extracted due to restorative failure, while lower anterior teeth were more likely to be extracted due to periodontal disease (P < 0·05). Twenty-two per cent of the extractions were due to restorative failure, and at least 65·9% of these teeth had secondary caries. Gender, age and educational level were factors that affected tooth loss. In conclusion, teeth receiving multiple restorative therapies were more likely to be extracted. PMID:23600993

  6. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes.

    PubMed

    Yamagishi, Junya; Sato, Yukuto; Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no "gold standard" for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study. PMID:27104353

  7. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes

    PubMed Central

    Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no “gold standard” for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study. PMID:27104353

  8. Detection of Pharmacovigilance-Related Adverse Events Using Electronic Health Records and Automated Methods

    PubMed Central

    Haerian, K; Varn, D; Vaidya, S; Ena, L; Chase, HS; Friedman, C

    2013-01-01

    Electronic health records (EHRs) are an important source of data for detection of adverse drug reactions (ADRs). However, adverse events are frequently due not to medications but to the patients' underlying conditions. Mining EHR data to detect ADRs must therefore account for confounders. We developed an automated method using natural-language processing (NLP) and a knowledge source to differentiate cases in which the patient's disease, rather than a drug, is responsible for the event. Our method was applied to 199,920 hospitalization records, concentrating on two serious ADRs: rhabdomyolysis (n = 687) and agranulocytosis (n = 772). Our method automatically identified 75% of the cases as those with disease etiology. The sensitivity and specificity were 93.8% (confidence interval: 88.9-96.7%) and 91.8% (confidence interval: 84.0-96.2%), respectively. The method resulted in considerable savings of time: for every 1 h spent in development, at least 20 h of manual review were saved. Review of the remaining 25% of the cases therefore became more feasible, allowing us to identify the medications that had caused the ADRs. PMID:22713699
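    For reference, the sensitivity and specificity quoted above follow from a standard confusion-matrix computation against the manual review. The counts below are invented for illustration and are not the study's data.

    ```python
    # Hedged sketch: sensitivity and specificity from a 2x2 confusion matrix,
    # as used to evaluate the NLP filter against manual review.
    # tp/fn/tn/fp values here are toy numbers, not the paper's counts.

    def sensitivity_specificity(tp, fn, tn, fp):
        """sensitivity = tp/(tp+fn); specificity = tn/(tn+fp)."""
        return tp / (tp + fn), tn / (tn + fp)

    sens, spec = sensitivity_specificity(tp=45, fn=3, tn=45, fp=4)
    ```
    
    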

  9. A Framework for the Relative and Absolute Performance Evaluation of Automated Spectroscopy Systems

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Heimberg, Peter; Heimberg, Jennifer; Feuerbach, Robert; McQuarrie, Allan; Noonan, William; Mattson, John

    2009-12-01

    The development of high-speed, high-performance gamma-ray spectroscopy algorithms is critical to the success of many automated threat detection systems. In response to this need a proliferation of such algorithms has taken place. With this proliferation comes the necessary and non-trivial task of validation. There is (and always will be) insufficient experimental data to determine performance of spectroscopy algorithms over the relevant factor space at any reasonable precision. In the case of gamma-ray spectroscopy, there are hundreds of radioisotopes of interest, which may come in arbitrary admixtures, there are many materials of unknown quantity, which may be found in the intervening space between the source and the detection system, and there are also irregular variations in the detector systems themselves. All of these factors and more should be explored to determine algorithm/system performance. This paper describes a statistical framework for the performance estimation and comparison of gamma-ray spectroscopy algorithms. The framework relies heavily on data of increasing levels of artificiality to sufficiently cover the factor space. At each level rigorous statistical methods are employed to validate performance estimates.

  10. A Framework for the Relative and Absolute Performance Evaluation of Automated Spectroscopy Systems

    SciTech Connect

    Portnoy, David; Heimberg, Peter; Heimberg, Jennifer; Feuerbach, Robert; McQuarrie, Allan; Noonan, William; Mattson, John

    2009-12-02

    The development of high-speed, high-performance gamma-ray spectroscopy algorithms is critical to the success of many automated threat detection systems. In response to this need a proliferation of such algorithms has taken place. With this proliferation comes the necessary and non-trivial task of validation. There is (and always will be) insufficient experimental data to determine performance of spectroscopy algorithms over the relevant factor space at any reasonable precision. In the case of gamma-ray spectroscopy, there are hundreds of radioisotopes of interest, which may come in arbitrary admixtures, there are many materials of unknown quantity, which may be found in the intervening space between the source and the detection system, and there are also irregular variations in the detector systems themselves. All of these factors and more should be explored to determine algorithm/system performance. This paper describes a statistical framework for the performance estimation and comparison of gamma-ray spectroscopy algorithms. The framework relies heavily on data of increasing levels of artificiality to sufficiently cover the factor space. At each level rigorous statistical methods are employed to validate performance estimates.

  11. An unsupervised text mining method for relation extraction from biomedical literature.

    PubMed

    Quan, Changqin; Wang, Meng; Ren, Fuji

    2014-01-01

    The wealth of interaction information provided in biomedical articles has motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing for biomedical relation extraction. The pattern clustering algorithm is based on a polynomial kernel method and identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised one by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods, which are rule-based, SVM-based, and kernel-based, respectively. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than a co-occurrence-based method. PMID:25036529
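    The co-occurrence baseline that the proposed methods are compared against can be sketched directly: count how often two entities appear in the same sentence and rank the pairs. The toy corpus and entity lists below are illustrative assumptions, not the paper's data.

    ```python
    # Hedged sketch of a sentence-level co-occurrence baseline for
    # gene-disease association extraction (the baseline the paper outperforms).

    from collections import Counter
    from itertools import product

    def cooccurrence(sentences, genes, diseases):
        """Count sentence-level co-mentions of each (gene, disease) pair."""
        counts = Counter()
        for sent in sentences:
            toks = set(sent.lower().split())
            for g, d in product(genes, diseases):
                if g in toks and d in toks:
                    counts[(g, d)] += 1
        return counts

    # invented toy corpus
    corpus = [
        "TPH2 variants were associated with suicide in two cohorts",
        "no link between COMT and suicide was observed",
        "TPH2 expression differed in suicide completers",
    ]
    counts = cooccurrence(corpus, ["tph2", "comt"], ["suicide"])
    ```

    As the abstract notes, such counting conflates negated or incidental co-mentions with true associations, which is why parsing-based methods score higher.
    
    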

  12. A METHOD FOR AUTOMATED ANALYSIS OF 10 ML WATER SAMPLES CONTAINING ACIDIC, BASIC, AND NEUTRAL SEMIVOLATILE COMPOUNDS LISTED IN USEPA METHOD 8270 BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    EPA Science Inventory

    Data is presented showing the progress made towards the development of a new automated system combining solid phase extraction (SPE) with gas chromatography/mass spectrometry for the single run analysis of water samples containing a broad range of acid, base and neutral compounds...

  13. Recommendations relative to the scientific missions of a Mars Automated Roving Vehicle (MARV)

    NASA Technical Reports Server (NTRS)

    Spencer, R. L. (Editor)

    1973-01-01

    Scientific objectives of the MARV mission are outlined and specific science systems requirements and experimental payloads defined. All aspects of the Martian surface relative to biotic and geologic elements and those relating to geophysical and geochemical properties are explored.

  14. Is the use of Gunnera perpensa extracts in endometritis related to antibacterial activity?

    PubMed

    McGaw, L J; Gehring, R; Katsoulis, L; Eloff, J N

    2005-06-01

    Rhizome extracts of Gunnera perpensa are used in traditional remedies in South Africa to treat endometritis in both humans and animals. An investigation was undertaken to determine whether this plant possesses antibacterial activity, which might explain its efficacy. Gunnera perpensa rhizome extracts were prepared serially with solvents of increasing polarity and tested for antibacterial activity. Test bacteria included the Gram-positive Enterococcus faecalis and Staphylococcus aureus and the Gram-negative Escherichia coli and Pseudomonas aeruginosa. Most of the extracts showed moderate to weak antibacterial activity, with the best minimal inhibitory concentration (MIC) value, 2.61 mg ml(-1), shown by the acetone extract against S. aureus. The extracts were also submitted to the brine shrimp assay to detect possible toxic or pharmacological effects. All the extracts were lethal to the brine shrimp larvae at a concentration of 5 mg ml(-1). The acetone extract was extremely toxic at 1 mg ml(-1), with some toxicity evident at 0.1 mg ml(-1). The remainder of the extracts generally displayed little activity at concentrations lower than 5 mg ml(-1). In summary, the results indicate that although the extracts demonstrated a level of pharmacological activity, the relatively weak antibacterial activity is unlikely to justify the use of G. perpensa rhizomes in the traditional treatment of endometritis. Rather, the slightly antibacterial nature of the rhizomes may contribute an additive effect, along with their known uterotonic activity, to the overall efficacy of the preparation. PMID:16137130

  15. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images

    PubMed Central

    Srinivasan, Pratul P.; Kim, Leo A.; Mettu, Priyatham S.; Cousins, Scott W.; Comer, Grant M.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases. PMID:25360373
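    The descriptor underlying the classifier above is a histogram of oriented gradients (HOG). A toy single-scale version on a small grayscale grid might look like the following; real pipelines use multiscale blocks, cell/block normalization, and image libraries, so this is an illustrative assumption rather than the authors' implementation.

    ```python
    # Hedged sketch: a whole-image histogram of gradient orientations,
    # the core of a HOG-style descriptor fed to an SVM classifier.

    import math

    def hog_descriptor(img, bins=4):
        """Histogram of unsigned gradient orientations over the image
        interior, weighted by gradient magnitude and normalized to unit sum."""
        h = [0.0] * bins
        rows, cols = len(img), len(img[0])
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                gx = img[y][x + 1] - img[y][x - 1]   # central differences
                gy = img[y + 1][x] - img[y - 1][x]
                mag = math.hypot(gx, gy)
                if mag == 0:
                    continue
                ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
                h[min(int(ang / math.pi * bins), bins - 1)] += mag
        total = sum(h) or 1.0
        return [v / total for v in h]

    # a vertical edge produces purely horizontal gradients (first bin)
    img = [[0, 0, 9, 9]] * 4
    desc = hog_descriptor(img)
    ```
    
    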

  16. Automated solid-phase extraction and liquid chromatography-electrospray ionization-mass spectrometry for the determination of flunitrazepam and its metabolites in human urine and plasma samples.

    PubMed

    Jourdil, N; Bessard, J; Vincent, F; Eysseric, H; Bessard, G

    2003-05-25

    A sensitive and specific method using reversed-phase liquid chromatography coupled with electrospray ionization-mass spectrometry (LC-ESI-MS) has been developed for the quantitative determination of flunitrazepam (F) and its metabolites 7-aminoflunitrazepam (7-AF), N-desmethylflunitrazepam (N-DMF) and 3-hydroxyflunitrazepam (3-OHF) in biological fluids. After the addition of deuterium labelled standards of F, 7-AF and N-DMF, the drugs were isolated from urine or plasma by automated solid-phase extraction, then chromatographed in an isocratic elution mode with a salt-free eluent. The quantification was performed using selected ion monitoring of protonated molecular ions (M+H(+)). Experiments were carried out to improve the extraction recovery (81-100%) and the sensitivity (limit of detection 0.025 ng/ml for F and 7-AF, 0.040 ng/ml for N-DMF and 0.200 ng/ml for 3-OHF). The method was applied to the determination of F and metabolites in drug addicts including withdrawal urine samples and in one date-rape plasma and urine sample. PMID:12705961

  17. Comparison of Two Commercial Automated Nucleic Acid Extraction and Integrated Quantitation Real-Time PCR Platforms for the Detection of Cytomegalovirus in Plasma

    PubMed Central

    Tsai, Huey-Pin; Tsai, You-Yuan; Lin, I-Ting; Kuo, Pin-Hwa; Chen, Tsai-Yun; Chang, Kung-Chao; Wang, Jen-Ren

    2016-01-01

    Quantitation of cytomegalovirus (CMV) viral load in transplant patients has become standard practice for monitoring the response to antiviral therapy. The cut-off values of CMV viral load assays for preemptive therapy differ due to the various assay designs employed. To establish a sensitive and reliable diagnostic assay for preemptive therapy of CMV infection, two commercial automated platforms, the Abbott m2000sp extraction system integrated with the Abbott RealTime PCR system (m2000rt) and the Roche COBAS AmpliPrep extraction system integrated with the COBAS Taqman (CAP/CTM), were evaluated using WHO international CMV standards and 110 plasma specimens from transplant patients. The performance characteristics, correlation, and workflow of the two platforms were investigated. The Abbott RealTime assay correlated well with the Roche CAP/CTM assay (R2 = 0.9379, P<0.01). The Abbott RealTime assay exhibited higher sensitivity for the detection of CMV viral load, and viral load values measured with the Abbott RealTime assay were on average 0.76 log10 IU/mL higher than those measured with the Roche CAP/CTM assay (P<0.0001). In a workflow analysis on a small batch size, the Roche CAP/CTM platform had a shorter hands-on time than the Abbott RealTime platform. In conclusion, these two assays can provide reliable data for different purposes in a clinical virology laboratory setting. PMID:27494707

  18. Comparison of Two Automated Solid Phase Extractions for the Detection of Ten Fentanyl Analogs and Metabolites in Human Urine Using Liquid Chromatography Tandem Mass Spectrometry

    PubMed Central

    Shaner, Rebecca L.; Kaplan, Pearl; Hamelin, Elizabeth I.; Bragg, William; Johnson, Rudolph C.

    2016-01-01

    Two types of automated solid phase extraction (SPE) were assessed for the determination of human exposure to fentanyls in urine. High sensitivity is required to detect these compounds following exposure because of the low dose required for therapeutic effect and the rapid clearance from the body for these compounds. To achieve this sensitivity, two acceptable methods for the detection of human exposure to seven fentanyl analogs and three metabolites were developed using either off-line 96-well plate SPE or on-line SPE. Each system offers different advantages: off-line 96-well plate SPE allows for high throughput analysis of many samples, which is needed for large sample numbers, while on-line SPE removes almost all analyst manipulation of the samples, minimizing the analyst time needed for sample preparation. Both sample preparations were coupled with reversed phase liquid chromatography and isotope dilution tandem mass spectrometry (LC-MS/MS) for analyte detection. For both methods, the resulting precision was within 15%, the accuracy within 25%, and the sensitivity was comparable with the limits of detection ranging from 0.002 to 0.041 ng/mL. Additionally, matrix effects were substantially decreased from previous reports for both extraction protocols. The results of this comparison showed that both methods were acceptable for the detection of exposures to fentanyl analogs and metabolites in urine. PMID:24893271

  19. Regenerable immuno-biochip for screening ochratoxin A in green coffee extract using an automated microarray chip reader with chemiluminescence detection.

    PubMed

    Sauceda-Friebe, Jimena C; Karsunke, Xaver Y Z; Vazac, Susanna; Biselli, Scarlett; Niessner, Reinhard; Knopp, Dietmar

    2011-03-18

    Ochratoxin A (OTA) can contaminate foodstuffs in the ppb to ppm range and, once formed, is difficult to remove. Because of its toxicity and potential risks to human health, the need exists for rapid, efficient detection methods that comply with legal maximum residue limits. In this work we have synthesized an OTA conjugate functionalized with a water-soluble peptide for covalent immobilization on a glass biochip by means of contact spotting. The chip was used for OTA determination with an indirect competitive immunoassay format with flow-through reagent addition and chemiluminescence detection, carried out with the stand-alone automated Munich Chip Reader 3 (MCR 3) platform. A buffer model and real green coffee extracts were used for this purpose. At present, covalent conjugate immobilization allows for at least 20 assay-regeneration cycles of the biochip surface. The total analysis time for a single sample, including measurement and surface regeneration, was 12 min, and the LOQ of OTA in green coffee extract was 0.3 μg L(-1), which corresponds to 7 μg kg(-1). PMID:21397079

  20. An automated flow injection system for metal determination by flame atomic absorption spectrometry involving on-line fabric disk sorptive extraction technique.

    PubMed

    Anthemidis, A; Kazantzi, V; Samanidou, V; Kabir, A; Furton, K G

    2016-08-15

    A novel flow injection-fabric disk sorptive extraction (FI-FDSE) system was developed for automated determination of trace metals. The platform was based on a minicolumn packed with sol-gel coated fabric media in the form of disks, incorporated into an on-line solid-phase extraction system, coupled with flame atomic absorption spectrometry (FAAS). This configuration provides minor backpressure, resulting in high loading flow rates and shorter analytical cycles. The potentials of this technique were demonstrated for trace lead and cadmium determination in environmental water samples. The applicability of different sol-gel coated FPSE media was investigated. The on-line formed complex of metal with ammonium pyrrolidine dithiocarbamate (APDC) was retained onto the fabric surface and methyl isobutyl ketone (MIBK) was used to elute the analytes prior to atomization. For a 90 s preconcentration time, enrichment factors of 140 and 38 and detection limits (3σ) of 1.8 and 0.4 μg L(-1) were achieved for lead and cadmium determination, respectively, with a sampling frequency of 30 h(-1). The accuracy of the proposed method was estimated by analyzing standard reference materials and spiked water samples. PMID:27260436

  1. Automated evaluation of electronic discharge notes to assess quality of care for cardiovascular diseases using Medical Language Extraction and Encoding System (MedLEE)

    PubMed Central

    Lin, Jou-Wei; Yang, Chen-Wei

    2010-01-01

    The objective of this study was to develop and validate an automated acquisition system to assess quality of care (QC) measures for cardiovascular diseases. This system, combining searching and retrieval algorithms, was designed to extract QC measures from electronic discharge notes and to estimate the attainment rates relative to current standards of care. It was developed on patients with ST-segment elevation myocardial infarction and tested on patients with unstable angina/non-ST-segment elevation myocardial infarction, two diseases sharing almost the same QC measures. The system reached a reasonable agreement (κ value) with medical experts, from 0.65 (early reperfusion rate) to 0.97 (β-blockers and lipid-lowering agents before discharge), for different QC measures in the test set, and was then applied to evaluate QC in patients who underwent coronary artery bypass grafting surgery. The results validated a new tool to reliably extract QC measures for cardiovascular diseases. PMID:20442141
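    The agreement figures quoted above are Cohen's κ values between the system's and the experts' per-record calls. A minimal computation on invented yes/no QC labels follows; the toy label sequences are illustrative assumptions, not the study's data.

    ```python
    # Hedged sketch: Cohen's kappa for two raters (automated system vs expert)
    # over binary per-record QC-measure calls.

    def cohens_kappa(a, b):
        """kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected
        for the agreement expected by chance."""
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n          # observed
        labels = set(a) | set(b)
        p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
        return (p_o - p_e) / (1 - p_e)

    system = [1, 1, 1, 0, 0, 0, 1, 0]   # toy automated QC calls
    expert = [1, 1, 1, 0, 0, 0, 0, 1]   # toy expert QC calls
    kappa = cohens_kappa(system, expert)
    ```
    
    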

  2. High-throughput, Automated Extraction of DNA and RNA from Clinical Samples using TruTip Technology on Common Liquid Handling Robots

    PubMed Central

    Holmberg, Rebecca C.; Gindlesperger, Alissa; Stokes, Tinsley; Brady, Dane; Thakore, Nitu; Belgrader, Philip; Cooney, Christopher G.; Chandler, Darrell P.

    2013-01-01

    TruTip is a simple nucleic acid extraction technology whereby a porous, monolithic binding matrix is inserted into a pipette tip. The geometry of the monolith can be adapted for specific pipette tips ranging in volume from 1.0 to 5.0 ml. The large porosity of the monolith enables viscous or complex samples to readily pass through it with minimal fluidic backpressure. Bi-directional flow maximizes residence time between the monolith and sample, and enables large sample volumes to be processed within a single TruTip. The fundamental steps, irrespective of sample volume or TruTip geometry, include cell lysis, nucleic acid binding to the inner pores of the TruTip monolith, washing away unbound sample components and lysis buffers, and eluting purified and concentrated nucleic acids into an appropriate buffer. The attributes and adaptability of TruTip are demonstrated in three automated clinical sample processing protocols using Eppendorf epMotion 5070, Hamilton STAR and STARplus liquid handling robots: RNA isolation from nasopharyngeal aspirate, genomic DNA isolation from whole blood, and fetal DNA extraction and enrichment from large volumes of maternal plasma, respectively. PMID:23793016

  3. High quality DNA obtained with an automated DNA extraction method with 70+ year old formalin-fixed celloidin-embedded (FFCE) blocks from the Indiana Medical History Museum

    PubMed Central

    Niland, Erin E; McGuire, Audrey; Cox, Mary H; Sandusky, George E

    2012-01-01

    DNA and RNA have been used as markers of tissue quality and integrity throughout the last few decades. In this research study, genomic-quality DNA of kidney, liver, heart, lung, spleen, and brain was analyzed in tissues from post-mortem patients and surgical cancer cases spanning the past century. DNA extraction was performed on over 180 samples from: 70+ year old formalin-fixed celloidin-embedded (FFCE) tissues, formalin-fixed paraffin-embedded (FFPE) tissue samples from surgical cases and post-mortem cases from the 1970s, 1980s, 1990s, and 2000s, tissues fixed in 10% neutral buffered formalin and stored in 70% ethanol from the 1990s, 70+ year old tissues fixed in unbuffered formalin of various concentrations, and fresh tissue as a control. To extract DNA from FFCE samples and ethanol-soaked samples, a modified standard operating procedure was used in which all tissues were homogenized, digested with a proteinase K solution for an extended period (24-48 hours), and DNA was extracted using the Autogen Flexstar automated extraction machine. To extract DNA from FFPE samples, all tissues were soaked in xylene to remove the paraffin prior to digestion, and FFPE tissues were not homogenized. The results were as follows: celloidin-embedded and paraffin-embedded tissues yielded the highest DNA concentration and greatest DNA quality, while tissues fixed in formalin of various concentrations and tissues stored long term in formalin/ethanol yielded the lowest DNA concentration and quality of the tissues tested. The average DNA yield for the various fixatives was: 367.77 μg/mL FFCE, 590.7 μg/mL FFPE, 53.74 μg/mL formalin-fixed/70% ethanol-stored and 33.2 μg/mL unbuffered formalin tissues. The average OD readings for FFCE, FFPE, formalin-fixed/70% ethanol-stored tissues, and tissues fixed in unbuffered formalin were 1.86, 1.87, 1.43, and 1.48, respectively. The results show that usable DNA can be extracted from tissue fixed in formalin and embedded in celloidin

  4. Rapid analysis of three β-agonist residues in food of animal origin by automated on-line solid-phase extraction coupled to liquid chromatography and tandem mass spectrometry.

    PubMed

    Mi, Jiebo; Li, Shujing; Xu, Hong; Liang, Wei; Sun, Tao

    2014-09-01

    An automated online solid-phase extraction with liquid chromatography and tandem mass spectrometry method was developed and validated for the detection of clenbuterol, salbutamol, and ractopamine in food of animal origin. The samples from the food matrix were pretreated on an online solid-phase extraction cartridge (Oasis MCX) for <5 min after acid hydrolysis for 30 min. The peak focusing mode was used to elute the target compounds directly onto a C18 column. Chromatographic separation was achieved under gradient conditions using a mobile phase composed of acetonitrile/0.1% formic acid in aqueous solution. Each analyte was detected in two multiple reaction monitoring transitions via an electrospray ionization source in positive mode. The relative standard deviations ranged from 2.6 to 10.5%, and recovery was between 76.7 and 107.2% at all quality control levels. The limits of quantification of the three β-agonists were in the range of 0.024-0.29 μg/kg in pork, sausage, and milk powder. This newly developed method offers high sensitivity and minimum sample pretreatment for the high-throughput analysis of β-agonist residues. PMID:24916570

  5. Automated Identification of Closed Mesoscale Cellular Convection and Impact of Resolution on Related Mesoscale Dynamics

    NASA Astrophysics Data System (ADS)

    Martini, M.; Gustafson, W. I.; Yang, Q.; Xiao, H.

    2013-12-01

    Organized mesoscale cellular convection (MCC) is a common feature of marine stratocumulus that forms in response to a balance between mesoscale dynamics and smaller scale processes such as cloud radiative cooling and microphysics. Cloud resolving models begin to resolve some, but not all, of these processes with less of the mesoscale dynamics resolved as one progresses from <1 km to 10 km grid spacing. While limited domain cloud resolving models can use high resolution to simulate MCC, global cloud resolving models must resort to using grid spacings closer to 5 to 10 km. This effectively truncates the scales through which the dynamics can act and impacts the MCC characteristics, potentially altering the climate impact of these clouds in climate models. To understand the impact of this truncation, we use the Weather Research and Forecasting model with chemistry (WRF-Chem) and fully coupled cloud-aerosol interactions to simulate marine low clouds during the VOCALS-REx campaign over the Southeast Pacific. A suite of experiments with 1-, 3- and 9-km grid spacing indicates resolution-dependent behavior. The simulations with finer grid spacing have lower liquid water paths and cloud fractions, while cloud tops are higher. When compared to observed liquid water paths from GOES and MODIS, the 3-km simulation has better agreement over the coastal regions while the 9-km simulation better agrees over remote regions. The observed diurnal cycle is reasonably well simulated. To isolate organized MCC characteristics we developed a new automated method, which uses a variation of the watershed segmentation technique that combines the detection of cloud boundaries with a test for coincident vertical velocity characteristics. This has the advantage of ensuring that the detected cloud fields are dynamically consistent for closed MCC and helps minimize false detections from secondary circulations. We demonstrate that the 3-km simulation is able to reproduce the scaling between
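    The dynamical-consistency idea in the detection method can be illustrated with a toy detector. Connected-component labeling stands in here for the paper's watershed segmentation, and the fields and thresholds are invented: cloudy cells are segmented from liquid water path, then only cells whose mean vertical velocity is upward (the closed-cell signature) are kept.

```python
import numpy as np
from scipy import ndimage

def detect_closed_mcc(lwp, w, lwp_thresh=0.05, w_thresh=0.0):
    """Toy stand-in for the paper's detector: segment cloudy regions
    from liquid water path (lwp), then keep only cells whose mean
    in-cloud vertical velocity (w) is upward, the dynamical
    consistency test for closed cells. Thresholds are illustrative."""
    cloudy = lwp > lwp_thresh
    labels, n = ndimage.label(cloudy)
    keep = []
    for i in range(1, n + 1):
        if ndimage.mean(w, labels, i) > w_thresh:
            keep.append(i)
    return labels, keep

# Two cloudy patches: one with updraft (closed-cell-like), one with downdraft.
lwp = np.zeros((6, 6)); lwp[1:3, 1:3] = 0.2; lwp[4:6, 4:6] = 0.2
w = np.zeros((6, 6)); w[1:3, 1:3] = 0.5; w[4:6, 4:6] = -0.5
labels, keep = detect_closed_mcc(lwp, w)
print(keep)  # → [1]  (only the updraft-consistent cell survives)
```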

  6. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which corresponds to a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, demonstrating that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
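    The curb-detection step can be illustrated on a single scanning line. The height-step test and thresholds below are a simplification of the paper's curb patterns, invented for illustration only.

```python
import numpy as np

def detect_curbs(profile, height_jump=0.08, max_jump=0.30):
    """Toy curb detector for one 'scanning line' (a road cross-section
    of point heights). A curb pattern is approximated as a small
    abrupt height step between neighbouring points; the 8-30 cm
    thresholds are illustrative, not the paper's calibrated values."""
    z = np.asarray(profile, dtype=float)
    steps = np.diff(z)
    idx = np.where((np.abs(steps) >= height_jump) & (np.abs(steps) <= max_jump))[0]
    return idx.tolist()

# Flat road points, then a ~15 cm curb step between indices 3 and 4.
line = [0.0, 0.0, 0.01, 0.0, 0.15, 0.16, 0.15]
print(detect_curbs(line))  # → [3]
```

    The returned index marks where the step occurs; small roughness (1 cm) and large cliffs (above max_jump) are both ignored.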

  7. Extraction conditions of white rose petals for the inhibition of enzymes related to skin aging

    PubMed Central

    Choi, Ehn-Kyoung; Guo, Haiyu; Choi, Jae-Kwon; Jang, Su-Kil; Shin, Kyungha; Cha, Ye-Seul; Choi, Youngjin; Seo, Da-Woom; Lee, Yoon-Bok

    2015-01-01

    In order to assess inhibitory potentials of white rose petal extracts (WRPE) on the activities of enzymes related to dermal aging according to the extraction conditions, three extraction methods were adopted. WRPE was prepared by extracting dried white rose (Rosa hybrida) petals with 50% ethanol (WRPE-EtOH), Pectinex® SMASH XXL enzyme (WRPE-enzyme) or high temperature-high pressure (WRPE-HTHP). In the inhibition of matrix metalloproteinase-1, although the enzyme activity was fully inhibited by all 3 extracts at 100 µg/mL in 60 min, partial inhibition (50-70%) was achieved only by WRPE-EtOH and WRPE-enzyme at 50 µg/mL. High concentrations (≥250 µg/mL) of all 3 extracts markedly inhibited the elastase activity. However, at low concentrations (15.6-125 µg/mL), only WRPE-EtOH inhibited the enzyme activity. Notably, WRPE-EtOH was superior to WRPE-enzyme and WRPE-HTHP in the inhibition of tyrosinase. WRPE-EtOH significantly inhibited the enzyme activity from 31.2 µM, reaching 80% inhibition at 125 µM. In addition to its strong antioxidative activity, the ethanol extract of white rose petals was confirmed to be effective in inhibiting skin aging-related enzymes. Therefore, it is suggested that WRPE-EtOH could be a good candidate for the improvement of skin aging such as wrinkle formation and pigmentation. PMID:26472968

  8. Extraction conditions of white rose petals for the inhibition of enzymes related to skin aging.

    PubMed

    Choi, Ehn-Kyoung; Guo, Haiyu; Choi, Jae-Kwon; Jang, Su-Kil; Shin, Kyungha; Cha, Ye-Seul; Choi, Youngjin; Seo, Da-Woom; Lee, Yoon-Bok; Joo, Seong-So; Kim, Yun-Bae

    2015-09-01

    In order to assess inhibitory potentials of white rose petal extracts (WRPE) on the activities of enzymes related to dermal aging according to the extraction conditions, three extraction methods were adopted. WRPE was prepared by extracting dried white rose (Rosa hybrida) petals with 50% ethanol (WRPE-EtOH), Pectinex® SMASH XXL enzyme (WRPE-enzyme) or high temperature-high pressure (WRPE-HTHP). In the inhibition of matrix metalloproteinase-1, although the enzyme activity was fully inhibited by all 3 extracts at 100 µg/mL in 60 min, partial inhibition (50-70%) was achieved only by WRPE-EtOH and WRPE-enzyme at 50 µg/mL. High concentrations (≥250 µg/mL) of all 3 extracts markedly inhibited the elastase activity. However, at low concentrations (15.6-125 µg/mL), only WRPE-EtOH inhibited the enzyme activity. Notably, WRPE-EtOH was superior to WRPE-enzyme and WRPE-HTHP in the inhibition of tyrosinase. WRPE-EtOH significantly inhibited the enzyme activity from 31.2 µM, reaching 80% inhibition at 125 µM. In addition to its strong antioxidative activity, the ethanol extract of white rose petals was confirmed to be effective in inhibiting skin aging-related enzymes. Therefore, it is suggested that WRPE-EtOH could be a good candidate for the improvement of skin aging such as wrinkle formation and pigmentation. PMID:26472968

  9. Fully automated analysis of beta-lactams in bovine milk by online solid phase extraction-liquid chromatography-electrospray-tandem mass spectrometry.

    PubMed

    Kantiani, Lina; Farré, Marinella; Sibum, Martin; Postigo, Cristina; López de Alda, Miren; Barceló, Damiá

    2009-06-01

    A fully automated method for the detection of beta-lactam antibiotics, including six penicillins (amoxicillin, ampicillin, cloxacillin, dicloxacillin, oxacillin, and penicillin G) and four cephalosporins (cefazolin, ceftiofur, cefoperazone, and cefalexin) in bovine milk samples has been developed. The outlined method is based on online solid-phase extraction-liquid chromatography/electrospray-tandem mass spectrometry (SPE-LC/ESI-MS-MS). Target compounds were concentrated from 500 microL of centrifuged milk samples using an online SPE procedure with C18 HD cartridges. Target analytes were eluted with a gradient mobile phase (water + 0.1% formic acid/methanol + 0.1% formic acid) at a flow rate of 0.7 mL/min. Chromatographic separation was achieved within 10 min using a C-12 reversed phase analytical column. For unequivocal identification and confirmation, two multiple reaction monitoring (MRM) transitions were acquired for each analyte in the positive electrospray ionization mode (ESI(+)). Method limits of detection (LODs) in milk were well below the maximum residue limits (MRLs) set by the European Union for all compounds. Limits of quantification in milk were between 0.09 ng/mL and 1.44 ng/mL. The developed method was validated according to the EU's requirements, and accuracy results ranged from 80 to 116%. Finally, the method was applied to the analysis of twenty real samples previously screened by the inhibition of microbial growth test Eclipse 100. This newly developed method offers high sensitivity and accuracy, requires minimal sample pre-treatment, and is the first to use automated online SPE for high-throughput analysis. These characteristics make the proposed method well suited to the field of food control and safety. PMID:19402673

  10. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; Illsley, Jeannette

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has process-tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver- related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  11. Automated data analysis.

    NASA Astrophysics Data System (ADS)

    Teuber, D.

    Automated data analysis assists the astronomer in the decision-making processes involved in extracting astronomical information from data. Automated data analysis is the step between image processing and model interpretation. Tools developed in AI are applied (classification, expert systems). Programming languages and computers are chosen to fulfil the increasing requirements. Expert systems have begun to appear in astronomy. Data banks permit the astronomical community to share the large body of resulting information.

  12. CD-REST: a system for extracting chemical-induced disease relation in literature

    PubMed Central

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug–disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST request. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html PMID:27016700
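    The second CD-REST component, SVM classification of candidate chemical-disease pairs, can be illustrated with a toy bag-of-words version. The contexts, features, and labels below are invented; the actual system used Conditional Random Fields for entity recognition and far richer features for the SVM.

```python
# Toy sentence-level relation classifier: candidate pairs are
# represented by the words around the CHEM and DIS mentions, and a
# linear SVM separates induced-disease contexts from the rest.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

train_contexts = [
    "CHEM induced DIS in treated patients",
    "CHEM was administered after DIS was diagnosed",
    "CHEM caused severe DIS",
    "DIS patients received CHEM without complications",
]
labels = [1, 0, 1, 0]  # 1 = chemical-induced-disease relation

vec = CountVectorizer()
X = vec.fit_transform(train_contexts)
clf = LinearSVC().fit(X, labels)

# "induced" appears only in positive contexts, so this pair
# is expected to be classified as a relation.
print(clf.predict(vec.transform(["CHEM induced DIS"]))[0])
```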

  13. Determination of pyronaridine in whole blood by automated solid phase extraction and high-performance liquid chromatography.

    PubMed

    Blessborn, Daniel; Lindegårdh, Niklas; Ericsson, Orjan; Hellgren, Urban; Bergqvist, Yngve

    2003-06-01

    A new extraction procedure for the analysis of pyronaridine in whole blood is presented. A weak cation exchanger with a carboxylic acid (CBA) sorbent was found to be a suitable solid phase sorbent for the extraction of pyronaridine. High-performance liquid chromatography with UV detection at 278 nm and an electrochemical detector at +0.75 V is used. The electrochemical detector gives higher selectivity than the UV detector. The separation was performed using a C18 reversed phase column with mobile phase of acetonitrile-phosphate buffer (0.01 mol/L, pH 2.5)- sodium perchlorate (1.0 mol/L; 22:77:1, v/v/v). The within-day RSDs were below 5% at all concentration levels between 75 nmol/L and 1500 nmol/L, and the between-day RSDs were below 14% at all concentration levels. The limit of quantification was about 50 nmol/L in 1000 microL whole blood with an RSD of 20% or less on a day-to-day basis. The stability of pyronaridine is increased if the pH is less than 3 in water solutions. In whole blood, the concentration decreases by about 10% for each freeze-thaw cycle performed. At room temperature (about 22 degrees C), pyronaridine concentration in whole blood decreases by about 10% within 12 to 24 hours. PMID:12766551

  14. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language

    PubMed Central

    2008-01-01

    A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly-used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., Negativity factor, wherein most loaded items were negatively valenced adjectives); others were dimensional in that semantically opposite words clustered together (e.g., Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc. in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy that determines the dimensions along which people think about themselves. PMID:18802499
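    The core of the meaning extraction method, factoring a document-by-adjective usage matrix, can be sketched on toy data. PCA via SVD stands in here for the paper's factor analysis, and the documents and vocabulary are invented.

```python
import numpy as np

# Toy document-by-word matrix for common self-descriptive adjectives.
docs = [
    "I am shy and quiet and reserved",
    "I am outgoing and loud and friendly",
    "I am quiet reserved shy",
    "I am loud outgoing talkative",
]
vocab = ["shy", "quiet", "reserved", "outgoing", "loud"]
X = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

Xc = X - X.mean(axis=0)                 # centre each adjective column
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
loadings = vt[0]                        # first "factor" (sign is arbitrary)

# Semantically opposite words load with opposite signs on a
# Sociability-like dimension: shy vs outgoing.
print(np.sign(loadings[0]) != np.sign(loadings[3]))  # → True
```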

  15. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes. PMID:22340891

  16. Exploiting syntactic and semantics information for chemical-disease relation extraction.

    PubMed

    Zhou, Huiwei; Deng, Huijie; Chen, Long; Yang, Yunlong; Jia, Chen; Huang, Degen

    2016-01-01

    Identifying chemical-disease relations (CDR) from biomedical literature could improve chemical safety and toxicity studies. This article proposes a novel syntactic and semantic information exploitation method for CDR extraction. The proposed method consists of a feature-based model, a tree kernel-based model and a neural network model. The feature-based model exploits lexical features, the tree kernel-based model captures syntactic structure features, and the neural network model generates semantic representations. The motivation of our method is to fully utilize the nice properties of the three models to explore diverse information for CDR extraction. Experiments on the BioCreative V CDR dataset show that the three models are all effective for CDR extraction, and their combination could further improve extraction performance.Database URL:http://www.biocreative.org/resources/corpora/biocreative-v-cdr-corpus/. PMID:27081156

  17. Exploiting syntactic and semantics information for chemical–disease relation extraction

    PubMed Central

    Zhou, Huiwei; Deng, Huijie; Chen, Long; Yang, Yunlong; Jia, Chen; Huang, Degen

    2016-01-01

    Identifying chemical–disease relations (CDR) from biomedical literature could improve chemical safety and toxicity studies. This article proposes a novel syntactic and semantic information exploitation method for CDR extraction. The proposed method consists of a feature-based model, a tree kernel-based model and a neural network model. The feature-based model exploits lexical features, the tree kernel-based model captures syntactic structure features, and the neural network model generates semantic representations. The motivation of our method is to fully utilize the nice properties of the three models to explore diverse information for CDR extraction. Experiments on the BioCreative V CDR dataset show that the three models are all effective for CDR extraction, and their combination could further improve extraction performance. Database URL: http://www.biocreative.org/resources/corpora/biocreative-v-cdr-corpus/. PMID:27081156

  18. UHPLC/HRMS Analysis of African Mango (Irvingia gabonensis) Seeds, Extract and Related Dietary Supplements

    PubMed Central

    Sun, Jianghao; Chen, Pei

    2012-01-01

    Dietary supplements based on an extract from Irvingia gabonensis (African Mango, AM) seeds are among the popular herbal weight-loss dietary supplements on the US market. The extract is believed to be a natural and healthy way to lose weight and improve overall health. However, the chemical composition of African mango-based dietary supplements (AMDS) has never been reported. In this study, the chemical constituents of African mango seeds, African mango seed extract (AMSE), and different kinds of commercially available African mango-based dietary supplements (AMDS) have been investigated using an ultra-high-performance liquid chromatography with high resolution mass spectrometry (UHPLC-HRMS) method. Ellagic acid, mono-, di-, and tri-O-methyl-ellagic acids, and their glycosides were found to be major components of African Mango seeds. These compounds may be used for quality control of African Mango extract and related dietary supplements. PMID:22880691

  19. The extraction of pharmacogenetic and pharmacogenomic relations--a case study using PharmGKB.

    PubMed

    Buyko, Ekaterina; Beisswanger, Elena; Hahn, Udo

    2012-01-01

    In this paper, we report on adapting the JREX relation extraction engine, originally developed for the elicitation of protein-protein interaction relations, to the domains of pharmacogenetics and pharmacogenomics. We propose an intrinsic and an extrinsic evaluation scenario, both based on knowledge contained in the PharmGKB knowledge base. Porting JREX yields favorable results in the range of 80% F-score for Gene-Disease, Gene-Drug, and Drug-Disease relations. PMID:22174293

  20. Human Factors In Aircraft Automation

    NASA Technical Reports Server (NTRS)

    Billings, Charles

    1995-01-01

    Report presents survey of state of the art in human factors in automation of aircraft operation. Presents examination of aircraft automation and its effects on flight crews in relation to human error and aircraft accidents.

  1. Semi-automated Technique to Extract Boundary of Valley/mountain Glaciers using Glacio-morphological Information from Digital Elevation Model

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Panigrahy, S.; Kundu, S.

    2014-11-01

    A semi-automated technique has been developed to extract the spatial extent of valley and mountain glaciers. The method is based on morphological properties of the glaciated area extracted from a Digital Elevation Model (DEM). Identification of glacial boundaries based on spectral information from optical remote sensing imagery produces errors due to misclassification of debris-covered ablation areas with the surrounding rocky terrain and of perennially snow-covered slopes with debris-free glaciated areas. Elevation information from the Shuttle Radar Topography Mission (SRTM) DEM, CartoDEM and the ASTER DEM has been used. A part of the western Himalayas containing large glaciated basins, e.g., the Bhagirathi, Baspa and Chandra basins, was selected as the study area. First-order derivatives (slope, aspect) and second-order derivatives (profile and plan curvature) are computed from the DEM. The derivatives are used to quantify and characterise the morphology of the glaciated area and are used in decision-rule models to generate the glacial boundaries. The ridge lines of the study area are also generated from the plan curvature and used in the model to delineate the catchment areas of the glaciers. The slope-based boundary is checked for consistency against the boundary from the profile curvature and combined manually to generate the final glacier boundary. The area and length of the Gangotri glacier in the Bhagirathi catchment under the derived boundary are 90.25 sq km and 30.5 km, respectively. The result has been checked against high-resolution optical data. This objective approach is important for delineating glaciated areas; measuring glacier length, width and area; and generating glacial hypsometry and concentration factors of the glaciers. The accuracy of the result depends upon the quality of the DEM. A DEM generated by SAR interferometric techniques is found to be superior to DEMs generated by other interpolation techniques.
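    The first-order derivatives feeding such decision rules can be computed directly from a DEM grid. The sketch below shows slope and aspect only; the grid spacing and formulas are standard choices, not necessarily the authors' exact ones.

```python
import numpy as np

def slope_aspect(dem, cell=30.0):
    """First-order DEM derivatives of the kind used in decision rules:
    slope (degrees) and aspect (radians). cell is the grid spacing in
    metres (30 m is illustrative, e.g. for SRTM)."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.arctan2(-dz_dx, dz_dy)   # one common aspect convention
    return slope, aspect

# Sanity check: a uniform ramp rising 30 m per 30 m cell is a 45-degree slope.
dem = np.tile(np.arange(0, 150, 30.0), (5, 1))
slope, _ = slope_aspect(dem)
print(round(float(slope[2, 2]), 1))  # → 45.0
```

    Second-order derivatives (profile and plan curvature) follow the same pattern, applying `np.gradient` again to the first derivatives.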

  2. Integrated DNA and RNA extraction and purification on an automated microfluidic cassette from bacterial and viral pathogens causing community-acquired lower respiratory tract infections.

    PubMed

    Van Heirstraeten, Liesbet; Spang, Peter; Schwind, Carmen; Drese, Klaus S; Ritzi-Lehnert, Marion; Nieto, Benjamin; Camps, Marta; Landgraf, Bryan; Guasch, Francesc; Corbera, Antoni Homs; Samitier, Josep; Goossens, Herman; Malhotra-Kumar, Surbhi; Roeser, Tina

    2014-05-01

    In this paper, we describe the development of an automated sample preparation procedure for etiological agents of community-acquired lower respiratory tract infections (CA-LRTI). The consecutive assay steps, including sample re-suspension, pre-treatment, lysis, nucleic acid purification, and concentration, were integrated into a microfluidic lab-on-a-chip (LOC) cassette that is operated hands-free by a demonstrator setup, providing fluidic and valve actuation. The performance of the assay was evaluated on viral and Gram-positive and Gram-negative bacterial broth cultures previously sampled using a nasopharyngeal swab. Sample preparation on the microfluidic cassette resulted in higher or similar concentrations of pure bacterial DNA or viral RNA compared to manual benchtop experiments. The miniaturization and integration of the complete sample preparation procedure, extracting purified nucleic acids from real samples of CA-LRTI pathogens at, and above, laboratory quality and efficiency, represent important steps towards its application in a point-of-care test (POCT) for rapid diagnosis of CA-LRTI. PMID:24615272

  3. Extracting medical information from narrative patient records: the case of medication-related information

    PubMed Central

    Grouin, Cyril; Zweigenbaum, Pierre

    2010-01-01

    Objective While essential for patient care, information related to medication is often written as free text in clinical records and, therefore, difficult to use in computerized systems. This paper describes an approach to automatically extract medication information from clinical records, which was developed to participate in the i2b2 2009 challenge, as well as different strategies to improve the extraction. Design Our approach relies on a semantic lexicon and extraction rules as a two-phase strategy: first, drug names are recognized and, then, the context of these names is explored to extract drug-related information (mode, dosage, etc) according to rules capturing the document structure and the syntax of each kind of information. Different configurations are tested to improve this baseline system along several dimensions, particularly drug name recognition—this step being a determining factor to extract drug-related information. Changes were tested at the level of the lexicons and of the extraction rules. Results The initial system participating in i2b2 achieved good results (global F-measure of 77%). Further testing of different configurations substantially improved the system (global F-measure of 81%), performing well for all types of information (eg, 84% for drug names and 88% for modes), except for durations and reasons, which remain problematic. Conclusion This study demonstrates that a simple rule-based system can achieve good performance on the medication extraction task. We also showed that controlled modifications (lexicon filtering and rule refinement) were the improvements that best raised the performance. PMID:20819863
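    The two-phase strategy (lexicon-driven drug name recognition, then rule-based exploration of the surrounding context) can be sketched roughly as follows. The lexicon, window size, and patterns below are invented for illustration and are not the system's actual rules.

```python
import re

# Phase 1 uses a semantic lexicon of drug names; phase 2 searches the
# context after each recognised name for dosage and mode with simple
# rules. All resources here are illustrative.
DRUG_LEXICON = {"aspirin", "metformin", "lisinopril"}
DOSE = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|g|ml)\b", re.I)
MODE = re.compile(r"\b(oral|iv|topical|subcutaneous)\b", re.I)

def extract_medications(text):
    results = []
    for tok in re.finditer(r"\b\w+\b", text.lower()):
        if tok.group() in DRUG_LEXICON:
            # Phase 2: inspect a fixed window after the drug name.
            window = text[tok.end():tok.end() + 40]
            dose = DOSE.search(window)
            mode = MODE.search(window)
            results.append({
                "drug": tok.group(),
                "dose": dose.group() if dose else None,
                "mode": mode.group() if mode else None,
            })
    return results

print(extract_medications("Start metformin 500 mg oral twice daily."))
# → [{'drug': 'metformin', 'dose': '500 mg', 'mode': 'oral'}]
```

    As the paper notes, drug name recognition is the determining step: information that the context rules can extract is bounded by the names the lexicon recognises.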

  4. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of estimated parameters using a combination of the primal-dual path-following interior point algorithm and genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately. Further, it estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation. Consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.
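
    The parametric core of such methods fits a damped-exponential model to the signal. A minimal sketch for a single noiseless damped sinusoid, using 2nd-order linear prediction (a Prony-type simplification; the paper's 3-stage method combines MUSIC-style estimation, matrix pencil, EMD, and interior-point/genetic optimization):

```python
import numpy as np

def estimate_damped_sinusoid(x):
    """Estimate (frequency, damping), both per sample, of a single
    damped sinusoid x[n] ~ A * exp(-d*n) * cos(w*n + phi) via
    2nd-order linear prediction."""
    # x[n] = a1*x[n-1] + a2*x[n-2] holds exactly for this model; solve
    # the overdetermined system in the least-squares sense.
    A = np.column_stack([x[1:-1], x[:-2]])
    b = x[2:]
    a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]
    # The prediction polynomial z**2 - a1*z - a2 has roots exp(-d +/- 1j*w).
    z = np.roots([1.0, -a1, -a2])
    z0 = z[np.argmax(np.abs(z.imag))]
    return abs(np.angle(z0)), -np.log(np.abs(z0))
```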

  5. Automated extraction of typing information for bacterial pathogens from whole genome sequence data: Neisseria meningitidis as an exemplar.

    PubMed

    Jolley, K A; Maiden, M C

    2013-01-01

    Whole genome sequence (WGS) data are increasingly used to characterise bacterial pathogens. These data provide detailed information on the genotypes and likely phenotypes of aetiological agents, enabling the relationships of samples from potential disease outbreaks to be established precisely. However, the generation of increasing quantities of sequence data does not, in itself, resolve the problems that many microbiological typing methods have addressed over the last 100 years or so; indeed, providing large volumes of unstructured data can confuse rather than resolve these issues. Here we review the nascent field of storage of WGS data for clinical application and show how curated sequence-based typing schemes on websites have generated an infrastructure that can exploit WGS for bacterial typing efficiently. We review the tools that have been implemented within the PubMLST website to extract clinically useful, strain-characterisation information that can be provided to physicians and public health professionals in a timely, concise and understandable way. These data can be used to inform medical decisions such as how to treat a patient, whether to instigate public health action, and what action might be appropriate. The information is compatible both with previous sequence-based typing data and also with data obtained in the absence of WGS, providing a flexible infrastructure for WGS-based clinical microbiology. PMID:23369391
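
    The core of curated sequence-based typing, as on PubMLST-style schemes, is a lookup from locus sequences to allele numbers and from allele profiles to sequence types. A toy sketch with an invented two-locus scheme (real schemes use seven or more loci and thousands of curated alleles):

```python
# Invented scheme data for illustration only.
ALLELE_DB = {
    "abcZ": {"TTGA": 1, "TTGC": 2},
    "adk":  {"GCAT": 1, "GCAA": 3},
}
PROFILE_DB = {(1, 1): "ST-11", (2, 3): "ST-32"}

def assign_sequence_type(locus_seqs):
    """Map each locus sequence to its allele number, then the allele
    profile (ordered by locus name) to a sequence type (ST)."""
    profile = tuple(ALLELE_DB[locus][seq] for locus, seq in sorted(locus_seqs.items()))
    return PROFILE_DB.get(profile, "novel ST")
```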

  6. Development and validation of an automated liquid-liquid extraction GC/MS method for the determination of THC, 11-OH-THC, and free THC-carboxylic acid (THC-COOH) from blood serum.

    PubMed

    Purschke, Kirsten; Heinl, Sonja; Lerch, Oliver; Erdmann, Freidoon; Veit, Florian

    2016-06-01

    The analysis of Δ(9)-tetrahydrocannabinol (THC) and its metabolites 11-hydroxy-Δ(9)-tetrahydrocannabinol (11-OH-THC), and 11-nor-9-carboxy-Δ(9)-tetrahydrocannabinol (THC-COOH) from blood serum is a routine task in forensic toxicology laboratories. For examination of consumption habits, the concentration of the phase I metabolite THC-COOH is used. Recommendations for interpretation of analysis values in medical-psychological assessments (regranting of driver's licenses, Germany) include threshold values for the free, unconjugated THC-COOH. Using a fully automated two-step liquid-liquid extraction, THC, 11-OH-THC, and free, unconjugated THC-COOH were extracted from blood serum, silylated with N-methyl-N-(trimethylsilyl) trifluoroacetamide (MSTFA), and analyzed by GC/MS. The automation was carried out by an x-y-z sample robot equipped with modules for shaking, centrifugation, and solvent evaporation. This method was based on a previously developed manual sample preparation method. Validation guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh) were fulfilled for both methods; the focus of this article is on the automated one. Limits of detection and quantification for THC were 0.3 and 0.6 μg/L, for 11-OH-THC were 0.1 and 0.8 μg/L, and for THC-COOH were 0.3 and 1.1 μg/L, when extracting only 0.5 mL of blood serum. Therefore, the required limit of quantification for THC of 1 μg/L in driving under the influence of cannabis cases in Germany (and other countries) can be reached and the method can be employed in that context. Real and external control samples were analyzed, and a round robin test was passed successfully. To date, the method is employed in the Institute of Legal Medicine in Giessen, Germany, in daily routine. Automation helps in avoiding errors during sample preparation and reduces the workload of the laboratory personnel. Due to its flexibility, the analysis system can be employed for other liquid-liquid extractions as
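
    Limits of detection and quantification of the kind reported above are commonly derived from the residual standard deviation and slope of a calibration line. A sketch using the common 3.3·σ/S and 10·σ/S shortcut (a simplification of the DIN 32645 procedure referenced by GTFCh guidelines; the calibration data in the test are invented):

```python
import statistics

def lod_loq(concs, signals):
    """LoD = 3.3*s_res/slope, LoQ = 10*s_res/slope, where s_res is the
    residual standard deviation of an ordinary least-squares calibration
    line fitted to (concentration, signal) pairs."""
    n = len(concs)
    mean_x, mean_y = statistics.fmean(concs), statistics.fmean(signals)
    sxx = sum((x - mean_x) ** 2 for x in concs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, signals)) / sxx
    intercept = mean_y - slope * mean_x
    resid = [y - (intercept + slope * x) for x, y in zip(concs, signals)]
    s_res = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return 3.3 * s_res / slope, 10.0 * s_res / slope
```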

  7. Automation of reverse engineering process in aircraft modeling and related optimization problems

    NASA Technical Reports Server (NTRS)

    Li, W.; Swetits, J.

    1994-01-01

    During the year of 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress has been made during this phase of research and computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for
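
    Dierckx's FITPACK routines, wrapped by SciPy, expose exactly this single control: the smoothing factor s is an upper bound on the sum of squared residuals of the fitted spline. A small demonstration on invented data points:

```python
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 4.0, 50)
y = np.sin(x) + 0.01 * np.cos(7 * x)   # stand-in for digitized data points

# s plays the role of the error tolerance: FITPACK chooses the fewest
# knots such that the sum of squared residuals stays within s.
tck = splrep(x, y, s=0.01)
fit = splev(x, tck)
residual = float(np.sum((fit - y) ** 2))
```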

  8. COMPUTER AUTOMATED STUDY OF THE STRUCTURE-MUTAGENICITY RELATIONSHIPS OF NON-FUSED-RING NITROARENES AND RELATED COMPOUNDS

    EPA Science Inventory

    A quantitative structure-activity analysis of the mutagenicity of non-fused ring nitroaromatic compounds is reported. The analysis is performed on the basis of substructural fragment descriptors according to a recently developed methodology acronymed CASE (Computer Automated Stru...

  9. Study on electrical current variations in electromembrane extraction process: Relation between extraction recovery and magnitude of electrical current.

    PubMed

    Rahmani, Turaj; Rahimi, Atyeh; Nojavan, Saeed

    2016-01-15

    This contribution presents an experimental approach to improve analytical performance of the electromembrane extraction (EME) procedure, which is based on the scrutiny of current pattern under different extraction conditions such as using different organic solvents as supported liquid membrane, electrical potentials, pH values of donor and acceptor phases, variable extraction times, temperatures, stirring rates, different hollow fiber lengths and the addition of salts or organic solvents to the sample matrix. In this study, four basic drugs with different polarities were extracted under different conditions with the corresponding electrical current patterns compared against extraction recoveries. The extraction process was demonstrated in terms of EME-HPLC analyses of selected basic drugs. When the obtained extraction recoveries were compared with the electrical current patterns, most cases exhibited minimum recovery and repeatability at the highest investigated magnitude of electrical current. It was further found that identical current patterns are associated with repeated extraction efficiencies. In other words, the pattern should be repeated for a successful extraction. The results showed completely different electrical currents under different extraction conditions, so that all variable parameters have contributions into the electrical current pattern. Finally, the current patterns of extractions from wastewater, plasma and urine samples were demonstrated. The results indicated an increase in the electrical current when extracting from complex matrices; this was seen to decrease the extraction efficiency. PMID:26709301

  10. Automated Learning of Subcellular Variation among Punctate Protein Patterns and a Generative Model of Their Relation to Microtubules.

    PubMed

    Johnson, Gregory R; Li, Jieyue; Shariff, Aabid; Rohde, Gustavo K; Murphy, Robert F

    2015-12-01

    Characterizing the spatial distribution of proteins directly from microscopy images is a difficult problem with numerous applications in cell biology (e.g. identifying motor-related proteins) and clinical research (e.g. identification of cancer biomarkers). Here we describe the design of a system that provides automated analysis of punctate protein patterns in microscope images, including quantification of their relationships to microtubules. We constructed the system using confocal immunofluorescence microscopy images from the Human Protein Atlas project for 11 punctate proteins in three cultured cell lines. These proteins have previously been characterized as being primarily located in punctate structures, but their images had all been annotated by visual examination as being simply "vesicular". We were able to show that these patterns could be distinguished from each other with high accuracy, and we were able to assign to one of these subclasses hundreds of proteins whose subcellular localization had not previously been well defined. In addition to providing these novel annotations, we built a generative approach to modeling of punctate distributions that captures the essential characteristics of the distinct patterns. Such models are expected to be valuable for representing and summarizing each pattern and for constructing systems biology simulations of cell behaviors. PMID:26624011

  11. RADARS, a bioinformatics solution that automates proteome mass spectral analysis, optimises protein identification, and archives data in a relational database.

    PubMed

    Field, Helen I; Fenyö, David; Beavis, Ronald C

    2002-01-01

    RADARS, a rapid, automated, data archiving and retrieval software system for high-throughput proteomic mass spectral data processing and storage, is described. The majority of mass spectrometer data files are compatible with RADARS, for consistent processing. The system automatically takes unprocessed data files, identifies proteins via in silico database searching, then stores the processed data and search results in a relational database suitable for customized reporting. The system is robust, used in 24/7 operation, accessible to multiple users of an intranet through a web browser, may be monitored by Virtual Private Network, and is secure. RADARS is scalable for use on one or many computers, and is suited to multiple processor systems. It can incorporate any local database in FASTA format, and can search protein and DNA databases online. A key feature is a suite of visualisation tools (many available gratis), allowing facile manipulation of spectra, by hand annotation, reanalysis, and access to all procedures. We also describe the use of Sonar MS/MS, a novel, rapid search engine requiring 40 MB RAM per process for searches against a genomic or EST database translated in all six reading frames. RADARS reduces the cost of analysis by its efficient algorithms: Sonar MS/MS can identify proteins without accurate knowledge of the parent ion mass and without protein tags. Statistical scoring methods provide close-to-expert accuracy and bring robust data analysis to the non-expert user. PMID:11788990

  12. Automated Learning of Subcellular Variation among Punctate Protein Patterns and a Generative Model of Their Relation to Microtubules

    PubMed Central

    Johnson, Gregory R.; Li, Jieyue; Shariff, Aabid; Rohde, Gustavo K.; Murphy, Robert F.

    2015-01-01

    Characterizing the spatial distribution of proteins directly from microscopy images is a difficult problem with numerous applications in cell biology (e.g. identifying motor-related proteins) and clinical research (e.g. identification of cancer biomarkers). Here we describe the design of a system that provides automated analysis of punctate protein patterns in microscope images, including quantification of their relationships to microtubules. We constructed the system using confocal immunofluorescence microscopy images from the Human Protein Atlas project for 11 punctate proteins in three cultured cell lines. These proteins have previously been characterized as being primarily located in punctate structures, but their images had all been annotated by visual examination as being simply “vesicular”. We were able to show that these patterns could be distinguished from each other with high accuracy, and we were able to assign to one of these subclasses hundreds of proteins whose subcellular localization had not previously been well defined. In addition to providing these novel annotations, we built a generative approach to modeling of punctate distributions that captures the essential characteristics of the distinct patterns. Such models are expected to be valuable for representing and summarizing each pattern and for constructing systems biology simulations of cell behaviors. PMID:26624011

  13. Relation of retinal blood flow and retinal oxygen extraction during stimulation with diffuse luminance flicker

    PubMed Central

    Palkovits, Stefan; Lasta, Michael; Told, Reinhard; Schmidl, Doreen; Werkmeister, René; Cherecheanu, Alina Popa; Garhöfer, Gerhard; Schmetterer, Leopold

    2015-01-01

    Cerebral and retinal blood flow are dependent on local neuronal activity. Several studies quantified the increase in cerebral blood flow and oxygen consumption during activity. In the present study we investigated the relation between changes in retinal blood flow and oxygen extraction during stimulation with diffuse luminance flicker and the influence of breathing gas mixtures with different fractions of O2 (FiO2; 100%, 15%, and 12%). Twenty-four healthy subjects were included. Retinal blood flow was studied by combining measurement of vessel diameters using the Dynamic Vessel Analyser with measurements of blood velocity using laser Doppler velocimetry. Oxygen saturation was measured using spectroscopic reflectometry and oxygen extraction was calculated. Flicker stimulation increased retinal blood flow (57.7 ± 17.8%) and oxygen extraction (34.6 ± 24.1%; p < 0.001 each). During 100% oxygen breathing the response of retinal blood flow and oxygen extraction was increased (p < 0.01 each). By contrast, breathing gas mixtures with 12% and 15% FiO2 did not alter flicker-induced retinal haemodynamic changes. The present study indicates that at a comparable increase in blood flow the increase in oxygen extraction in the retina is larger than in the brain. During systemic hyperoxia the blood flow and oxygen extraction responses to neural stimulation are augmented. The underlying mechanism is unknown. PMID:26672758
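
    Oxygen extraction of this kind is computed Fick-style from blood flow and the arteriovenous difference in oxygen content. A sketch with illustrative textbook constants (Hüfner number, haemoglobin concentration) and invented saturation values, not the study's actual data:

```python
def oxygen_extraction(flow, s_a, s_v, hb=15.0, huefner=1.39):
    """Fick-style oxygen extraction: blood flow times the arteriovenous
    difference in O2 content. s_a, s_v are arterial/venous saturations in
    [0, 1]; hb is haemoglobin in g/dL; dissolved O2 is neglected. The
    default constants are illustrative textbook values, not study data."""
    c_a = huefner * hb * s_a   # arterial O2 content, mL O2 per dL blood
    c_v = huefner * hb * s_v   # venous O2 content
    return flow * (c_a - c_v)  # in (flow units) * mL O2 / dL
```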

  14. Relation of retinal blood flow and retinal oxygen extraction during stimulation with diffuse luminance flicker.

    PubMed

    Palkovits, Stefan; Lasta, Michael; Told, Reinhard; Schmidl, Doreen; Werkmeister, René; Cherecheanu, Alina Popa; Garhöfer, Gerhard; Schmetterer, Leopold

    2015-01-01

    Cerebral and retinal blood flow are dependent on local neuronal activity. Several studies quantified the increase in cerebral blood flow and oxygen consumption during activity. In the present study we investigated the relation between changes in retinal blood flow and oxygen extraction during stimulation with diffuse luminance flicker and the influence of breathing gas mixtures with different fractions of O2 (FiO2; 100%, 15%, and 12%). Twenty-four healthy subjects were included. Retinal blood flow was studied by combining measurement of vessel diameters using the Dynamic Vessel Analyser with measurements of blood velocity using laser Doppler velocimetry. Oxygen saturation was measured using spectroscopic reflectometry and oxygen extraction was calculated. Flicker stimulation increased retinal blood flow (57.7 ± 17.8%) and oxygen extraction (34.6 ± 24.1%; p < 0.001 each). During 100% oxygen breathing the response of retinal blood flow and oxygen extraction was increased (p < 0.01 each). By contrast, breathing gas mixtures with 12% and 15% FiO2 did not alter flicker-induced retinal haemodynamic changes. The present study indicates that at a comparable increase in blood flow the increase in oxygen extraction in the retina is larger than in the brain. During systemic hyperoxia the blood flow and oxygen extraction responses to neural stimulation are augmented. The underlying mechanism is unknown. PMID:26672758

  15. Influence of DNA extraction methods on relative telomere length measurements and its impact on epidemiological studies

    PubMed Central

    Raschenberger, Julia; Lamina, Claudia; Haun, Margot; Kollerits, Barbara; Coassin, Stefan; Boes, Eva; Kedenko, Ludmilla; Köttgen, Anna; Kronenberg, Florian

    2016-01-01

    Measurement of telomere length is widely used in epidemiologic studies. Insufficient standardization of the measurement processes has, however, complicated the comparison of results between studies. We aimed to investigate whether DNA extraction methods have an influence on measured values of relative telomere length (RTL) and whether this has consequences for epidemiological studies. We performed four experiments with RTL measurement in quadruplicate by qPCR using DNA extracted with different methods: 1) a standardized validation experiment including three extraction methods (magnetic-particle-method EZ1, salting-out-method INV, phenol-chloroform-isoamyl-alcohol PCI) each in the same 20 samples demonstrated pronounced differences in RTL with lowest values with EZ1 followed by INV and PCI-isolated DNA; 2) a comparison of 307 samples from an epidemiological study showing EZ1-measurements 40% lower than INV-measurements; 3) a matching approach of two similar non-diseased control groups including 143 pairs of subjects revealed significantly shorter RTL in EZ1 than INV-extracted DNA (0.844 ± 0.157 vs. 1.357 ± 0.242); 4) an association analysis of RTL with prevalent cardiovascular disease detected a stronger association with INV than with EZ1-extracted DNA. In summary, DNA extraction methods have a pronounced influence on the measured RTL-values. This might result in spurious or lost associations in epidemiological studies under certain circumstances. PMID:27138987
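
    Relative telomere length from qPCR is typically the telomere-to-single-copy-gene (T/S) ratio computed by the 2^-ΔΔCt method. A sketch assuming 100% PCR efficiency; the Ct values in the test are illustrative:

```python
def relative_telomere_length(ct_t, ct_s, ct_t_ref, ct_s_ref):
    """T/S ratio by the 2^-ddCt method: telomere (T) and single-copy
    gene (S) Ct values for a sample, normalized to a reference DNA."""
    ddct = (ct_t - ct_s) - (ct_t_ref - ct_s_ref)
    return 2.0 ** -ddct
```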

  16. Influence of DNA extraction methods on relative telomere length measurements and its impact on epidemiological studies.

    PubMed

    Raschenberger, Julia; Lamina, Claudia; Haun, Margot; Kollerits, Barbara; Coassin, Stefan; Boes, Eva; Kedenko, Ludmilla; Köttgen, Anna; Kronenberg, Florian

    2016-01-01

    Measurement of telomere length is widely used in epidemiologic studies. Insufficient standardization of the measurements processes has, however, complicated the comparison of results between studies. We aimed to investigate whether DNA extraction methods have an influence on measured values of relative telomere length (RTL) and whether this has consequences for epidemiological studies. We performed four experiments with RTL measurement in quadruplicate by qPCR using DNA extracted with different methods: 1) a standardized validation experiment including three extraction methods (magnetic-particle-method EZ1, salting-out-method INV, phenol-chloroform-isoamyl-alcohol PCI) each in the same 20 samples demonstrated pronounced differences in RTL with lowest values with EZ1 followed by INV and PCI-isolated DNA; 2) a comparison of 307 samples from an epidemiological study showing EZ1-measurements 40% lower than INV-measurements; 3) a matching-approach of two similar non-diseased control groups including 143 pairs of subjects revealed significantly shorter RTL in EZ1 than INV-extracted DNA (0.844 ± 0.157 vs. 1.357 ± 0.242); 4) an association analysis of RTL with prevalent cardiovascular disease detected a stronger association with INV than with EZ1-extracted DNA. In summary, DNA extraction methods have a pronounced influence on the measured RTL-values. This might result in spurious or lost associations in epidemiological studies under certain circumstances. PMID:27138987

  17. The Application of Thermal Plasma to Extraction Metallurgy and Related Fields

    NASA Technical Reports Server (NTRS)

    Akashi, K.

    1980-01-01

    Various applications of thermal plasma to extraction metallurgy and related fields are surveyed, chiefly on the basis of documents published during the past two or three years. Applications to melting and smelting, to thermal decomposition, to reduction, to manufacturing of inorganic compounds, and to other fields are considered.

  18. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
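
    The candidate-detection phase can be approximated with a simple recognizer for primer-length runs of DNA letters. A toy stand-in for the paper's finite-state-machine recognizers; the 15-35 nt length thresholds are illustrative, not the paper's values:

```python
import re

# Candidate primer/probe sequences: uninterrupted runs of DNA letters
# of primer-like length.
CANDIDATE_RE = re.compile(r"\b[ACGTUacgtu]{15,35}\b")

def extract_primer_candidates(text):
    """Return upper-cased candidate sequences found in free text."""
    return [m.group(0).upper() for m in CANDIDATE_RE.finditer(text)]
```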

  19. Progress Toward Automated Cost Estimation

    NASA Technical Reports Server (NTRS)

    Brown, Joseph A.

    1992-01-01

    Report discusses efforts to develop standard system of automated cost estimation (ACE) and computer-aided design (CAD). Advantage of system is time saved and accuracy enhanced by automating extraction of quantities from design drawings, consultation of price lists, and application of cost and markup formulas.
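
    The core calculation, quantities taken off drawings, priced from a list, and marked up, can be sketched as follows (the price list and markup are invented for illustration):

```python
# Invented price list; a real ACE system consults maintained price lists
# and cost/markup formulas tied to quantities extracted from drawings.
PRICE_LIST = {"bolt": 0.35, "bracket": 4.20, "panel": 112.00}

def estimate_cost(quantities, markup=0.15):
    """Quantity take-off priced from the list, plus a flat markup."""
    base = sum(PRICE_LIST[item] * qty for item, qty in quantities.items())
    return round(base * (1.0 + markup), 2)
```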

  20. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  1. PPInterFinder--a mining tool for extracting causal relations on human proteins from literature.

    PubMed

    Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar

    2013-01-01

    One of the most common and challenging problems in biomedical text mining is to mine protein-protein interactions (PPIs) from MEDLINE abstracts and full-text research articles because PPIs play a major role in understanding the various biological processes and the impact of proteins in diseases. We implemented PPInterFinder, a web-based text mining tool to extract human PPIs from biomedical literature. PPInterFinder uses relation keyword co-occurrences with protein names to extract information on PPIs from MEDLINE abstracts and consists of three phases. First, it identifies the relation keyword using a parser with Tregex and a relation keyword dictionary. Next, it automatically identifies the candidate PPI pairs with a set of rules related to PPI recognition. Finally, it extracts the relations by matching the sentence with a set of 11 specific patterns based on the syntactic nature of the PPI pair. We find that PPInterFinder is capable of predicting PPIs with an accuracy of 66.05% on the AIMED corpus and outperforms most of the existing systems. DATABASE URL: http://www.biomining-bu.in/ppinterfinder/ PMID:23325628
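
    A toy version of keyword co-occurrence relation extraction in the spirit of the first two phases. The dictionaries and the single pattern below are invented; the real system uses a large relation keyword dictionary, Tregex-based parsing, and 11 syntactic patterns:

```python
import re

RELATION_WORDS = {"interacts", "binds", "phosphorylates", "activates"}
PROTEINS = {"BRCA1", "TP53", "MDM2", "AKT1"}

def extract_ppi(sentence):
    """Return (protein, keyword, protein) if a relation keyword co-occurs
    with two known protein names, else None."""
    words = re.findall(r"[A-Za-z0-9]+", sentence)
    prots = [w for w in words if w in PROTEINS]
    keyword = next((w.lower() for w in words if w.lower() in RELATION_WORDS), None)
    if keyword and len(prots) >= 2:
        return (prots[0], keyword, prots[1])
    return None
```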

  2. Antimutagenicity of Methanolic Extracts from Anemopsis californica in Relation to Their Antioxidant Activity

    PubMed Central

    Del-Toro-Sánchez, Carmen Lizette; Bautista-Bautista, Nereyda; Blasco-Cabal, José Luis; Gonzalez-Ávila, Marisela; Gutiérrez-Lomelí, Melesio; Arriaga-Alba, Myriam

    2014-01-01

    Anemopsis californica has been used empirically to treat infectious diseases. However, there are no antimutagenic evaluation reports on this plant. The present study evaluated the antioxidant activity in relation to the mutagenic and antimutagenic activity properties of leaf (LME) and stem (SME) methanolic extracts of A. californica collected in the central Mexican state of Querétaro. Antioxidant properties and total phenols of extracts were evaluated using DPPH (1,1-diphenyl-2-picrylhydrazyl) and Folin-Ciocalteu methods, respectively. Mutagenicity was evaluated using the Ames test employing Salmonella enterica serovar Typhimurium strains (TA98, TA100, and TA102), with and without an aroclor 1254 (S9 mixture). Antimutagenesis was performed against mutations induced on the Ames test with MNNG, 2AA, or 4NQO. SME presented the highest antioxidant capacity and total phenolic content. None of the extracts exhibited mutagenicity in the Ames test. The extracts produced a significant reduction in 2AA-induced mutations in S. typhimurium TA98. In both extracts, mutagenesis induced by 4NQO or methyl-N′-nitro-N-nitrosoguanidine (MNNG) was reduced only if the exposure of strains was <10 μg/Petri dish. A. californica's antioxidant properties and its capacity to reduce point mutations render it suitable to enhance medical cancer treatments. The significant antimutagenic effect against 2AA suggests that its consumption would provide protection against carcinogenic polycyclic aromatic compounds. PMID:25152760

  3. Automated solvent concentrator

    NASA Technical Reports Server (NTRS)

    Griffith, J. S.; Stuart, J. L.

    1976-01-01

    Designed for automated drug identification system (AUDRI), device increases concentration by a factor of 100. Sample is first filtered, removing particulate contaminants and reducing water content of sample. Sample is extracted from filtered residue by specific solvent. Concentrator provides input material to analysis subsystem.

  4. Determination of talinolol in human plasma using automated on-line solid phase extraction combined with atmospheric pressure chemical ionization tandem mass spectrometry.

    PubMed

    Bourgogne, Emmanuel; Grivet, Chantal; Hopfgartner, Gérard

    2005-06-01

    A specific LC-MS/MS assay was developed for the automated determination of talinolol in human plasma, using an on-line solid phase extraction system (Prospekt 2) combined with atmospheric pressure chemical ionization (APCI) tandem mass spectrometry. The method involved simple precipitation of plasma proteins with perchloric acid (containing propranolol as the internal standard, IS) and injection of the supernatant onto a C8 End Capped (10 mmx2 mm) cartridge without any evaporation step. Using the back-flush mode, the analytes were transferred onto an analytical column (XTerra C18, 50 mmx4.6 mm) for chromatographic separation and mass spectrometry detection. One of the particularities of the assay is that the SPE cartridge is used as a column switching device and not as an SPE cartridge. Therefore, the same SPE cartridge could be used more than 28 times, significantly reducing the analysis cost. APCI ionization was selected to overcome any potential matrix suppression effects because the analyte and IS co-eluted. The mean accuracy and precision in the concentration range 2.5-200 ng/mL were found to be 103% and 7.4%, respectively. The data were assessed from QC samples during the validation phase of the assay. The lower limit of quantification was 2.5 ng/mL, using a 250 microL plasma aliquot. The LC-MS/MS method provided the requisite selectivity, sensitivity, robustness, accuracy, and precision to assess pharmacokinetics of the compound in several hundred human plasma samples. PMID:15866498

  5. Extracting the frequencies of the pinna spectral notches in measured head related impulse responses

    NASA Astrophysics Data System (ADS)

    Raykar, Vikas C.; Duraiswami, Ramani; Yegnanarayana, B.

    2005-07-01

    The head related impulse response (HRIR) characterizes the auditory cues created by scattering of sound off a person's anatomy. The experimentally measured HRIR depends on several factors such as reflections from body parts (torso, shoulder, and knees), head diffraction, and reflection/diffraction effects due to the pinna. Structural models (Algazi et al., 2002; Brown and Duda, 1998) seek to establish direct relationships between the features in the HRIR and the anatomy. While there is evidence that particular features in the HRIR can be explained by anthropometry, the creation of such models from experimental data is hampered by the fact that the extraction of the features in the HRIR is not automatic. One of the prominent features observed in the HRIR, and one that has been shown to be important for elevation perception, is the set of deep spectral notches attributed to the pinna. In this paper we propose a method to robustly extract the frequencies of the pinna spectral notches from the measured HRIR, distinguishing them from other confounding features. The method also extracts the resonances described by Shaw (1997). The techniques are applied to the publicly available CIPIC HRIR database (Algazi et al., 2001c). The extracted notch frequencies are related to the physical dimensions and shape of the pinna.
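
    As a rough illustration of the notch-picking step described in this abstract, the sketch below finds local minima in a magnitude spectrum (in dB) that dip below a depth threshold. The function name, threshold value, and toy spectrum are invented for illustration; the paper's actual method additionally separates pinna notches from confounding features and extracts resonances.

```python
def find_spectral_notches(mag_db, depth_db=-10.0):
    """Return indices of local minima in a magnitude spectrum (in dB)
    that dip below depth_db -- a simple stand-in for notch picking."""
    notches = []
    for i in range(1, len(mag_db) - 1):
        if (mag_db[i] < mag_db[i - 1] and mag_db[i] < mag_db[i + 1]
                and mag_db[i] <= depth_db):
            notches.append(i)
    return notches

# A toy spectrum with one deep dip (index 3) and one shallow dip (index 7).
spectrum = [0.0, -2.0, -5.0, -15.0, -4.0, -1.0, -3.0, -6.0, -2.0, 0.0]
print(find_spectral_notches(spectrum))  # -> [3]
```

    In a real pipeline the spectrum would come from the FFT of a measured HRIR, and the notch indices would be converted to frequencies via the sampling rate.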

  6. Extraction of percept-related induced local field potential during spontaneously reversing perception.

    PubMed

    Wang, Zhisong; Logothetis, Nikos K; Liang, Hualou

    2009-01-01

    The question of how perception arises from neuronal activity in the visual cortex is of fundamental importance in cognitive neuroscience. To address this question, we adopt a unique experimental paradigm in which bistable structure-from-motion (SFM) stimuli are employed to dissociate the visual input from perception while monitoring the cortical neural activity called local field potential (LFP). Consequently, the stimulus-evoked activity of LFP is not related to perception but the oscillatory induced activity of LFP may be percept-related. In this paper we focus on extracting the percept-related features of the induced activity from LFP in a monkey's visual cortex for decoding its bistable structure-from-motion perception. We first estimate the stimulus-evoked activity via a wavelet-based method and remove it from the single-trial LFP. We then use the common spatial patterns (CSP) approach to design spatial filters to extract the percept-related features from the remaining induced activity. We exploit the linear discriminant analysis (LDA) classifier on the extracted features to decode the reported perception on a single-trial basis. We demonstrate that our approach has excellent performance in estimating the stimulus-evoked activity, outperforming the Wiener filter, least mean square (LMS), and a local regression method called the locally weighted scatterplot smoothing (LOWESS), and that our approach is effective in extracting the discriminative features of the percept-related induced activity from LFP, which leads to excellent decoding performance. We also discover that the enhanced gamma band synchronization and reduced alpha band desynchronization may be the underpinnings of the induced activity. PMID:19608383
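
    The common spatial patterns step described above can be sketched in a few lines of NumPy. This is the generic textbook construction (whitening of the composite covariance followed by eigendecomposition), not the authors' code, and the two-channel toy data are invented for illustration.

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Common spatial patterns via whitening + eigendecomposition.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, U = np.linalg.eigh(Ca + Cb)
    P = np.diag(evals ** -0.5) @ U.T       # whitening transform
    lam, V = np.linalg.eigh(P @ Ca @ P.T)  # eigenvalues sorted ascending
    W = V.T @ P                            # rows are spatial filters
    return W[::-1]                         # first row maximizes class-A variance

rng = np.random.default_rng(0)
# Toy data: channel 0 is high-variance in class A, channel 1 in class B.
a = rng.normal(size=(30, 2, 200)) * np.array([3.0, 1.0])[None, :, None]
b = rng.normal(size=(30, 2, 200)) * np.array([1.0, 3.0])[None, :, None]
W = csp_filters(a, b)
print(abs(W[0, 0]) > abs(W[0, 1]))  # first filter weights channel 0 most
```

    The log-variances of the CSP-filtered signals would then be fed to an LDA classifier, as in the decoding pipeline the abstract describes.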

  7. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction.

    PubMed

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease-lab test relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledge base. Disease and lab test concepts are identified using MetaMap, and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40% when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations in the on-line resources, indicating that they are complementary to existing reference books for building a comprehensive disease-lab test relation knowledge base. PMID:26306271

  8. A crowdsourcing workflow for extracting chemical-induced disease relations from free text

    PubMed Central

    Li, Tong Shu; Bravo, Àlex; Furlong, Laura I.; Good, Benjamin M.; Su, Andrew I.

    2016-01-01

    Relations between chemicals and diseases are one of the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex Database URL: https://github.com/SuLab/crowd_cid_relex PMID:27087308
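
    The aggregation rule described above (a candidate relation is predicted true when at least four of five workers vote for it) and the reported precision/recall/F-score can be sketched as follows. The relation IDs and helper names are hypothetical, not part of the published workflow code.

```python
from collections import Counter

def aggregate_votes(judgments, threshold=4):
    """Predict a candidate relation as true when it receives at least
    `threshold` positive votes (the paper uses 4 of 5 workers)."""
    tally = Counter(rel for rel, vote in judgments if vote)
    return {rel for rel, n in tally.items() if n >= threshold}

def precision_recall_f1(predicted, gold):
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy example: two candidate chemical-induced disease relations, five votes each.
judgments = ([("C1-D1", True)] * 4 + [("C1-D1", False)]
             + [("C2-D2", True)] * 2 + [("C2-D2", False)] * 3)
pred = aggregate_votes(judgments)
print(pred)  # -> {'C1-D1'}
print(precision_recall_f1(pred, {"C1-D1", "C2-D2"}))
```

    Here the second relation gets only two positive votes, so it is rejected, giving precision 1.0 and recall 0.5 against the toy gold set.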

  9. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction

    PubMed Central

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease-lab test relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledge base. Disease and lab test concepts are identified using MetaMap, and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40% when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations in the on-line resources, indicating that they are complementary to existing reference books for building a comprehensive disease-lab test relation knowledge base. PMID:26306271

  10. A crowdsourcing workflow for extracting chemical-induced disease relations from free text.

    PubMed

    Li, Tong Shu; Bravo, Àlex; Furlong, Laura I; Good, Benjamin M; Su, Andrew I

    2016-01-01

    Relations between chemicals and diseases are one of the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex Database URL: https://github.com/SuLab/crowd_cid_relex. PMID:27087308

  11. Achyrocline satureioides (Lam.) D.C. Hydroalcoholic Extract Inhibits Neutrophil Functions Related to Innate Host Defense

    PubMed Central

    Barioni, Eric Diego; Machado, Isabel Daufenback; Rodrigues, Stephen Fernandes de Paula; Ferraz-de-Paula, Viviane; Wagner, Theodoro Marcel; Cogliati, Bruno; Corrêa dos Santos, Matheus; Machado, Marina da Silva; de Andrade, Sérgio Faloni; Niero, Rivaldo; Farsky, Sandra Helena Poliselli

    2013-01-01

    Achyrocline satureioides (Lam.) D.C. is a herb native to South America, and its inflorescences are popularly employed to treat inflammatory diseases. Here, the effects of the in vivo actions of the hydroalcoholic extract obtained from inflorescences of A. satureioides on neutrophil trafficking into inflamed tissue were investigated. Male Wistar rats were orally treated with A. satureioides extract, and inflammation was induced one hour later by lipopolysaccharide injection into the subcutaneous tissue. The number of leukocytes and the amount of chemotactic mediators were quantified in the inflammatory exudate, and adhesion molecule and toll-like receptor 4 (TLR-4) expressions and phorbol-myristate-acetate- (PMA-) stimulated oxidative burst were quantified in circulating neutrophils. Leukocyte-endothelial interactions were quantified in the mesentery tissue. Enzymes and tissue morphology of the liver and kidney were evaluated. Treatment with A. satureioides extract reduced neutrophil influx and secretion of leukotriene B4 and CINC-1 in the exudates, the number of rolling and adhered leukocytes in the mesentery postcapillary venules, neutrophil L-selectin, β2-integrin and TLR-4 expression, and oxidative burst, but did not cause an alteration in the morphology and activities of liver and kidney. Together, the data show that A. satureioides extract inhibits neutrophil functions related to the innate response and does not cause systemic toxicity. PMID:23476704

  12. 77 FR 123 - Final Reissuance of General NPDES Permits (GP) for Facilities Related to Oil and Gas Extraction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-03

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Final Reissuance of General NPDES Permits (GP) for Facilities Related to Oil and Gas Extraction... permit. SUMMARY: A GP regulating the activities of facilities related to oil and gas extraction on...

  13. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance. PMID:15117748
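
    YAdumper itself is a Java application driven by an XML template; as a rough stdlib-Python analogue of the underlying idea (streaming relational rows into structured XML), one might write something like the following. The table name, columns, and sample rows are invented for illustration.

```python
import sqlite3
import xml.etree.ElementTree as ET

def dump_table_to_xml(conn, table, root_tag="rows"):
    """Serialize a table as XML: one <row> element per record,
    one child element per column (a toy template-free dump)."""
    cur = conn.execute(f"SELECT * FROM {table}")  # table name is trusted here
    cols = [d[0] for d in cur.description]
    root = ET.Element(root_tag, table=table)
    for record in cur:
        row = ET.SubElement(root, "row")
        for name, value in zip(cols, record):
            ET.SubElement(row, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gene (id INTEGER, symbol TEXT)")
conn.executemany("INSERT INTO gene VALUES (?, ?)", [(1, "TP53"), (2, "BRCA1")])
xml = dump_table_to_xml(conn, "gene")
print(xml)
```

    A production dumper would stream output incrementally rather than build the whole tree in memory, which is precisely the memory concern the YAdumper paper addresses.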

  14. A knowledge-driven approach to extract disease-related biomarkers from the literature.

    PubMed

    Bravo, À; Cases, M; Queralt-Rosinach, N; Sanz, F; Furlong, L I

    2014-01-01

    The biomedical literature represents a rich source of biomarker information. However, both the size of literature databases and their lack of standardization hamper the automatic exploitation of the information contained in these resources. Text mining approaches have proven to be useful for the exploitation of information contained in scientific publications. Here, we show that a knowledge-driven text mining approach can exploit a large literature database to extract a dataset of biomarkers related to diseases covering all therapeutic areas. Our methodology takes advantage of the annotation of MEDLINE publications pertaining to biomarkers with MeSH terms, narrowing the search to specific publications and, therefore, minimizing the false positive ratio. It is based on a dictionary-based named entity recognition system and a relation extraction module. The application of this methodology resulted in the identification of 131,012 disease-biomarker associations between 2,803 genes and 2,751 diseases, and represents a valuable knowledge base for those interested in disease-related biomarkers. Additionally, we present a bibliometric analysis of the journals reporting biomarker-related information during the last 40 years. PMID:24839601
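
    The dictionary-based named entity recognition component mentioned above can be sketched as a greedy longest-match lookup over tokens. The lexicon entries and entity labels here are invented for illustration, and a real system would normalize mentions to ontology identifiers rather than lowercased strings.

```python
def dictionary_ner(text, lexicon):
    """Greedy longest-match dictionary lookup over whitespace tokens,
    a minimal stand-in for a dictionary-based NER module."""
    tokens = text.split()
    mentions, i = [], 0
    while i < len(tokens):
        match = None
        for j in range(len(tokens), i, -1):  # prefer the longest span
            candidate = " ".join(tokens[i:j]).lower()
            if candidate in lexicon:
                match = (candidate, lexicon[candidate])
                i = j
                break
        if match:
            mentions.append(match)
        else:
            i += 1
    return mentions

lexicon = {"breast cancer": "DISEASE", "brca1": "GENE", "biomarker": "OTHER"}
print(dictionary_ner("BRCA1 is a biomarker for breast cancer", lexicon))
# -> [('brca1', 'GENE'), ('biomarker', 'OTHER'), ('breast cancer', 'DISEASE')]
```

    The recognized gene and disease mentions would then be passed to the relation extraction module to produce candidate disease-biomarker pairs.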

  15. Drought Resilience of Water Supplies for Shale Gas Extraction and Related Power Generation in Texas

    NASA Astrophysics Data System (ADS)

    Reedy, R. C.; Scanlon, B. R.; Nicot, J. P.; Uhlman, K.

    2014-12-01

    There is considerable concern about water availability to support energy production in Texas, particularly considering that many of the shale plays are in semiarid areas of Texas and the state experienced the most extreme drought on record in 2011. The Eagle Ford shale play provides an excellent case study. Hydraulic fracturing water use for shale gas extraction in the play totaled ~ 12 billion gallons (bgal) in 2012, representing ~7 - 10% of total water use in the 16 county play area. The dominant source of water is groundwater which is not highly vulnerable to drought from a recharge perspective because water is primarily stored in the confined portion of aquifers that were recharged thousands of years ago. Water supply drought vulnerability results primarily from increased water use for irrigation. Irrigation water use in the Eagle Ford play was 30 billion gallons higher in the 2011 drought year relative to 2010. Recent trends toward increased use of brackish groundwater for shale gas extraction in the Eagle Ford also reduce pressure on fresh water resources. Evaluating the impacts of natural gas development on water resources should consider the use of natural gas in power generation, which now represents 50% of power generation in Texas. Water consumed in extracting the natural gas required for power generation is equivalent to ~7% of the water consumed in cooling these power plants in the state. However, natural gas production from shale plays can be overall beneficial in terms of water resources in the state because natural gas combined cycle power generation decreases water consumption by ~60% relative to traditional coal, nuclear, and natural gas plants that use steam turbine generation. This reduced water consumption enhances drought resilience of power generation in the state. In addition, natural gas combined cycle plants provide peaking capacity that complements increasing renewable wind generation which has no cooling water requirement. 
However, water

  16. Orbital transfer vehicle launch operations study: Automated technology knowledge base, volume 4

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A simplified retrieval strategy for compiling automation-related bibliographies from NASA/RECON is presented. Two subsets of NASA Thesaurus subject terms were extracted: a primary list, which is used to obtain an initial set of citations; and a secondary list, which is used to limit or further specify a large initial set of citations. These subject term lists are presented in Appendix A as the Automated Technology Knowledge Base (ATKB) Thesaurus.
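
    The two-list strategy above (a primary term list to retrieve an initial citation set, a secondary list to narrow it) can be sketched as simple set operations. The citation IDs and subject terms below are hypothetical, not taken from the ATKB Thesaurus.

```python
def refine_citations(citations, primary, secondary=None):
    """Two-pass retrieval: primary terms select an initial citation set;
    secondary terms (if given) further restrict it.
    `citations` maps citation IDs to sets of subject terms."""
    hits = {cid for cid, terms in citations.items() if terms & primary}
    if secondary:
        hits = {cid for cid in hits if citations[cid] & secondary}
    return hits

citations = {
    "N86-1": {"ROBOTICS", "LAUNCH OPERATIONS"},
    "N86-2": {"ROBOTICS", "MATERIALS"},
    "N86-3": {"PROPULSION"},
}
print(refine_citations(citations, {"ROBOTICS"}))
print(refine_citations(citations, {"ROBOTICS"}, {"LAUNCH OPERATIONS"}))  # -> {'N86-1'}
```

    The first call returns both robotics records; adding the secondary term keeps only the citation also tagged with it, mirroring how a large initial NASA/RECON result set would be specialized.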

  17. Automated Lattice Perturbation Theory

    SciTech Connect

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  18. Ginseng Berry Extract Supplementation Improves Age-Related Decline of Insulin Signaling in Mice

    PubMed Central

    Seo, Eunhui; Kim, Sunmi; Lee, Sang Jun; Oh, Byung-Chul; Jun, Hee-Sook

    2015-01-01

    The aim of this study was to evaluate the effects of ginseng berry extract on insulin sensitivity and associated molecular mechanisms in aged mice. C57BL/6 mice (15 months old) were maintained on a regular diet (CON) or a regular diet supplemented with 0.05% ginseng berry extract (GBD) for 24 or 32 weeks. GBD-fed mice showed significantly lower serum insulin levels (p = 0.016) and insulin resistance scores (HOMA-IR) (p = 0.012), suggesting that GBD improved insulin sensitivity. Pancreatic islet hypertrophy was also ameliorated in GBD-fed mice (p = 0.007). Protein levels of tyrosine phosphorylated insulin receptor substrate (IRS)-1 (p = 0.047), and protein kinase B (AKT) (p = 0.037), were up-regulated in the muscle of insulin-injected GBD-fed mice compared with CON-fed mice. The expressions of forkhead box protein O1 (FOXO1) (p = 0.036) and peroxisome proliferator-activated receptor gamma (PPARγ) (p = 0.032), which are known as aging- and insulin resistance-related genes, were also increased in the muscle of GBD-fed mice. We conclude that ginseng berry extract consumption might increase activation of IRS-1 and AKT, contributing to the improvement of insulin sensitivity in aged mice. PMID:25912041

  19. Ginseng berry extract supplementation improves age-related decline of insulin signaling in mice.

    PubMed

    Seo, Eunhui; Kim, Sunmi; Lee, Sang Jun; Oh, Byung-Chul; Jun, Hee-Sook

    2015-04-01

    The aim of this study was to evaluate the effects of ginseng berry extract on insulin sensitivity and associated molecular mechanisms in aged mice. C57BL/6 mice (15 months old) were maintained on a regular diet (CON) or a regular diet supplemented with 0.05% ginseng berry extract (GBD) for 24 or 32 weeks. GBD-fed mice showed significantly lower serum insulin levels (p = 0.016) and insulin resistance scores (HOMA-IR) (p = 0.012), suggesting that GBD improved insulin sensitivity. Pancreatic islet hypertrophy was also ameliorated in GBD-fed mice (p = 0.007). Protein levels of tyrosine phosphorylated insulin receptor substrate (IRS)-1 (p = 0.047), and protein kinase B (AKT) (p = 0.037), were up-regulated in the muscle of insulin-injected GBD-fed mice compared with CON-fed mice. The expressions of forkhead box protein O1 (FOXO1) (p = 0.036) and peroxisome proliferator-activated receptor gamma (PPARγ) (p = 0.032), which are known as aging- and insulin resistance-related genes, were also increased in the muscle of GBD-fed mice. We conclude that ginseng berry extract consumption might increase activation of IRS-1 and AKT, contributing to the improvement of insulin sensitivity in aged mice. PMID:25912041

  20. Free nerve ending density on skin extracted by circumcision and its relation to premature ejaculation.

    PubMed

    Malkoc, Ercan; Ates, Ferhat; Tekeli, Hakan; Kurt, Bulent; Turker, Turker; Basal, Seref

    2012-01-01

    Many studies have shown that skin tissue extracted by circumcision can cause differences in sexual function, especially at the time of ejaculation. Changes in penile skin sensitivity and sexual satisfaction after circumcision, particularly with regard to premature ejaculation (PE), are debated. Furthermore, most of these studies rely on questionnaires. The extracted free nerve endings (FNE) of the foreskin, which can detect temperature, mechanical stimuli (touch, pressure, stretch) or pain (nociception), have not been researched. Our aim is to determine FNEs in foreskin and their effects on sexual function, especially PE. This prospective study was done on adults who voluntarily applied to be circumcised between September 2010 and October 2011. The ejaculation latency times (ELT) before circumcision were assessed, and a PE diagnostic tool (PEDT) form was filled out by the urologist according to the answers given by the volunteers. The proximal and distal ends of the foreskin were marked before circumcision, and the extracted foreskin was sent to the pathology department to determine FNEs. Twenty volunteers (average age 21.25 ± 0.44 years) were included in the study. The average ELT was 103.55 ± 68.39 seconds, and the average PE score was 4.35 ± 3.13. Proximal, middle, and distal tip nerve densities were compared. Proximal and distal (P = .003) and proximal and middle (P = .011) segments differed from each other, whereas middle and distal were similar (P = .119). There were no correlations between PEDT scores and total nerve ending numbers (r = .018, P = .942). There were also no correlations between mean ELT and PEDT scores (r = .054, P = .822). The tissue extracted by circumcision has intensive FNEs, yet FNE intensity has no relation to PE. PMID:22604629

  1. On the effects of a plant extract of Orthosiphon stamineus on sebum-related skin imperfections.

    PubMed

    Vogelgesang, B; Abdul-Malak, N; Reymermier, C; Altobelli, C; Saget, J

    2011-02-01

    Overproduction of sebum is very common and results in an undesirable oily, shiny complexion with enlarged pores. Sebum secretion is basically under the control of 5-α reductase, and more particularly under that of the type 1 isozyme. But it is also highly sensitive to environmental factors such as temperature, humidity and food. Moreover, in Asia, the ideal of flawless facial skin makes oily skin a major concern for Asian women. We identified Orthosiphon stamineus leaf extract as an interesting ingredient for reducing the oily appearance of skin thanks to its ability to reduce 5-α reductase type 1 expression in normal human epidermal keratinocytes in vitro. This was confirmed ex vivo, where Orthosiphon stamineus leaf extract was shown to reduce 5-α reductase activity as well as the production of squalene, one of the main components of sebum that was used as a tracer of sebum. To evaluate the efficacy of Orthosiphon stamineus leaf extract at reducing sebum-related skin imperfections in vivo, we performed two different clinical studies, one in France on a panel of Caucasian volunteers and the other one in Thailand on a panel of Asian volunteers. Using instrumental techniques as well as clinical evaluation and self-evaluation, we could highlight that an O/W cosmetic formula containing 2% of Orthosiphon stamineus leaf extract could visibly reduce the oily appearance of skin as well as the size of pores, thus leading to a significant improvement of complexion evenness and radiance. Overall, the results obtained were better than those observed with the same formula containing 1% of zinc gluconate, an ingredient frequently used in oily skin care products. PMID:20807263

  2. Effect of Selenium-Enriched Agaricus bisporus (Higher Basidiomycetes) Extracts, Obtained by Pressurized Water Extraction, on the Expression of Cholesterol Homeostasis Related Genes by Low-Density Array.

    PubMed

    Gil-Ramírez, Alicia; Soler-Rivas, Cristina; Rodriguez-Casado, Arantxa; Ruiz-Rodríguez, Alejandro; Reglero, Guillermo; Marín, Francisco Ramón

    2015-01-01

    Culinary-medicinal mushrooms are able to lower blood cholesterol levels in animal models by different mechanisms. They might impair the endogenous cholesterol synthesis and exogenous cholesterol absorption during digestion. Mushroom extracts, obtained using pressurized water extractions (PWE) from Agaricus bisporus basidiomes, supplemented or not supplemented with selenium, were applied to HepG2 cell cultures to study the expression of 19 genes related to cholesterol homeostasis by low-density arrays (LDA). Only the PWE fractions obtained at 25°C showed 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) inhibitory activity. Besides the enzymatic inhibition, PWE extracts may downregulate some of the key genes involved in cholesterol homeostasis, such as the squalene synthase gene (FDFT1), since its mRNA expression falls by one third of its initial value. In summary, A. bisporus extracts may also modulate biological cholesterol levels by molecular mechanisms beyond the enzymatic route previously reported. PMID:25746616

  3. PPI-IRO: a two-stage method for protein-protein interaction extraction based on interaction relation ontology.

    PubMed

    Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Wang, Xiu-Jie; Su, Ya-Ru; Li, Jinyan

    2014-01-01

    Mining Protein-Protein Interactions (PPIs) from the fast-growing biomedical literature resources has been proven an effective approach for the identification of biological regulatory networks. This paper presents a novel method based on the idea of an Interaction Relation Ontology (IRO), which specifies and organises words describing various protein interaction relationships. Our method extracts PPIs in two stages. First, IRO is applied in a binary classifier to determine whether a sentence contains a relation or not. Then, IRO guides PPI extraction by building sentence dependency parse trees. Comprehensive and quantitative evaluations and detailed analyses demonstrate the significant performance of IRO on relation sentence classification and PPI extraction. Our PPI extraction method yielded a recall of around 80% and 90% and an F1 of around 54% and 66% on the AIMed and BioInfer corpora, respectively, which is superior to most existing extraction methods. PMID:25757257
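
    The two-stage pipeline can be sketched as follows: stage 1 filters sentences using a flat list of interaction words (standing in for the IRO), and stage 2 pairs known protein names that co-occur in surviving sentences. This is a deliberately naive illustration; the paper's stage 2 uses dependency parse trees, and all names below are invented.

```python
import re

INTERACTION_TERMS = {"interacts", "binds", "phosphorylates", "activates"}

def stage1_has_relation(sentence):
    """Stage 1: keep only sentences containing an interaction word
    (a flat word list standing in for the Interaction Relation Ontology)."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return any(w in INTERACTION_TERMS for w in words)

def stage2_extract_pairs(sentence, proteins):
    """Stage 2: pair every two known protein names co-occurring in a
    relation sentence (the paper uses dependency parsing instead)."""
    found = [p for p in proteins if p in sentence]
    return [(a, b) for i, a in enumerate(found) for b in found[i + 1:]]

proteins = ["MDM2", "TP53", "AKT1"]
sents = ["MDM2 binds TP53 in the nucleus.", "TP53 was sequenced in 1989."]
pairs = [stage2_extract_pairs(s, proteins) for s in sents if stage1_has_relation(s)]
print(pairs)  # -> [[('MDM2', 'TP53')]]
```

    The second sentence mentions a protein but no interaction word, so stage 1 discards it before any pairing is attempted, which is the point of the classification stage.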

  4. Thematic orders and the comprehension of subject-extracted relative clauses in Mandarin Chinese

    PubMed Central

    Lin, Chien-Jer Charles

    2015-01-01

    This study investigates the comprehension of three kinds of subject-extracted relative clauses (SRs) in Mandarin Chinese: standard SRs, relative clauses involving the disposal ba construction (“disposal SRs”), and relative clauses involving the long passive bei constructions (“passive SRs”). In a self-paced reading experiment, the regions before the relativizer (where the sentential fragments are temporarily ambiguous) showed reading patterns consistent with expectation-based incremental processing: standard SRs, with the highest constructional frequency and the least complex syntactic structure, were processed faster than the other two variants. However, in the regions after the relativizer and the head noun where the existence of a relative clause is unambiguously indicated, a top-down global effect of thematic ordering was observed: passive SRs, whose thematic role order conforms to the canonical thematic order of Chinese, were read faster than both the standard SRs and the disposal SRs. Taken together, these results suggest that two expectation-based processing factors are involved in the comprehension of Chinese relative clauses, including both the structural probabilities of pre-relativizer constituents and the overall surface thematic orders in the relative clauses. PMID:26441697

  5. A knowledge-poor approach to chemical-disease relation extraction

    PubMed Central

    Alam, Firoj; Corazza, Anna; Lavelli, Alberto; Zanoli, Roberto

    2016-01-01

    The article describes a knowledge-poor approach to the task of extracting Chemical-Disease Relations from PubMed abstracts. A first version of the approach was applied during the participation in BioCreative V track 3, both in Disease Named Entity Recognition and Normalization (DNER) and in Chemical-induced diseases (CID) relation extraction. For both tasks, we have adopted a general-purpose approach based on machine learning techniques integrated with a limited number of domain-specific knowledge resources and using freely available tools for preprocessing data. Crucially, the system only uses the data sets provided by the organizers. The aim is to design an easily portable approach with a limited need of domain-specific knowledge resources. In the BioCreative V task, we ranked 5th out of 16 in DNER and 7th out of 18 in CID. In this article, we present our follow-up study, focusing in particular on CID, performing further experiments, extending our approach and improving the performance. PMID:27189609

  6. A knowledge-poor approach to chemical-disease relation extraction.

    PubMed

    Alam, Firoj; Corazza, Anna; Lavelli, Alberto; Zanoli, Roberto

    2016-01-01

    The article describes a knowledge-poor approach to the task of extracting Chemical-Disease Relations from PubMed abstracts. A first version of the approach was applied during the participation in BioCreative V track 3, both in Disease Named Entity Recognition and Normalization (DNER) and in Chemical-induced diseases (CID) relation extraction. For both tasks, we have adopted a general-purpose approach based on machine learning techniques integrated with a limited number of domain-specific knowledge resources and using freely available tools for preprocessing data. Crucially, the system only uses the data sets provided by the organizers. The aim is to design an easily portable approach with a limited need of domain-specific knowledge resources. In the BioCreative V task, we ranked 5th out of 16 in DNER and 7th out of 18 in CID. In this article, we present our follow-up study, focusing in particular on CID, performing further experiments, extending our approach and improving the performance. PMID:27189609

  7. Process automation

    SciTech Connect

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs.

  8. Automated Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Pahle, Joseph; Brown, Nelson

    2015-01-01

    This presentation is an overview of the Automated Cooperative Trajectories project. An introduction to the phenomena of wake vortices is given, along with a summary of past research into the possibility of extracting energy from the wake by flying close parallel trajectories. Challenges and barriers to adoption of civilian automatic wake surfing technology are identified. A hardware-in-the-loop simulation is described that will support future research. Finally, a roadmap for future research and technology transition is proposed.

  9. Extraction of Surface-Related Features in a Recurrent Model of V1-V2 Interactions

    PubMed Central

    Weidenbacher, Ulrich; Neumann, Heiko

    2009-01-01

    Background Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D leads partially to occlusions of surfaces depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces. Methodology/Principal Findings In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit so that they can interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches. Conclusions/Significance A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals

  10. Semi-automated fault system extraction and displacement analysis of an excavated oyster reef using high-resolution laser scanned data

    NASA Astrophysics Data System (ADS)

    Molnár, Gábor; Székely, Balázs; Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Exner, Ulrike; Pfeifer, Norbert

    2015-04-01

    In this contribution we present a semi-automated method for reconstructing the brittle deformation field of an excavated Miocene oyster reef in Stetten, Korneuburg Basin, Lower Austria. Oyster shells up to 80 cm in size were scattered in a shallow estuarine bay, forming a continuous and almost isochronous layer as a consequence of a catastrophic event in the Miocene. This shell bed was preserved by burial under several hundred meters of sandy to silty sediments. Later, the layers were tilted westward, uplifted, and almost exhumed by erosion. An excavation revealed a 27 by 17 meter area of the oyster-covered layer. During the tectonic processes the sediment volume suffered brittle deformation. NW-SE-striking faults, mostly with normal components of a few centimeters, affected the oyster-covered volume, dissecting many shells as well as the surrounding matrix. Faults and their displacements can typically be traced for several meters across the site, and because fossil oysters are broken and their parts displaced by the faulting, along some faults these displacements can be followed in 3D. In order to quantify these varying displacements and to map the undulating fault traces, high-resolution scanning of the excavated and cleaned surface of the oyster bed has been carried out using a terrestrial laser scanner. The resulting point clouds have been co-georeferenced at mm accuracy and a 1 mm resolution 3D point cloud of the surface has been created. As the faults are well represented in the point cloud, this enables us to measure the dislocations of the dissected shell parts along the fault lines. We used a semi-automatic method to quantify these dislocations. First we manually digitized the fault lines in 2D as an initial model. In the next step we estimated the vertical (i.e., perpendicular to the layer) component of the dislocation along these fault lines by comparing the elevations on the two sides of the faults with moving averaging windows. To estimate the strike
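The elevation-comparison step described above (moving averaging windows on either side of a digitized fault trace) can be sketched roughly as follows; the grid values, window size, and fault geometry are invented for illustration and are not the authors' data or code:

```python
import numpy as np

def vertical_offset_along_fault(dem, fault_cols, half_width=3):
    """Estimate the vertical offset across a fault trace, row by row,
    by comparing mean elevations in moving windows on either side of
    the digitized fault column (a simplified stand-in for the averaging
    windows described in the abstract)."""
    offsets = []
    for row, c in enumerate(fault_cols):
        left = dem[row, max(0, c - half_width):c]      # window on one side
        right = dem[row, c + 1:c + 1 + half_width]     # window on the other
        offsets.append(right.mean() - left.mean())
    return np.array(offsets)

# Synthetic 1 mm grid with a uniform 5 mm down-throw east of column 5
dem = np.zeros((4, 10))
dem[:, 6:] -= 5.0
print(vertical_offset_along_fault(dem, fault_cols=[5, 5, 5, 5]))  # [-5. -5. -5. -5.]
```

Averaging over a window on each side suppresses the mm-scale roughness of the shell surface, so the estimated offset reflects the fault throw rather than local relief.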

  11. miRTex: A Text Mining System for miRNA-Gene Relation Extraction

    PubMed Central

    Li, Gang; Ross, Karen E.; Arighi, Cecilia N.; Peng, Yifan; Wu, Cathy H.; Vijay-Shanker, K.

    2015-01-01

    MicroRNAs (miRNAs) regulate a wide range of cellular and developmental processes through gene expression suppression or mRNA degradation. Experimentally validated miRNA gene targets are often reported in the literature. In this paper, we describe miRTex, a text mining system that extracts miRNA-target relations, as well as miRNA-gene and gene-miRNA regulation relations. The system achieves good precision and recall when evaluated on a literature corpus of 150 abstracts with F-scores close to 0.90 on the three different types of relations. We conducted full-scale text mining using miRTex to process all the Medline abstracts and all the full-length articles in the PubMed Central Open Access Subset. The results for all the Medline abstracts are stored in a database for interactive query and file download via the website at http://proteininformationresource.org/mirtex. Using miRTex, we identified genes potentially regulated by miRNAs in Triple Negative Breast Cancer, as well as miRNA-gene relations that, in conjunction with kinase-substrate relations, regulate the response to abiotic stress in Arabidopsis thaliana. These two use cases demonstrate the usefulness of miRTex text mining in the analysis of miRNA-regulated biological processes. PMID:26407127
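The F-scores quoted above combine precision and recall in the standard way; as a reminder, the computation from raw true-positive/false-positive/false-negative counts is (illustrative counts only, not figures from the miRTex evaluation):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from raw true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one relation type:
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(p, r)  # 0.9 0.9
```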

  12. Evaluation of the chemical compatibility of plastic contact materials and pharmaceutical products; safety considerations related to extractables and leachables.

    PubMed

    Jenke, Dennis

    2007-10-01

    A review is provided on the general topic of the compatibility of plastic materials with pharmaceutical products, with specific emphasis on the safety aspects associated with extractables and leachables related to such plastic materials. PMID:17701994

  13. Quantification of 31 illicit and medicinal drugs and metabolites in whole blood by fully automated solid-phase extraction and ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Bjørk, Marie Kjærgaard; Simonsen, Kirsten Wiese; Andersen, David Wederkinck; Dalsgaard, Petur Weihe; Sigurðardóttir, Stella Rögn; Linnet, Kristian; Rasmussen, Brian Schou

    2013-03-01

    An efficient method for analyzing illegal and medicinal drugs in whole blood using fully automated sample preparation and a short ultra-high-performance liquid chromatography-tandem mass spectrometry (MS/MS) run time is presented. A selection of 31 drugs, including amphetamines, cocaine, opioids, and benzodiazepines, was used. In order to increase the efficiency of routine analysis, a robotic system based on automated liquid handling and capable of handling all unit operations for sample preparation was built on a Freedom Evo 200 platform with several add-ons from Tecan and third-party vendors. Solid-phase extraction was performed using Strata X-C plates. Extraction time for 96 samples was less than 3 h. Chromatography was performed using an ACQUITY UPLC system (Waters Corporation, Milford, USA). Analytes were separated on a 100 mm × 2.1 mm, 1.7 μm Acquity UPLC CSH C(18) column using a 6.5 min 0.1 % ammonia (25 %) in water/0.1 % ammonia (25 %) in methanol gradient and quantified by MS/MS (Waters Quattro Premier XE) in multiple-reaction monitoring mode. Full validation, including linearity, precision and trueness, matrix effect, ion suppression/enhancement of co-eluting analytes, recovery, and specificity, was performed. The method was employed successfully in the laboratory and used for routine analysis of forensic material. In combination with tetrahydrocannabinol analysis, the method covered 96 % of cases involving driving under the influence of drugs. The manual labor involved in preparing blood samples, solvents, etc., was reduced to half an hour per batch. The automated sample preparation setup also minimized human exposure to hazardous materials, provided highly improved ergonomics, and eliminated manual pipetting. PMID:23292043

  14. Automation pilot

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers in the conduct of their daily work, and to assess the human acceptance difficulties which may accompany the transition to a significantly changed work environment. The improved productivity and communications which result from application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated and these will be explored in detail.

  15. Automated Urinalysis

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Information from NASA Tech Briefs assisted DiaSys Corporation in the development of the R/S 2000 which automates urinalysis, eliminating most manual procedures. An automatic aspirator is inserted into a standard specimen tube, the "Sample" button is pressed, and within three seconds a consistent amount of urine sediment is transferred to a microscope. The instrument speeds up, standardizes, automates and makes urine analysis safer. Additional products based on the same technology are anticipated.

  16. Automated microdialysis-based system for in situ microsampling and investigation of lead bioavailability in terrestrial environments under physiologically based extraction conditions.

    PubMed

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2013-10-15

    In situ automatic microdialysis sampling under batch-flow conditions is herein proposed for the first time for expedient assessment of the kinetics of lead bioaccessibility/bioavailability in contaminated and agricultural soils, exploiting the harmonized physiologically based extraction test (UBM). Based on a concentric microdialysis probe immersed in synthetic gut fluids, the miniaturized flow system is harnessed for continuous monitoring of lead transfer across the permselective microdialysis membrane to mimic the diffusive transport of metal species through the epithelium of the stomach and of the small intestine. In addition, the introduction of the UBM gastrointestinal fluid surrogates at specified time points is fully mechanized. Distinct microdialysis probe configurations and membrane types were investigated in detail to ensure passive sampling under steady-state dialytic conditions for lead. Using a 3-cm-long polysulfone membrane with an average molecular weight cutoff of 30 kDa in a concentric probe and a perfusate flow rate of 2.0 μL min(-1), microdialysis relative recoveries in the gastric phase were close to 100%, thereby omitting the need for probe calibration. The automatic leaching method was validated in terms of bias in the analysis of four soils with different physicochemical properties and containing a wide range of lead content (16 ± 3 to 1216 ± 42 mg kg(-1)), using mass balance assessment as a quality control tool. No significant differences between the mass balance and the total lead concentration in the suite of analyzed soils were encountered (α = 0.05). Our finding that extraction of soil-borne lead for merely one hour in the GI phase suffices for assessment of the bioavailable fraction, as a result of the fast immobilization of lead species at near-neutral conditions, would assist in providing risk assessment data from the UBM test at short notice. PMID:24016003

  17. Extract from Eugenia punicifolia is an antioxidant and inhibits enzymes related to metabolic syndrome.

    PubMed

    Lopes Galeno, Denise Morais; Carvalho, Rosany Piccolotto; Boleti, Ana Paula de Araújo; Lima, Arleilson Sousa; Oliveira de Almeida, Patricia Danielle; Pacheco, Carolina Carvalho; Pereira de Souza, Tatiane; Lima, Emerson Silva

    2014-01-01

    The present study aimed to investigate in vitro biological activities of extract of Eugenia punicifolia leaves (EEP), emphasizing the inhibitory activity of enzymes related to metabolic syndrome and its antioxidant effects. The antioxidant activity was analyzed by in vitro free-radical scavenging assays (DPPH·, ABTS(·+), O2(·−), and NO·) and a cell-based assay. EEP was tested in inhibitory colorimetric assays using α-amylase, α-glucosidase, xanthine oxidase, and pancreatic lipase enzymes. EEP exhibited ABTS(·+), DPPH·, and O2(·−) scavenging activity (IC50 = 10.5 ± 1.2, 28.84 ± 0.54, and 38.12 ± 2.6 μg/mL, respectively). EEP did not show cytotoxic effects, and it showed antioxidant activity in cells in a concentration-dependent manner. EEP inhibited α-amylase, α-glucosidase, and xanthine oxidase activities in vitro (IC50 = 122.8 ± 6.3, 2.9 ± 0.1, and 23.5 ± 2.6, respectively); however, EEP did not inhibit lipase activity. The findings support that the extract of E. punicifolia leaves is a natural antioxidant and an inhibitor of enzymes such as α-amylase, α-glucosidase, and xanthine oxidase, which can result in a reduction in the carbohydrate absorption rate and a decrease in risk factors of cardiovascular disease, thereby providing a novel dietary opportunity for the prevention of metabolic syndrome. PMID:24078187

  18. Extracting a kinetic relation from the dynamics of a bistable chain

    NASA Astrophysics Data System (ADS)

    Zhao, Qingze; Purohit, Prashant K.

    2014-06-01

    We integrate Newton's second law for a chain of masses and bistable springs with a spinodal region with the goal of extracting a kinetic relation for propagating phase boundaries. Our numerical experiments correspond to the impact on a bar made of phase changing material. By reading off the spring extensions ahead and behind the phase boundaries in our numerical experiments, we compute a driving force and plot it as a function of the phase boundary velocity to get a kinetic relation. We then show that this kinetic relation results in solutions to Riemann problems in continuum bars that agree with the corresponding numerical experiments on the discrete mass-spring chain. We also integrate Langevin's equations of motion for the same chain of masses and springs to account for the presence of a heat bath at a fixed temperature. We find that the xt-plane looks similar to the purely mechanical numerical experiments at low temperatures but at high temperatures there is an increased incidence of random nucleation events. Using results from both impact and Riemann problems, we show that the kinetic relation is a function of the bath temperature.
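As a rough illustration of the numerical setup (a minimal sketch with invented parameters: the bistable law, the explicit integrator, and the loading below are not those of the paper, and the spinodal branch is omitted for brevity), Newton's second law for such a mass-spring chain can be integrated as:

```python
import numpy as np

def spring_force(e):
    """Piecewise-linear bistable law: two linear branches switching at e = 1
    (the paper's spinodal region between the branches is omitted here)."""
    return np.where(e < 1.0, e, e - 0.5)

def simulate(n=50, v0=-1.5, dt=1e-3, steps=4000):
    """Integrate Newton's second law for a chain of unit masses joined by
    bistable springs, with the left end pulled away at speed |v0| so that
    springs are driven into tension and can change phase."""
    u = np.zeros(n)                   # displacements from the reference lattice
    v = np.zeros(n)
    v[0] = v0                         # impact-type loading of the first mass
    for _ in range(steps):
        f = spring_force(np.diff(u))  # force carried by each spring
        a = np.zeros(n)
        a[:-1] += f                   # spring i pulls mass i toward mass i+1
        a[1:] -= f
        v += a * dt                   # unit masses, simple explicit update
        u += v * dt
    return u, np.diff(u)

u, e = simulate()
# Springs with extension > 1 have transformed to the second branch; reading the
# extensions just ahead of and behind the transformation front gives the driving
# force, and tracking the front over time gives the phase-boundary velocity.
print("transformed springs:", int(np.sum(e > 1.0)))
```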

  19. Pathogenesis-related protein expression in the apoplast of wheat leaves protected against leaf rust following application of plant extracts.

    PubMed

    Naz, Rabia; Bano, Asghari; Wilson, Neil L; Guest, David; Roberts, Thomas H

    2014-09-01

    Leaf rust (Puccinia triticina) is a major disease of wheat. We tested aqueous leaf extracts of Jacaranda mimosifolia (Bignoniaceae), Thevetia peruviana (Apocynaceae), and Calotropis procera (Apocynaceae) for their ability to protect wheat from leaf rust. Extracts from all three species inhibited P. triticina urediniospore germination in vitro. Plants sprayed with extracts before inoculation developed significantly lower levels of disease incidence (number of plants infected) than unsprayed, inoculated controls. Sprays combining 0.6% leaf extracts and 2 mM salicylic acid with the fungicide Amistar Xtra at 0.05% (azoxystrobin at 10 μg/liter + cyproconazole at 4 μg/liter) reduced disease incidence significantly more effectively than sprays of fungicide at 0.1% alone. Extracts of J. mimosifolia were most active, either alone (1.2%) or in lower doses (0.6%) in combination with 0.05% Amistar Xtra. Leaf extracts combined with fungicide strongly stimulated defense-related gene expression and the subsequent accumulation of pathogenesis-related (PR) proteins in the apoplast of inoculated wheat leaves. The level of protection afforded was significantly correlated with the ability of extracts to increase PR protein expression. We conclude that pretreatment of wheat leaves with spray formulations containing previously untested plant leaf extracts enhances protection against leaf rust provided by fungicide sprays, offering an alternative disease management strategy. PMID:24624956

  20. Automated imagery orthorectification pilot

    NASA Astrophysics Data System (ADS)

    Slonecker, E. Terrence; Johnson, Brad; McMahon, Joe

    2009-10-01

    Automated orthorectification of raw image products is now possible based on the comprehensive metadata collected by Global Positioning Systems and Inertial Measurement Unit technology aboard aircraft and satellite digital imaging systems, and based on emerging pattern-matching and automated image-to-image and control point selection capabilities in many advanced image processing systems. Automated orthorectification of standard aerial photography is also possible if a camera calibration report and sufficient metadata are available. Orthorectification of historical imagery, for which only limited metadata was available, was also attempted and found to require some user input, creating a semi-automated process that still has significant potential to reduce processing time and expense for the conversion of archival historical imagery into geospatially enabled, digital formats, facilitating preservation and utilization of a vast archive of historical imagery. Over 90 percent of the frames of historical aerial photos used in this experiment were successfully orthorectified to the accuracy of the USGS 100K base map series utilized for the geospatial reference of the archive. The accuracy standard for the 100K series maps is approximately 167 feet (51 meters). The main problems associated with orthorectification failure were cloud cover, shadow and historical landscape change that confused automated image-to-image matching processes. Further research is recommended to optimize automated orthorectification methods and enable broad operational use, especially as related to historical imagery archives.

  1. Quantification of five compounds with heterogeneous physicochemical properties (morphine, 6-monoacetylmorphine, cyamemazine, meprobamate and caffeine) in 11 fluids and tissues, using automated solid-phase extraction and gas chromatography-tandem mass spectrometry.

    PubMed

    Bévalot, Fabien; Bottinelli, Charline; Cartiser, Nathalie; Fanton, Laurent; Guitton, Jérôme

    2014-06-01

    An automated solid-phase extraction (SPE) protocol followed by gas chromatography coupled with tandem mass spectrometry was developed for quantification of caffeine, cyamemazine, meprobamate, morphine and 6-monoacetylmorphine (6-MAM) in 11 biological matrices [blood, urine, bile, vitreous humor, liver, kidney, lung and skeletal muscle, brain, adipose tissue and bone marrow (BM)]. The assay was validated for linearity, within- and between-day precision and accuracy, limits of quantification, selectivity, extraction recovery (ER), sample dilution and autosampler stability on BM. For the other matrices, partial validation was performed (limits of quantification, linearity, within-day precision, accuracy, selectivity and ER). The lower limits of quantification were 12.5 ng/mL(ng/g) for 6-MAM, morphine and cyamemazine, 100 ng/mL(ng/g) for meprobamate and 50 ng/mL(ng/g) for caffeine. Analysis of real-case samples demonstrated the performance of the assay in forensic toxicology to investigate challenging cases in which, for example, blood is not available or in which analysis in alternative matrices could be relevant. The SPE protocol was also assessed as an extraction procedure that could target other relevant analytes of interest. The extraction procedure was applied to 12 molecules of forensic interest with various physicochemical properties (alimemazine, alprazolam, amitriptyline, citalopram, cocaine, diazepam, levomepromazine, nordazepam, tramadol, venlafaxine, pentobarbital and phenobarbital). All drugs were able to be detected at therapeutic concentrations in blood and in the alternate matrices. PMID:24790060

  2. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials.

    PubMed

    Biurrun Manresa, José A; Arguissain, Federico G; Medina Redondo, David E; Mørch, Carsten D; Andersen, Ole K

    2015-01-01

    The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from the existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen's κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significant differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and the expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
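For readers unfamiliar with the two agreement measures used above, here is a minimal sketch (with synthetic labels and values, not study data) of Cohen's κ for the categorical outcome and Bland-Altman limits of agreement for the quantitative one:

```python
def cohens_kappa(a, b):
    """Chance-corrected categorical agreement between two binary raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

def bland_altman(x, y):
    """Quantitative agreement: mean bias and 95% limits of agreement."""
    d = [xi - yi for xi, yi in zip(x, y)]
    bias = sum(d) / len(d)
    sd = (sum((di - bias) ** 2 for di in d) / (len(d) - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

human = [1, 1, 0, 1, 0, 0, 1, 1]   # 1 = ERP peak judged present in a trial
algo  = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(human, algo), 3))  # 0.467
```

The same κ computation extends to the percentage positive/negative agreement figures simply by restricting the counts to trials where either rater said "present" or "absent", respectively.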

  3. Numerical studies of constraints and gravitational wave extraction in general relativity

    NASA Astrophysics Data System (ADS)

    Fiske, David Robert

    Within classical physics, general relativity is the theory of gravity. Its equations are non-linear partial differential equations for which relatively few closed-form solutions are known. Because of the growing observational need for solutions representing gravitational waves from astrophysically plausible sources, a subfield of general relativity, numerical relativity, has emerged with the goal of generating numerical solutions to the Einstein equations. This dissertation focuses on two fundamental problems in modern numerical relativity: (1) creating a theoretical treatment of the constraints in the presence of constraint-violating numerical errors, and (2) designing and implementing an algorithm to compute the spherical harmonic decomposition of radiation quantities for comparison with observation. On the issue of the constraints, I present a novel and generic procedure for incorporating the constraints into the equations of motion of the theory in a way designed to make the constraint hypersurface an attractor of the evolution. In principle, the prescription generates non-linear corrections for the Einstein equations. The dissertation presents numerical evidence that the correction terms do work in the case of two formulations of the Maxwell equations and two formulations of the linearized Einstein equations. On the issue of radiation extraction, I provide the first in-depth analysis of a novel algorithm, due originally to Misner, for computing spherical harmonic components on a cubic grid. I compute explicitly how the truncation error in the algorithm depends on its various parameters, and I also provide a detailed analysis showing how to implement the method on grids in which explicit symmetries are enforced via boundary conditions. Finally, I verify these error estimates and symmetry arguments with a numerical study using a solution of the linearized Einstein equations known as a Teukolsky wave. The algorithm performs well and the estimates prove true both
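Misner's algorithm itself interpolates data from the cubic grid onto extraction spheres; as a much-simplified illustration of the core projection step only (explicit low-order real harmonics and a naive midpoint quadrature, all chosen for this sketch and not taken from the dissertation), one can check orthonormality numerically:

```python
import numpy as np

def Y(l, m, theta, phi):
    """Real spherical harmonics, implemented explicitly for l <= 1."""
    if (l, m) == (0, 0):
        return np.full_like(theta, 0.5 / np.sqrt(np.pi))
    if (l, m) == (1, 0):
        return np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)
    if (l, m) == (1, 1):
        return np.sqrt(3.0 / (4.0 * np.pi)) * np.sin(theta) * np.cos(phi)
    raise ValueError("only l <= 1 implemented in this sketch")

def project(f, l, m, n_theta=200, n_phi=400):
    """a_lm = integral of f * Y_lm over the sphere, via a midpoint rule."""
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    ph = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(th, ph, indexing="ij")
    dOmega = np.sin(T) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    return float(np.sum(f(T, P) * Y(l, m, T, P) * dOmega))

# Orthonormality check: projecting Y_10 onto itself recovers ~1
print(round(project(lambda t, p: Y(1, 0, t, p), 1, 0), 4))  # ≈ 1.0
```

The interesting part of the cubic-grid method, analyzed in the dissertation, is precisely how the truncation error of this projection behaves when the sphere values must be obtained from grid data rather than evaluated analytically.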

  4. Automation in organizations: Eternal conflict

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  5. Characterization of cysteine related variants in an IgG2 antibody by LC-MS with an automated data analysis approach.

    PubMed

    Zhang, Yuling; Bailey, Robert; Nightlinger, Nancy; Gillespie, Alison; Balland, Alain; Rogers, Richard

    2015-08-01

    In this communication, a high-throughput method for automated data analysis of cysteine-related product quality attributes (PQAs) in IgG2 antibodies is reported. This method leverages recent advances in the relative quantification of PQAs to facilitate the characterization of disulfide variants and free sulfhydryls (SHs) in IgG2 antibodies. The method uses samples labeled with a mass tag (N-ethyl maleimide [NEM]) followed by enzymatic digestion under non-reducing conditions to maintain the cysteine connectivity. The digested IgG2 samples are separated and detected by mass spectrometry (MS) and the resulting peptide map is analyzed in an automated fashion using Pinpoint software (Thermo Scientific). Previous knowledge of IgG2 disulfide structures can be fed into the Pinpoint software to create workbooks for various disulfide linkages and hinge disulfide variants. In addition, the NEM mass tag can be added to the workbooks for targeted analysis of labeled cysteine-containing peptides. The established Pinpoint workbooks are a high-throughput approach to quantify relative abundances of unpaired cysteines and disulfide linkages, including complicated hinge disulfide variants. This approach is especially efficient for comparing large sets of similar samples such as those created in comparability and stability studies or chromatographic fractions. Here, the high throughput method is applied to quantify the relative abundance of hinge disulfide variants and unpaired cysteines in the IgG2 fractions from non-reduced reversed-phase high-performance liquid chromatography (nrRP-HPLC). The LC-MS data analyzed by the Pinpoint workbook suggests that the nrRP-HPLC separated peaks contain hinge disulfide isoforms and free cysteine pairs for each major disulfide isoform structure. PMID:26079266

  6. Soybean extract showed modulation of retinoic acid-related gene expression of skin and photo-protective effects in keratinocytes.

    PubMed

    Park, N-H; Park, J-S; Kang, Y-G; Bae, J-H; Lee, H-K; Yeom, M-H; Cho, J-C; Na, Y J

    2013-04-01

    Soy extracts are well known as medicinal and nutritional ingredients, and exhibit benefits for human skin including depigmenting and anti-ageing effects. Despite the wrinkle-decreasing effects of retinoids on skin as an anti-ageing ingredient, retinoid application can cause photo-sensitive responses such as skin irritation, so their daytime usage is not recommended. The aim of this study was to investigate the activities of soybean extract as an anti-ageing ingredient and to compare them with those of retinoids. Soybean extract decreased the relative ratio of MMP-1/TIMP-1 mRNA to the same degree as retinoic acid in normal human fibroblasts. It also affected mRNA levels of HAS2 and CRABP2 in normal human keratinocytes. Furthermore, we investigated its effect on mRNA expression of histidase, an enzyme that converts histidine into urocanic acid, the main UV light absorption factor of the stratum corneum. Unlike the complete inhibition of histidase mRNA expression exhibited by retinoic acid, the effect of soybean extract on histidase gene expression was weaker in normal human keratinocytes. Soybean extract pretreatment also inhibited UVB-induced cyclobutane pyrimidine dimer formation dose-dependently in normal human keratinocytes. In this study, we found that soybean extract modulated retinoic acid-related genes and showed photo-protective effects. Our findings suggest that soybean extract could be an anti-ageing ingredient that can be safely used in sunlight. PMID:23075113

  7. Habitat automation

    NASA Technical Reports Server (NTRS)

    Swab, Rodney E.

    1992-01-01

    A habitat, on either the surface of the Moon or Mars, will be designed and built with the proven technologies of that day. These technologies will be mature and readily available to the habitat designer. We believe an acceleration of the normal pace of automation would allow a habitat to be safer and more easily maintained than would be the case otherwise. This document examines the operation of a habitat and describes elements of that operation which may benefit from an increased use of automation. Research topics within the automation realm are then defined and discussed with respect to the role they can have in the design of the habitat. Problems associated with the integration of advanced technologies into real-world projects at NASA are also addressed.

  8. How to extract clinically useful information from large amount of dialysis related stored data.

    PubMed

    Vito, Domenico; Casagrande, Giustina; Bianchi, Camilla; Costantino, Maria L

    2015-08-01

    The technological evolution of storage infrastructure in the healthcare field has led to ever larger quantities of data related to patients and their pathological evolution being stored in public or private repositories. Big data techniques are also spreading in medical research. With these techniques it is possible to extract information from complex heterogeneous sources and to carry out longitudinal studies correlating patient status with biometric parameters. In our work we developed a common data infrastructure involving four clinical dialysis centers in Lombardy and Switzerland. The common platform was built to store a large amount of clinical data related to 716 dialysis sessions of 70 patients. The platform is made up of a MySQL(®) database (Dialysis Database) combined with a MATLAB-based mining library (Dialysis MATlib). A statistical analysis of the gathered data has been performed. These analyses led to the development of two clinical indexes, representing an example of the transformation of big data into clinical information. PMID:26737858

  9. RELATIVE POTENCY OF FUNGAL EXTRACTS IN INDUCING ALLERGIC ASTHMA-LIKE RESPONSES IN BALB/C MICE

    EPA Science Inventory

    Indoor mold has been associated with the development of allergic asthma. However, the relative potency of molds in the induction of allergic asthma is not clear. In this study, we tested the relative potency of fungal extracts (Metarhizium anisopliae [MACA], Stachybotrys ...

  10. Ginseng Purified Dry Extract, BST204, Improved Cancer Chemotherapy-Related Fatigue and Toxicity in Mice

    PubMed Central

    Park, Hyun-Jung; Shim, Hyun Soo; Kim, Jeom Yong; Kim, Joo Young; Park, Sun Kyu

    2015-01-01

    Cancer-related fatigue (CRF) is one of the most common side effects of cancer and its treatments. A large proportion of cancer patients experience cancer-related physical and central fatigue, so new strategies are needed for the treatment and improved survival of these patients. BST204 was prepared by incubating crude ginseng extract with ginsenoside-β-glucosidase. The purpose of the present study was to examine the effects of BST204, a mixture of ginsenosides, on 5-fluorouracil (5-FU)-induced CRF, glycogen synthesis, and biochemical parameters in mice. The mice were randomly divided into the following groups: the naïve normal (normal), the HT-29 cell inoculated (xenograft), the xenograft and 5-FU treated (control), the xenograft + 5-FU + BST204-treated (100 and 200 mg/kg) (BST204), and the xenograft + 5-FU + modafinil (13 mg/kg) treated group (modafinil). Running wheel activity and the forced swimming test were used for the evaluation of CRF. Muscle glycogen, serum inflammatory cytokines, aspartate aminotransferase (AST), alanine aminotransferase (ALT), creatinine (CRE), white blood cells (WBC), neutrophils (NEUT), red blood cells (RBC), and hemoglobin (HGB) were measured. Treatment with BST204 significantly increased running wheel activity and forced swimming time compared to the control group. Consistent with the behavioral data, BST204 markedly increased muscle glycogen and the concentrations of WBC, NEUT, RBC, and HGB. Tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), AST, ALT, and CRE levels in the serum were also significantly reduced in the BST204-treated group compared to the control group. These results suggest that BST204 may improve chemotherapy-related fatigue and adverse toxic side effects. PMID:25945105

  11. BioCreative V CDR task corpus: a resource for chemical disease relation extraction.

    PubMed

    Li, Jiao; Sun, Yueping; Johnson, Robin J; Sciaky, Daniela; Wei, Chih-Hsuan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J; Wiegers, Thomas C; Lu, Zhiyong

    2016-01-01

    Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus, called BC5CDR, during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators, followed by a consensus annotation: the average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for diseases and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/. PMID:27161011
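    The inter-annotator agreement reported above uses the Jaccard similarity coefficient. A minimal sketch of that measure over two annotators' annotation sets (the spans themselves are hypothetical):

```python
# Jaccard similarity coefficient as an inter-annotator agreement score:
# |A ∩ B| / |A ∪ B| over two annotators' annotation sets. The spans
# below are hypothetical (document_id, start, end) tuples.
def jaccard_iaa(ann_a, ann_b):
    a, b = set(ann_a), set(ann_b)
    return len(a & b) / len(a | b) if a | b else 1.0

a1 = {("d1", 0, 9), ("d1", 15, 22), ("d2", 4, 12)}
a2 = {("d1", 0, 9), ("d2", 4, 12), ("d2", 30, 38)}
iaa = jaccard_iaa(a1, a2)  # 2 shared spans of 4 distinct -> 0.5
```
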

  13. Extraction of solubles from plant biomass for use as microbial growth stimulant and methods related thereto

    SciTech Connect

    Lau, Ming Woei

    2015-12-08

    A method for producing a microbial growth stimulant (MGS) from a plant biomass is described. In one embodiment, an ammonium hydroxide solution is used to extract a solution of proteins and ammonia from the biomass. Some of the proteins and ammonia are separated from the extracted solution to provide the MGS solution. The removed ammonia can be recycled and the proteins are useful as animal feeds. In one embodiment, the method comprises extracting solubles from pretreated lignocellulosic biomass with a cellulase enzyme-producing growth medium (such as T. reesei) in the presence of water and an aqueous extract.

  14. Automated protein NMR resonance assignments.

    PubMed

    Wan, Xiang; Xu, Dong; Slupsky, Carolyn M; Lin, Guohui

    2003-01-01

    NMR resonance peak assignment is one of the key steps in solving an NMR protein structure. The assignment process links resonance peaks to individual residues of the target protein sequence, providing the prerequisite for establishing intra- and inter-residue spatial relationships between atoms. The assignment process is tedious and time-consuming and can take many weeks. Although a number of computer programs exist to assist the assignment process, many NMR labs still do the assignments manually to ensure quality. This paper presents (1) a new scoring system for mapping spin systems to residues, (2) an automated procedure for extracting adjacency information from NMR spectra, and (3) a very fast assignment algorithm, based on our previously proposed greedy filtering method and a maximum matching algorithm, to automate the assignment process. Computational tests on 70 instances of (pseudo) experimental NMR data for 14 proteins demonstrate that the new scoring scheme has much better discerning power when aided by adjacency information between spin systems simulated across various NMR spectra. Typically, with automated extraction of adjacency information, our method achieves nearly complete assignments for most of the proteins. These experiments suggest that the fast automated assignment algorithm, together with the new scoring scheme and automated adjacency extraction, may be ready for practical use. PMID:16452794
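    The record combines a greedy filtering step with a maximum matching algorithm. As a simplified stand-in for the greedy part only, a score-based mapping of spin systems to residues can be sketched as follows (scores and identifiers are hypothetical, and the paper's actual maximum-matching refinement is omitted):

```python
# Simplified greedy stand-in for the spin-system-to-residue mapping:
# assign highest-scoring pairs first, never reusing a spin system or a
# residue. Scores and identifiers are hypothetical.
def greedy_assign(scores):
    used_spin, used_res, assignment = set(), set(), {}
    for (spin, res), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if spin not in used_spin and res not in used_res:
            assignment[spin] = res
            used_spin.add(spin)
            used_res.add(res)
    return assignment

scores = {("s1", "ALA3"): 0.9, ("s1", "GLY4"): 0.4,
          ("s2", "ALA3"): 0.7, ("s2", "GLY4"): 0.8}
mapping = greedy_assign(scores)  # {'s1': 'ALA3', 's2': 'GLY4'}
```
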

  15. Pressure-driven mesofluidic platform integrating automated on-chip renewable micro-solid-phase extraction for ultrasensitive determination of waterborne inorganic mercury.

    PubMed

    Portugal, Lindomar A; Laglera, Luis M; Anthemidis, Aristidis N; Ferreira, Sérgio L C; Miró, Manuel

    2013-06-15

    A dedicated pressure-driven mesofluidic platform incorporating on-chip sample clean-up and analyte preconcentration is herein reported for the expedient determination of trace-level concentrations of waterborne inorganic mercury. Capitalizing upon the Lab-on-a-Valve (LOV) concept, the mesofluidic device integrates on-chip micro-solid phase extraction (μSPE) in automatic disposable mode, followed by chemical vapor generation and gas-liquid separation prior to in-line atomic fluorescence spectrometric detection. In contrast to prevailing chelating sorbents for Hg(II), bare poly(divinylbenzene-N-vinylpyrrolidone) copolymer sorptive beads were resorted to for the efficient uptake of Hg(II) in a hydrochloric acid milieu (pH=2.3), without the need for metal derivatization or for pH adjustment of previously acidified water samples back to near-neutral conditions. Experimental variables influencing the sorptive uptake and retrieval of the target species and the evolvement of elemental mercury within the miniaturized integrated reaction chamber/gas-liquid separator were investigated in detail. Using merely <10 mg of sorbent, the limits of detection and quantification at the 3s(blank) and 10s(blank) levels, respectively, for a sample volume of 3 mL were 12 and 42 ng L(-1) Hg(II), with a dynamic range extending up to 5.0 μg L(-1). The proposed mesofluidic platform copes with the requirements of regulatory bodies (US-EPA, WHO, EU Commission) for drinking water quality and surface waters, which endorse maximum allowed concentrations of mercury spanning from 0.07 to 6.0 μg L(-1). As demonstrated with the analysis of aqueous samples of varying matrix complexity, the LOV approach afforded reliable results, with relative recoveries of 86-107% and intermediate precision down to 9% in the renewable μSPE format. PMID:23618176
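    The 3s(blank)/10s(blank) criteria above are the conventional way such detection and quantification limits are derived from replicate blank measurements. A sketch with hypothetical blank readings and calibration slope:

```python
import statistics

# LOD/LOQ from the 3·s(blank) and 10·s(blank) criteria: s is the standard
# deviation of replicate blank signals, converted to concentration via the
# calibration slope. Blank readings and slope below are hypothetical.
def lod_loq(blank_signals, slope):
    s = statistics.stdev(blank_signals)
    return 3 * s / slope, 10 * s / slope

blanks = [0.012, 0.015, 0.011, 0.014, 0.013]   # replicate blank signals
lod, loq = lod_loq(blanks, slope=0.40)          # slope: signal per (ng/L)
```
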

  16. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  17. Automated dispenser

    SciTech Connect

    Hollen, R.M.; Stalnaker, N.D.

    1989-04-06

    An automated dispenser having a conventional pipette attached to an actuating cylinder through a flexible cable for delivering precise quantities of a liquid through commands from remotely located computer software. The travel of the flexible cable is controlled by adjustable stops and a locking shaft. The pipette can be positioned manually or by the hands of a robot. 1 fig.

  18. Locoregional Control of Non-Small Cell Lung Cancer in Relation to Automated Early Assessment of Tumor Regression on Cone Beam Computed Tomography

    SciTech Connect

    Brink, Carsten; Bernchou, Uffe; Bertelsen, Anders; Hansen, Olfred; Schytte, Tine; Bentzen, Soren M.

    2014-07-15

    Purpose: Large interindividual variations in volume regression of non-small cell lung cancer (NSCLC) are observable on standard cone beam computed tomography (CBCT) during fractionated radiation therapy. Here, a method for automated assessment of tumor volume regression is presented and its potential use in response adapted personalized radiation therapy is evaluated empirically. Methods and Materials: Automated deformable registration with calculation of the Jacobian determinant was applied to serial CBCT scans in a series of 99 patients with NSCLC. Tumor volume at the end of treatment was estimated on the basis of the first one third and two thirds of the scans. The concordance between estimated and actual relative volume at the end of radiation therapy was quantified by Pearson's correlation coefficient. On the basis of the estimated relative volume, the patients were stratified into 2 groups having volume regressions below or above the population median value. Kaplan-Meier plots of locoregional disease-free rate and overall survival in the 2 groups were used to evaluate the predictive value of tumor regression during treatment. A Cox proportional hazards model was used to adjust for other clinical characteristics. Results: Automatic measurement of the tumor regression from standard CBCT images was feasible. Pearson's correlation coefficient between manual and automatic measurement was 0.86 in a sample of 9 patients. Most patients experienced tumor volume regression, and this could be quantified early into the treatment course. Interestingly, patients with pronounced volume regression had worse locoregional tumor control and overall survival. This effect was significant in patients with non-adenocarcinoma histology. Conclusions: Evaluation of routinely acquired CBCT images during radiation therapy provides biological information on the specific tumor. This could potentially form the basis for personalized response adaptive therapy.
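    Pearson's correlation coefficient, used above to compare manual and automatic volume measurements, can be computed directly (the paired relative-volume readings below are hypothetical):

```python
import math

# Pearson's correlation coefficient between paired measurements; the two
# series of relative tumor volumes below are hypothetical.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

manual    = [0.55, 0.70, 0.62, 0.90, 0.48]  # fraction of initial volume
automatic = [0.58, 0.68, 0.65, 0.85, 0.50]
r = pearson(manual, automatic)  # close agreement gives r near 1
```
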

  19. Optimization of DNA extraction and PCR protocols for phylogenetic analysis in Schinopsis spp. and related Anacardiaceae.

    PubMed

    Mogni, Virginia Y; Kahan, Mariano A; de Queiroz, Luciano Paganucci; Vesprini, José L; Ortiz, Juan Pablo A; Prado, Darién E

    2016-01-01

    The Anacardiaceae is an important, worldwide-distributed family of ecological and socio-economic relevance. Notwithstanding that, molecular studies in this family are scarce and problematic because of the particularly high concentrations of secondary metabolites (i.e. tannins and oleoresins) present in almost all tissues of many members of the group, which complicate the purification and amplification of DNA. The objective of this work was to improve an available DNA isolation method for Schinopsis spp. and other related Anacardiaceae, as well as the PCR protocols for amplification of the chloroplast trnL-F, rps16 and ndhF and the nuclear ITS-ETS fragments. The modifications proposed allowed the extraction of 70-120 µg of non-degraded genomic DNA per gram of dry tissue, which proved useful for PCR amplification. PCR reactions produced the expected fragments, which could be directly sequenced. Sequence analyses of the amplicons showed similarity with the corresponding Schinopsis accessions available in GenBank. The methodology presented here can be routinely applied in molecular studies of the group, aimed at clarifying not only aspects of its molecular biology but also the taxonomy and phylogeny of this fascinating group of vascular plants. PMID:27217992

  20. Cytoprotective Effects of Grape Seed Extract on Human Gingival Fibroblasts in Relation to Its Antioxidant Potential

    PubMed Central

    Katsuda, Yusuke; Niwano, Yoshimi; Nakashima, Takuji; Mokudai, Takayuki; Nakamura, Keisuke; Oizumi, Satomi; Kanno, Taro; Kanetaka, Hiroyasu; Egusa, Hiroshi

    2015-01-01

    Cytoprotective effects of short-term treatment with grape seed extract (GSE) upon human gingival fibroblasts (hGFs) were evaluated in relation to its antioxidant properties and compared with those of a water-soluble analog of vitamin E: trolox (Tx). GSE and Tx showed comparable antioxidant potential in vitro against di(phenyl)-(2,4,6-trinitrophenyl)iminoazanium (DPPH; a stable radical), hydroxyl radical (•OH), singlet oxygen (1O2), and hydrogen peroxide (H2O2). Pretreatment or concomitant treatment with GSE for 1 min protected hGFs from oxidative stressors, including H2O2, acid-electrolyzed water (AEW), and 1O2, and attenuated the intracellular formation of reactive oxygen species induced by H2O2 and AEW. Tx also reduced the H2O2- and AEW-induced intracellular formation of reactive oxygen species, but showed no cytoprotective effects on hGFs exposed to H2O2, AEW, or 1O2. These results suggest that the cytoprotective effects of GSE are likely exerted independently of its antioxidant potential. PMID:26258747

  1. Flight-deck automation: Promises and problems

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.; Curry, R. E.

    1980-01-01

    The state of the art in human factors in flight-deck automation is presented. A number of critical problem areas are identified and broad design guidelines are offered. Automation-related aircraft accidents and incidents are discussed as examples of human factors problems in automated flight.

  2. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.
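    The SAM/SLM/TSC architecture described above can be caricatured in a few lines: each Standard Laboratory Module automates one subprotocol, and the Task Sequence Controller runs them in order as one Standard Analysis Method (module names and steps below are invented for illustration):

```python
# Toy sketch of the SAM/SLM pattern: each Standard Laboratory Module (SLM)
# automates one subprotocol; the Task Sequence Controller (TSC) schedules
# them as one Standard Analysis Method. Module names/steps are invented.
class SLM:
    def __init__(self, name, step):
        self.name, self.step = name, step

    def run(self, sample):
        return self.step(sample)

class TaskSequenceController:
    def __init__(self, modules):
        self.modules = modules

    def run_sam(self, sample):
        log = []
        for slm in self.modules:
            sample = slm.run(sample)
            log.append(f"{slm.name}: {sample}")
        return sample, log

sam = TaskSequenceController([
    SLM("extraction", lambda s: s + "->extracted"),
    SLM("cleanup",    lambda s: s + "->cleaned"),
    SLM("analysis",   lambda s: s + "->analyzed"),
])
result, log = sam.run_sam("soil-01")
```
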

  3. Biological Activity of Blackcurrant Extracts (Ribes nigrum L.) in Relation to Erythrocyte Membranes

    PubMed Central

    Cyboran, Sylwia; Żyłka, Romuald; Oszmiański, Jan; Kleszczyńska, Halina

    2014-01-01

    Compounds contained in the fruits and leaves of blackcurrant (Ribes nigrum L.) are known as agents acting preventively and therapeutically on the organism. HPLC analysis showed that they are rich in polyphenols, anthocyanins in fruits and flavonoids in leaves, which have antioxidant activity and are beneficial for health. The aim of the research was to determine the effect of blackcurrant fruit and leaf extracts on the physical properties of the erythrocyte membranes and to assess their antioxidant properties. The effects of the extracts on osmotic resistance and the shape of erythrocytes, as well as the hemolytic and antioxidant activity of the extracts, were examined with spectrophotometric methods. The FTIR investigation showed that the extracts modify the erythrocyte membrane and protect it against free radicals induced by UV radiation. The results show that the extracts do not induce hemolysis and even protect erythrocytes against the harmful action of UVC radiation, while slightly strengthening the membrane and inducing echinocytes. The compounds contained in the extracts do not penetrate into the hydrophobic region, but bind to the membrane surface, inducing small changes in the packing arrangement of the polar head groups of membrane lipids. The extracts have a high antioxidant activity. Their presence on the surface of the erythrocyte membrane entails protection against free radicals. PMID:24527456

  4. Dynamic ultrasound-assisted extraction of oleuropein and related biophenols from olive leaves.

    PubMed

    Japón-Luján, R; Luque-Rodríguez, J M; Luque de Castro, M D

    2006-03-01

    A continuous approach for the ultrasound-assisted extraction of olive biophenols (OBPs) from olive leaves is proposed. Multivariate methodology was used to carry out a detailed optimisation of the extraction. Under the optimal working conditions, complete extraction of the target analytes (namely, oleuropein, verbacoside, apigenin-7-glucoside and luteolin-7-glucoside, with LODs of 11.04, 2.68, 1.49 and 3.91 mg/kg, respectively) was achieved in 25 min. The extract was injected into a chromatograph-photodiode array detector assembly (HPLC-DAD) for individual separation and quantification. No clean-up or preconcentration steps were required. Gas chromatography-mass spectrometry (without derivatization of the analytes) was used to identify OBPs at concentrations below the LODs obtained by HPLC-DAD. The efficacy of ethanol-water mixtures in extracting OBPs from olive leaves has been demonstrated and compared with that of a conventional method which requires 24 h for complete extraction; these mixtures can thus substitute for the toxic extractants used to date. PMID:16442552

  5. Physiological Changes in Rhizobia after Growth in Peat Extract May Be Related to Improved Desiccation Tolerance

    PubMed Central

    Wilkes, Meredith A.; Deaker, Rosalind

    2013-01-01

    Improved survival of peat-cultured rhizobia compared to survival of liquid-cultured cells has been attributed to cellular adaptations during solid-state fermentation in moist peat. We have observed improved desiccation tolerance of Rhizobium leguminosarum bv. trifolii TA1 and Bradyrhizobium japonicum CB1809 after aerobic growth in water extracts of peat. Survival of TA1 grown in crude peat extract was 18-fold greater than that of cells grown in a defined liquid medium but was diminished when cells were grown in different-sized colloidal fractions of peat extract. Survival of CB1809 was generally better when grown in crude peat extract than in the control but was not statistically significant (P > 0.05) and was strongly dependent on peat extract concentration. Accumulation of intracellular trehalose by both TA1 and CB1809 was higher after growth in peat extract than in the defined medium control. Cells grown in water extracts of peat exhibit morphological changes similar to those observed after growth in moist peat. Electron microscopy revealed thickened plasma membranes, with an electron-dense material occupying the periplasmic space in both TA1 and CB1809. Growth in peat extract also resulted in changes to polypeptide expression in both strains, and peptide analysis by liquid chromatography-mass spectrometry indicated increased expression of stress response proteins. Our results suggest that increased capacity for desiccation tolerance in rhizobia is multifactorial, involving the accumulation of trehalose together with increased expression of proteins involved in protection of the cell envelope, repair of DNA damage, oxidative stress responses, and maintenance of stability and integrity of proteins. PMID:23603686

  6. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
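    A single-level Haar transform is the simplest member of the DWT family used in the report above. This sketch (with a made-up waveform) shows how the transform concentrates a signal's energy into a few coefficients, which is why keeping only high-power coefficients retains the behaviorally important features:

```python
import math

# Single-level Haar DWT: pairwise sums give a coarse "approximation",
# pairwise differences give the "detail" coefficients.
def haar_dwt(signal):
    """Return (approximation, detail) for an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

erp = [2.0, 2.0, 4.0, 0.0, 1.0, 3.0, 5.0, 5.0]   # made-up ERP samples
approx, detail = haar_dwt(erp)
# The transform is energy-preserving, so discarding near-zero detail
# coefficients (keeping only "high-power" ones) loses little information.
energy_in = sum(x * x for x in erp)
energy_out = sum(c * c for c in approx + detail)
```
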

  7. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  8. Office Automation Boosts University's Productivity.

    ERIC Educational Resources Information Center

    School Business Affairs, 1986

    1986-01-01

    The University of Pittsburgh has a 2-year agreement designating the Xerox Corporation as the primary supplier of word processing and related office automation equipment in order to increase productivity and more efficient use of campus resources. (MLF)

  9. Quantitation of efletirizine in human plasma and urine using automated solid-phase extraction and column-switching high-performance liquid chromatography.

    PubMed

    Coe, R A; DeCesare, L S; Lee, J W

    1999-07-01

    A heart-cut column-switching, ion-pair, reversed-phase HPLC system was used for the quantitation of efletirizine (EFZ) in biological fluids. The analyte and an internal standard (I.S.) were extracted from human EDTA plasma by C18 solid-phase extraction (SPE) using a RapidTrace workstation. The eluent from the SPE was evaporated, reconstituted and injected onto the HPLC column. Urine samples were diluted and injected directly without the need for extraction. The compounds of interest were separated from most of the extraneous matrix materials by the first C18 column, and switched onto a second C18 column for further separation using a mobile phase of stronger eluting capability. The linearity range was 10-2000 ng ml(-1) for plasma and 0.05-10 microg ml(-1) for urine. The lower limit of quantitation (LOQ) was 10 ng from 1 ml of plasma, with a signal-to-noise ratio of 15:1. Inter-day precision and bias of quality control samples (QCs) were <5% for plasma and <7% for urine. Selectivity was established against six other antihistamines, three analogs of efletirizine, and on 12 control plasma lots and nine control urine lots. Recovery was 90.0% for EFZ and 89.5% for I.S. from plasma. One hundred samples can be processed every 2.75 h on a 10-module RapidTrace workstation with minimal human attention. Method ruggedness was tested on three brands of SPE and six different lots of one SPE brand. Performance ruggedness was demonstrated by different analysts on multiple HPLC systems. Analyte stability through sample storage, the extraction process (benchtop, freeze-thaw, refrigeration after extraction) and chromatography (on-system, reinjection) was established. PMID:10448959
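    A linearity range such as 10-2000 ng/mL is typically established by least-squares regression of the peak-area ratio against standard concentration, after which unknowns are back-calculated from the fitted line. A sketch with hypothetical calibration data:

```python
# Ordinary least-squares calibration: fit peak-area ratio vs. concentration,
# then back-calculate an unknown. All data points are hypothetical.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [10, 50, 200, 1000, 2000]               # standards, ng/mL
ratio = [0.021, 0.100, 0.399, 2.003, 3.998]    # analyte/I.S. peak-area ratio
slope, intercept = fit_line(conc, ratio)
unknown = (1.50 - intercept) / slope           # back-calculated ng/mL
```
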

  10. Semi-automated building extraction from airborne laser scanning data. (Polish Title: Półautomatyczne modelowanie brył budynków na podstawie danych z lotniczego skaningu laserowego)

    NASA Astrophysics Data System (ADS)

    Marjasiewicz, M.; Malej, T.

    2014-12-01

    The main idea of this project is to introduce a concept for a semi-automated method of building model extraction from Airborne Laser Scanning (ALS) data. The presented method is based on the RANSAC algorithm, which provides automatic collection of planes for roof model creation. In the case of Airborne Laser Scanning, the algorithm can process point clouds affected by noise and erroneous measurements (gross errors). The RANSAC algorithm is based on iterative processing of a set of points in order to estimate a geometric model. Research on applying the algorithm to ALS data was performed in the CloudCompare and SketchUp software. An important aspect of this research was the selection of algorithm parameters, which was made on the basis of the characteristics of the point cloud and the scanned objects. Analysis showed that the accuracy of plane extraction with the RANSAC algorithm does not exceed 20 centimeters for point clouds with a density of 4 pts/m². RANSAC can be successfully used in building modelling based on ALS data. Roofs created by the presented method could be used in visualizations at a much better level than Level of Detail 2 (LoD2) of the CityGML standard; if the model is textured, it can represent LoD3.
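    The RANSAC loop described above can be sketched in a few lines: repeatedly fit a plane to three random points and keep the model with the most inliers. The point cloud, threshold and iteration count below are synthetic stand-ins, not the paper's settings:

```python
import random

# RANSAC plane fit (z = a*x + b*y + c): sample 3 points, solve exactly,
# count inliers within a distance threshold, keep the best model.
def ransac_plane(points, threshold=0.2, iterations=200):
    random.seed(0)  # deterministic for the example
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = random.sample(points, 3)
        det = (x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3)
        if abs(det) < 1e-9:
            continue  # collinear sample, no unique plane
        a = ((z1 - z3) * (y2 - y3) - (z2 - z3) * (y1 - y3)) / det
        b = ((x1 - x3) * (z2 - z3) - (x2 - x3) * (z1 - z3)) / det
        c = z1 - a * x1 - b * y1
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c - p[2]) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers

# A roof-like plane z = 0.25*x + 1 sampled on a grid, plus two gross errors.
cloud = [(0.5 * i, 0.5 * j, 0.25 * (0.5 * i) + 1.0)
         for i in range(10) for j in range(10)]
cloud += [(1.0, 2.0, 9.0), (3.0, 1.0, -4.0)]
model, inliers = ransac_plane(cloud)
```

The gross errors are rejected because no sampled plane can cover both the grid and an outlier within the threshold, which mirrors how RANSAC tolerates noisy ALS returns.
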

  11. Extraction of bistable-percept-related features from local field potential by integration of local regression and common spatial patterns.

    PubMed

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K; Liang, Hualou

    2009-08-01

    Bistable perception arises when an ambiguous stimulus under continuous view is perceived as an alternation of two mutually exclusive states. Such a stimulus provides a unique opportunity for understanding the neural basis of visual perception because it dissociates the perception from the visual input. In this paper, we focus on extracting the percept-related features from the local field potential (LFP) in monkey visual cortex for decoding its bistable structure-from-motion (SFM) perception. Our proposed feature extraction approach consists of two stages. First, we estimate and remove from each LFP trial the nonpercept-related stimulus-evoked activity via a local regression method called locally weighted scatterplot smoothing (LOWESS), exploiting the dissociation between the perception and the stimulus in our experimental paradigm. Second, we use the common spatial patterns approach to design spatial filters based on the residue signals of multiple channels to extract the percept-related features. We exploit a support vector machine (SVM) classifier on the extracted features to decode the reported perception on a single-trial basis. We apply the proposed approach to the multichannel intracortical LFP data collected from the middle temporal (MT) visual cortex in a macaque monkey performing an SFM task. We demonstrate that our approach is effective in extracting the discriminative features of the percept-related activity from LFP and achieves excellent decoding performance. We also find that enhanced gamma band synchronization and reduced alpha and beta band desynchronization may be the underpinnings of the percept-related activity. PMID:19362902
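
The common spatial patterns (CSP) stage of the pipeline above can be sketched as below. This is a minimal textbook-style sketch, not the authors' code; it solves the generalized eigenproblem by whitening the composite covariance, and the log-variance features are what would then feed an SVM.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=1):
    """Common Spatial Patterns: spatial filters whose outputs have
    maximal variance for one class and minimal variance for the other.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns filters as rows, shape (2 * n_filters, n_channels).
    """
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Whiten the composite covariance, then diagonalize class A in the
    # whitened space (equivalent to cov_a w = lambda (cov_a + cov_b) w).
    evals_c, evecs_c = np.linalg.eigh(cov_a + cov_b)
    whitener = evecs_c / np.sqrt(evals_c)
    s_a = whitener.T @ cov_a @ whitener
    evals, evecs = np.linalg.eigh((s_a + s_a.T) / 2)
    w = (whitener @ evecs).T                 # one filter per row
    w = w[np.argsort(evals)[::-1]]           # descending eigenvalues
    # Keep the extremes: best for class A (top) and class B (bottom)
    return np.vstack([w[:n_filters], w[-n_filters:]])

def log_variance_features(trials, filters):
    """Classifier features: log-variance of the spatially filtered trials."""
    return np.array([np.log(np.var(filters @ t, axis=1)) for t in trials])
```

The resulting feature matrix has one row per trial and one column per filter, ready for single-trial classification.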

  12. Incidence of Tooth Size Discrepancy in Different Groups of Malocclusion and its Relation to Extraction

    PubMed Central

    Gaddam, Rajkumar; Arya, Siddarth; Shetty, K Sadashiva

    2015-01-01

    Background: For proper intercuspation, the teeth must be proportional in size. If teeth are mismatched, with unusually large teeth in one arch compared to the other, then an ideal occlusion cannot be attained. This study was done to determine the prevalence of tooth size discrepancies among orthodontic patients in general as well as between different malocclusion groups and sexes, and to analyze the change in the degree of severity of Bolton discrepancy before and after hypothetical premolar extraction. Methods: The study was carried out on 100 randomly collected pre-treatment dental casts. Tooth size analyses were performed on these pre-treatment models, and mesiodistal tooth size ratios were measured as described by Bolton before and after various patterns of hypothetical extraction. Result: The results were statistically evaluated using ANOVA and paired samples t-test. Five out of 100 patients showed severe Bolton discrepancy, with Bolton values (BV) more than 2 standard deviations above or below the mean. No statistically significant difference was seen between males and females, or between the various malocclusion groups. The difference between the pre-treatment and post-extraction BV was statistically significant for first premolar extraction and insignificant for the others. Conclusion: The results of this study offer a new point of view on the question of which teeth to extract when evaluated from the tooth size aspect only. PMID:26225105
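
The Bolton analysis used above reduces to two simple ratios of summed mesiodistal widths. A minimal sketch (function names are hypothetical; the ideal means 91.3 and 77.2 are Bolton's published norms):

```python
def bolton_overall(mand_12, max_12):
    """Overall Bolton ratio: sum of the 12 mandibular mesiodistal
    widths (mm) over the 12 maxillary widths, times 100.
    Bolton's ideal mean is 91.3."""
    return 100.0 * sum(mand_12) / sum(max_12)

def bolton_anterior(mand_6, max_6):
    """Anterior Bolton ratio over the six anterior (canine-to-canine)
    teeth; Bolton's ideal mean is 77.2."""
    return 100.0 * sum(mand_6) / sum(max_6)
```

The hypothetical-extraction comparison in the study amounts to recomputing such ratios after dropping the widths of the extracted premolars from both sums.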

  13. Multiresidue trace analysis of pharmaceuticals, their human metabolites and transformation products by fully automated on-line solid-phase extraction-liquid chromatography-tandem mass spectrometry.

    PubMed

    García-Galán, María Jesús; Petrovic, Mira; Rodríguez-Mozaz, Sara; Barceló, Damià

    2016-09-01

    A novel, fully automated analytical methodology based on dual-column liquid chromatography coupled to tandem mass spectrometry (LC-LC-MS(2)) has been developed and validated for the analysis of 12 pharmaceuticals and 20 metabolites and transformation products in different types of water (influent and effluent wastewaters and surface water). Two LC columns were used - one for pre-concentration of the sample and the second for separation and analysis - so that water samples were injected directly into the chromatographic system. Besides the many advantages of the methodology, such as minimization of the required sample volume and of its manipulation, compounds ionized in both positive and negative mode could be analyzed simultaneously without compromising sensitivity. A comparative study of different mobile phases, gradients and LC pre-concentration columns was carried out to obtain the best analytical performance. Method limits of detection (MLODs) achieved were in the low ngL(-1) range for all the compounds. The method was successfully applied to study the presence of the target analytes in different wastewater and surface water samples collected near the city of Girona (Catalonia, Spain). Data on the environmental presence and fate of pharmaceutical metabolites and TPs are still scarce, highlighting the relevance of the developed methodology. PMID:27343613

  14. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction.

    PubMed

    Hua, Lei; Quan, Chanqin

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) takes only the sdp and word embeddings as input and (2) avoids bias from feature selection by using CNN. We performed experiments on the standard AIMed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN can extract key features automatically, and it is verified that pretrained word embeddings are crucial in the PPI task. PMID:27493967
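
The sdp input to such a model is the shortest path between the two protein mentions in the sentence's dependency graph. A minimal breadth-first-search sketch over a hypothetical parse (not the authors' pipeline; the edge list below is an invented example):

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """BFS over an undirected dependency graph.

    edges: (head, dependent) token pairs from a parse.
    Returns the token sequence from source to target, or None.
    """
    adj = {}
    for h, d in edges:
        adj.setdefault(h, []).append(d)
        adj.setdefault(d, []).append(h)
    prev = {source: None}
    q = deque([source])
    while q:
        tok = q.popleft()
        if tok == target:
            path = []                 # walk predecessors back to source
            while tok is not None:
                path.append(tok)
                tok = prev[tok]
            return path[::-1]
        for nxt in adj.get(tok, []):
            if nxt not in prev:
                prev[nxt] = tok
                q.append(nxt)
    return None
```

For the invented parse of "PROT1 interacts directly with PROT2", the path skips the modifier "directly" and keeps only the relation-bearing tokens, which is exactly why the sdp is a compact input representation.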

  15. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    PubMed Central

    Quan, Chanqin

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) takes only the sdp and word embeddings as input and (2) avoids bias from feature selection by using CNN. We performed experiments on the standard AIMed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN can extract key features automatically, and it is verified that pretrained word embeddings are crucial in the PPI task. PMID:27493967

  16. An object-oriented approach to automated landform mapping: A case study of drumlins

    NASA Astrophysics Data System (ADS)

    Saha, Kakoli; Wells, Neil A.; Munro-Stasiuk, Mandy

    2011-09-01

    This paper details an automated object-oriented approach to mapping landforms from digital elevation models (DEMs), using the example of drumlins in the Chautauqua drumlin field in NW Pennsylvania and upstate New York. Object-oriented classification is highly desirable as it can identify specific shapes in datasets based on both the pixel values in a raster dataset and the contextual information between pixels and extracted objects. The methodology is built specifically for application to the USGS 30 m resolution DEM data, which are freely available to the public and of sufficient resolution to map medium-scale landforms. Using the raw DEM data, as well as derived aspect and slope, Definiens Developer (v.7) was used to perform multiresolution segmentation, followed by rule-based classification, in order to extract individual polygons that represent drumlins. Drumlins obtained by automated extraction were visually and statistically compared to those identified via manual digitization. Detailed morphometric descriptive statistics such as means, ranges, and standard deviations were inspected and compared for length, width, elongation ratio, area, and perimeter. Although the manual and automated results were not always statistically identical, a more detailed comparison of just the drumlins identified by both procedures showed that the automated methods easily matched the manual digitization. Differences between the two methods related to the mapping of compound drumlins and of the smallest and largest drumlins. The automated method generally identified more features in these categories than the manual method and thus outperformed it.

  17. Accumulation and depuration of trinitrotoluene and related extractable and nonextractable (bound) residues in marine fish and mussels.

    PubMed

    Lotufo, Guilherme R; Belden, Jason B; Fisher, Jonathon C; Chen, Shou-Feng; Mowery, Richard A; Chambliss, C Kevin; Rosen, Gunther

    2016-03-01

    To determine whether trinitrotoluene (TNT) forms nonextractable residues in mussels (Mytilus galloprovincialis) and fish (Cyprinodon variegatus), and to measure the relative degree of accumulation as compared to extractable TNT and its major metabolites, organisms were exposed to water fortified with (14)C-TNT. After 24 h, nonextractable residues made up 75% (mussel) and 83% (fish) of total radioactivity, while TNT accounted for 2%. Depuration half-lives for extractable TNT, aminodinitrotoluenes (ADNTs) and diaminonitrotoluenes (DANTs) were short initially (<0.5 h), but longer for nonextractable residues. Nonextractable residues from organisms were identified as ADNTs and DANTs using 0.1 M HCl for solubilization followed by liquid chromatography-tandem mass spectrometry. Recovered metabolites accounted for only a small fraction of the bound residue quantified using a radiotracer, likely because of low extraction or hydrolysis efficiency or alternative pathways of incorporation of the radiolabel into tissue. PMID:26708767

  18. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2014-07-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint on the routine use of the DTT assay, and on integrating it with large-scale health studies, is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 12% for standards, 4% for ambient samples) and a reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on the DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting that regional sources contribute to the PM oxidative potential in the southeastern US. However, the greater heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.
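
The DTT activity reported above (nmol min-1) is the rate at which the assay's DTT pool is consumed, typically taken as the slope of remaining DTT versus time. A minimal sketch of that rate computation (function name and the measurement series in the test are hypothetical):

```python
def dtt_activity(times_min, dtt_nmol):
    """DTT consumption rate (nmol per minute), computed as the negative
    slope of an ordinary least-squares line fitted through
    (time, remaining DTT) measurement points."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_d = sum(dtt_nmol) / n
    slope = (sum((t - mean_t) * (d - mean_d)
                 for t, d in zip(times_min, dtt_nmol))
             / sum((t - mean_t) ** 2 for t in times_min))
    return -slope   # DTT decreases over time, so activity is -slope
```

In the assay itself the remaining-DTT values come from absorbance readings after reaction with DTNB; a blank-corrected rate would subtract the slope measured without PM extract.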

  19. Inhibitive Effects of Mulberry Leaf-Related Extracts on Cell Adhesion and Inflammatory Response in Human Aortic Endothelial Cells

    PubMed Central

    Chao, P.-Y.; Lin, K.-H.; Chiu, C.-C.; Yang, Y.-Y.; Huang, M.-Y.; Yang, C.-M.

    2013-01-01

    Effects of mulberry leaf-related extracts (MLREs) on hydrogen peroxide-induced DNA damage in human lymphocytes and on inflammatory signaling pathways in human aortic endothelial cells (HAECs) were studied. The tested MLREs were rich in flavonols, especially bombyx faeces tea (BT), which was rich in quercetin and kaempferol. Polyphenols, flavonoids, and anthocyanidins also abounded in BT. The best trolox equivalent antioxidant capacity (TEAC) was generated from the acidic methanolic extracts of BT. Acidic methanolic and water extracts of mulberry leaf tea (MT), mulberry leaf (M), and BT significantly inhibited oxidative DNA damage to lymphocytes based on the comet assay, as compared to the H2O2-treated group. TNF-α-induced monocyte-endothelial cell adhesion was significantly suppressed by MLREs. Additionally, nuclear factor kappa B (NF-κB) expression was significantly reduced by BT and MT. Significant reductions were also observed in both NF-κB and activator protein (AP)-1 DNA binding by MLREs. Significant increases in peroxisome proliferator-activated receptor (PPAR) α and γ DNA binding by MLREs were also detected in M and MT extracts, but no evidence for PPAR α DNA binding in 50 μg/mL MT extract was found. Apparently, MLREs can provide distinct cytoprotective mechanisms that may contribute to their putative beneficial effects in suppressing endothelial responses to cytokines during inflammation. PMID:24371453

  20. Automated lithocell

    NASA Astrophysics Data System (ADS)

    Englisch, Andreas; Deuter, Armin

    1990-06-01

    Integration and automation have gained more and more ground in modern IC-manufacturing. It is difficult to make a direct calculation of the profit these investments yield. On the other hand, the demands on man, machine and technology have increased enormously of late; it is not difficult to see that only by means of integration and automation can these demands be coped with. Here are some salient points:
    - the complexity and costs incurred by the equipment and processes have become significantly higher;
    - owing to the reduction of all dimensions, the tolerances within which the various process steps have to be carried out have become smaller and smaller, and adherence to these tolerances more and more difficult;
    - the cycle time has become more and more important, both for the development and control of new processes and, to a great extent, for a rapid and reliable supply to the customer.
    In order that the products be competitive under these conditions, all sorts of costs have to be reduced and the yield has to be maximized. Therefore, computer-aided control of the equipment and the process, combined with automatic data collection and real-time SPC (statistical process control), has become absolutely necessary for successful IC-manufacturing. Human errors must be eliminated from the execution of the various process steps by automation. The work time set free in this way makes it possible for human creativity to be employed on a larger scale in stabilizing the processes. Besides, computer-aided equipment control can ensure the optimal utilization of the equipment round the clock.

  1. Automation in haemostasis.

    PubMed

    Huber, A R; Méndez, A; Brunner-Agten, S

    2013-01-01

    Automatia, an ancient Greek goddess of luck who makes things happen by themselves, of her own will and without human engagement, is present in our daily life in the medical laboratory. Automation has been introduced and perfected by clinical chemistry and has since expanded into other fields such as haematology, immunology, molecular biology and also coagulation testing. The initial small and relatively simple standalone instruments have been replaced by more complex systems that allow for multitasking. Integration of automated coagulation testing into total laboratory automation has become possible in recent years. Automation has many strengths and opportunities, provided its weaknesses and threats are respected. On the positive side, standardization, reduction of errors, reduction of cost and increase of throughput are clearly beneficial. Dependence on manufacturers, high initiation cost and somewhat expensive maintenance are less favourable factors. The modern laboratory, and especially today's laboratory technicians and academic personnel, do not add value for the doctor and his patients by spending lots of time behind the machines. In the future, the laboratory needs to contribute at the bedside, suggesting laboratory testing and providing support and interpretation of the obtained results. The human factor will continue to play an important role in haemostasis testing, yet under different circumstances. PMID:23460141

  2. AutoLink: Automated sequential resonance assignment of biopolymers from NMR data by relative-hypothesis-prioritization-based simulated logic

    NASA Astrophysics Data System (ADS)

    Masse, James E.; Keller, Rochus

    2005-05-01

    We have developed a new computer algorithm for determining the backbone resonance assignments for biopolymers. The approach we have taken, relative hypothesis prioritization, is implemented as a Lua program interfaced to the recently developed computer-aided resonance assignment (CARA) program. Our program can work with virtually any spectrum type, and is especially good with NOESY data. The results of the program are displayed in an easy-to-read, color-coded, graphic representation, allowing users to assess the quality of the results in minutes. Here we report the application of the program to two RNA recognition motifs of Apobec-1 Complementation Factor. The assignment of these domains demonstrates AutoLink's ability to deliver accurate resonance assignments from very minimal data and with minimal user intervention.

  3. Rapid and automated analysis of aflatoxin M1 in milk and dairy products by online solid phase extraction coupled to ultra-high-pressure-liquid-chromatography tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Pagano, Imma; Russo, Mariateresa; Rastrelli, Luca

    2016-01-01

    This study reports a fast and automated analytical procedure for the analysis of aflatoxin M1 (AFM1) in milk and dairy products. The method is based on simultaneous protein precipitation and AFM1 extraction by salt-induced liquid-liquid extraction (SI-LLE), followed by online solid-phase extraction (online SPE) coupled to ultra-high-pressure-liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis for the automatic pre-concentration, clean-up and sensitive and selective determination of AFM1. The main parameters affecting the extraction efficiency and accuracy of the analytical method were studied in detail. In the optimal conditions, acetonitrile and NaCl were used as extraction/denaturant solvent and salting-out agent in SI-LLE, respectively. After centrifugation, the organic phase (acetonitrile) was diluted with water (1:9 v/v) and purified (1 mL) by an online C18 cartridge coupled with an UHPLC column. Finally, selected reaction monitoring (SRM) acquisition mode was applied to the detection of AFM1. Validation studies were carried out on different dairy products (whole and skimmed cow milk, yogurt, goat milk, and powder infant formula), providing method quantification limits about 25 times lower than the AFM1 maximum levels permitted by EU regulation 1881/2006 in milk and dairy products for direct human consumption. Recoveries (86-102%) and repeatability (RSD <3%, n=6) meet the performance criteria required by EU regulation N. 401/2006 for the determination of the levels of mycotoxins in foodstuffs. Moreover, no matrix effects were observed in the different milk and dairy products studied. The proposed method improves the performance of AFM1 analysis in milk samples, as AFM1 determination is performed with a degree of accuracy higher than the conventional methods. Other advantages are the reduction of the sample preparation procedure, time and cost of the analysis, enabling a high sample throughput that meets the current concerns of food safety and the public

  4. The relative allergenicity of Stachybotrys chartarum compared to house dust mite extracts in a mouse model

    EPA Science Inventory

    A report by the Institute of Medicine suggested that more research is needed to better understand mold effects on allergic disease, particularly asthma development. The authors compared the ability of the fungus Stachybotrys chartarum (SCE) and house dust mite (HDM) extracts to i...

  5. Prevention of medication-related osteonecrosis of the jaws secondary to tooth extractions. A systematic review

    PubMed Central

    Limeres, Jacobo

    2016-01-01

    Background A study was made to identify the most effective protocol for reducing the risk of osteonecrosis of the jaws (ONJ) following tooth extraction in patients subjected to treatment with antiresorptive or antiangiogenic drugs. Material and Methods A MEDLINE and SCOPUS search (January 2003 - March 2015) was made with the purpose of conducting a systematic literature review based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. All articles contributing information on tooth extractions in patients treated with oral or intravenous antiresorptive or antiangiogenic drugs were included. Results Only 13 of the 380 selected articles were finally included in the review: 11 and 5 of them offered data on patients treated with intravenous and oral bisphosphonates, respectively. No randomized controlled trials were found; all publications corresponded to case series or cohort studies. The prevalence of ONJ in the patients treated with intravenous and oral bisphosphonates was 6.9% (range 0-34.7%) and 0.47% (range 0-2.5%), respectively. The main preventive measures comprised local and systemic infection control. Conclusions No conclusive scientific evidence is available to date on the efficacy of ONJ prevention protocols in patients treated with antiresorptive or antiangiogenic drugs subjected to tooth extraction. Key words: Bisphosphonates, angiogenesis inhibitors, antiresorptive drugs, extraction, osteonecrosis. PMID:26827065

  6. Dynamics of cryogen deposition relative to heat extraction rate during cryogen spray cooling

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Majaron, Boris; Aguilar, Guillermo; Svaasand, Lars O.; Nelson, J. Stuart

    2000-05-01

    The goal is to investigate how delivery nozzle design influences the cooling rate of cryogen sprays as used in skin laser treatments. Cryogen was sprayed through nozzles consisting of metal tubes with either a narrow or a wide diameter, in two different lengths. Fast-flashlamp photography showed that the wide nozzles, in particular the long wide one, produced a cryogen jet (very small spray cone angle) rather than a spray (cone angle of about 15 degrees or higher) and appeared to atomize the cryogen less finely than the narrow nozzles. We measured the cooling rate by spraying cryogen onto an epoxy block with embedded thermocouples. The heat extraction rate of the wide nozzles was higher than that of the narrow nozzles. The results suggest that the finely atomized droplets produced by the narrow nozzles do not have enough kinetic energy to break through the layer of liquid cryogen accumulated on the object, which may act as a thermal barrier and thus slow down heat extraction. Presumably, larger droplets or non-broken jets ensure a more violent impact on this layer and therefore an enhanced thermal contact. The margin of error of the heat extraction estimate when using the epoxy block is analyzed. We introduce a complementary method for estimating the heat extraction rate of cryogen sprays.

  7. Cinnamon polyphenol extract regulates tristetraprolin and related gene expression in mouse adipocytes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cinnamon (Cinnamomum verum) has been widely used in spices, flavoring agents, and preservatives. Cinnamon polyphenol extract (CPE) may be important in the alleviation of chronic diseases, but the molecular evidence is not substantial. Tristetraprolin (TTP) family proteins have anti-inflammatory ef...

  8. Intravenous immune globulin use in patients with human immunodeficiency virus-related thrombocytopenia who require dental extraction.

    PubMed Central

    Rarick, M. U.; Burian, P.; de Guzman, N.; Espina, B.; Montgomery, T.; Jamin, D.; Levine, A. M.

    1991-01-01

    Five patients with human immunodeficiency virus (HIV)-related immune thrombocytopenia who were undergoing dental extraction were treated with intravenous immune globulin (IVIG). All patients received IVIG, 1 gram per kg, the day before the dental extraction and again the day of the dental extraction. Four patients had a previous history of minor clinical bleeding. The median baseline platelet count before extraction was 20 X 10(9) per liter (range 13 to 44). The median peak platelet count was 100 X 10(9) per liter (range 56 to 528) following infusion. This peak response was achieved by day 2 in 3 patients and by days 5 and 7 in 1 patient each. No patients had complications or toxicity from the infusions or perioperative bleeding. No patients required blood product transfusions for the surgical procedure. In conclusion, IVIG infusion should be considered in patients with HIV-related immune thrombocytopenia requiring surgical procedures when a prompt rise in platelet count is desired. PMID:1812630

  9. Exploiting the UMLS Metathesaurus for extracting and categorizing concepts representing signs and symptoms to anatomically related organ systems

    PubMed Central

    Tran, Le-Thuy T.; Divita, Guy; Carter, Marjorie E.; Judd, Joshua; Samore, Matthew H.; Gundlapalli, Adi V.

    2016-01-01

    Objective To develop a method to exploit the UMLS Metathesaurus for extracting and categorizing concepts found in clinical text representing signs and symptoms to anatomically related organ systems. The overarching goal is to classify patient reported symptoms to organ systems for population health and epidemiological analyses. Materials and methods Using the concepts’ semantic types and the inter-concept relationships as guidance, a selective portion of the concepts within the UMLS Metathesaurus was traversed starting from the concepts representing the highest level organ systems. The traversed concepts were chosen, filtered, and reviewed to obtain the concepts representing clinical signs and symptoms by blocking deviations, pruning superfluous concepts, and manual review. The mapping process was applied to signs and symptoms annotated in a corpus of 750 clinical notes. Results The mapping process yielded a total of 91,000 UMLS concepts (with approximately 300,000 descriptions) possibly representing physical and mental signs and symptoms that were extracted and categorized to the anatomically related organ systems. Of 1864 distinct descriptions of signs and symptoms found in the 750 document corpus, 1635 of these (88%) were successfully mapped to the set of concepts extracted from the UMLS. Of 668 unique concepts mapped, 603 (90%) were correctly categorized to their organ systems. Conclusion We present a process that facilitates mapping of signs and symptoms to their organ systems. By providing a smaller set of UMLS concepts to use for comparing and matching patient records, this method has the potential to increase efficiency of information extraction pipelines. PMID:26362345
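
The top-down traversal described in the Materials and methods can be sketched as a breadth-first walk over narrower-concept relationships. Everything below (the concept names and the `narrower` map) is a hypothetical stand-in for UMLS Metathesaurus data, not the authors' pipeline:

```python
from collections import deque

def categorize_to_systems(roots, narrower):
    """Traverse a concept hierarchy top-down, labeling each reachable
    concept with the organ-system root it descends from.

    roots: organ-system concept names (the traversal starting points);
    narrower: dict mapping a concept to its narrower (child) concepts.
    Returns {concept: organ_system}; the first system to reach a
    concept wins, mimicking a simple tie-breaking policy.
    """
    system_of = {}
    for root in roots:
        q = deque([root])
        while q:
            concept = q.popleft()
            for child in narrower.get(concept, []):
                if child not in system_of:
                    system_of[child] = root
                    q.append(child)
    return system_of
```

In the real pipeline the edges would come from UMLS inter-concept relationships, restricted by semantic type and pruned by the blocking and manual-review steps the abstract describes.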

  10. Bioactive compounds extracted from Indian wild legume seeds: antioxidant and type II diabetes-related enzyme inhibition properties.

    PubMed

    Gautam, Basanta; Vadivel, Vellingiri; Stuetz, Wolfgang; Biesalski, Hans K

    2012-03-01

    Seven different wild legume seeds (Acacia leucophloea, Bauhinia variegata, Canavalia gladiata, Entada scandens, Mucuna pruriens, Sesbania bispinosa and Tamarindus indica) from various parts of India were analyzed for total free phenolics, l-Dopa (l-3,4 dihydroxyphenylalanine), phytic acid and their antioxidant capacity (ferric-reducing antioxidant power [FRAP] and 2,2-diphenyl-1-picrylhydrazyl [DPPH] assays) and type II diabetes-related enzyme inhibition activity (α-amylase). S. bispinosa had the highest content of both total free phenolics and l-Dopa, and relatively low phytic acid compared with the other seeds. Phytic acid content, being highest in E. scandens, M. pruriens and T. indica, was highly predictive for the FRAP (r = 0.47, p < 0.05) and DPPH (r = 0.66, p < 0.001) assays. The phenolic extract from T. indica and the l-Dopa extract from E. scandens showed significantly higher FRAP values than the others. All seed extracts demonstrated a remarkable reducing power (7-145 mM FeSO4 per mg extract), DPPH radical scavenging activity (16-95%) and α-amylase enzyme inhibition activity (28-40%). PMID:21970446

  11. Automated measurement of centering errors and relative surface distances for the optimized assembly of micro-optics

    NASA Astrophysics Data System (ADS)

    Langehanenberg, Patrik; Dumitrescu, Eugen; Heinisch, Josef; Krey, Stefan; Ruprecht, Aiko K.

    2011-03-01

    For any kind of optical compound system, the precise geometric alignment of every single element according to the optical design is essential to obtain the desired imaging properties. In this contribution we present a measurement system for the determination of the complete set of geometric alignment parameters in assembled systems. The deviation of each center of curvature with respect to a reference axis is measured with an autocollimator system. These data are further processed in order to provide the shift and tilt of an individual lens or group of lenses with respect to a defined reference axis. Previously it was shown that such an instrument can measure the centering errors of up to 40 surfaces within a system under test with accuracies in the range of an arc second. In addition, the relative distances of the optical surfaces (center thicknesses of lens elements, air gaps in between) are optically determined in the same measurement system by means of low-coherence interferometry. Subsequently, the acquired results can be applied for the compensation of the detected geometric alignment errors before the assembly is finally bonded (e.g., glued). The presented applications mainly include measurements of miniaturized lens systems like mobile phone optics. However, any type of objective lens, from endoscope imaging systems up to very complex objective lenses used in microlithography, can be analyzed with the presented measurement system.

  12. Relation between various soil phosphorus extraction methods and sorption parameters in calcareous soils with different texture.

    PubMed

    Jalali, Mohsen; Jalali, Mahdi

    2016-10-01

    The aim of this study was to investigate the influence of soil texture on phosphorus (P) extractability and sorption in a wide range of calcareous soils across Hamedan, western Iran. Fifty-seven soil samples were selected and partitioned into five types on the basis of soil texture (clay, sandy, sandy clay loam, sandy loam and mixed loam), and P was extracted with calcium chloride (PCaCl2), citrate (Pcitrate), HCl (PHCl), Olsen (POls), and Mehlich-3 (PM3) solutions. On average, the P extracted was in the order PHCl>PM3>Pcitrate>POls>PCaCl2. The P extracted by the Pcitrate, PHCl, POls, and PM3 methods was significantly higher in the sandy, sandy clay loam and sandy loam textures than in the clay and mixed loam textures, while the soil phosphorus buffer capacity (PBC) was significantly higher in the clay and mixed loam soil textures. The correlation analysis revealed a significant positive relationship between silt content and the Freundlich sorption coefficient (KF), maximum P sorption (Qmax), linear distribution coefficient (Kd), and PBC. All extractions were highly correlated with each other and, among soil components, with silt content. The principal component analysis (PCA) performed on the data identified five principal components describing 74.5% of the total variation. The results point to soil texture as an important factor, with silt being the crucial soil property associated with P sorption and its extractability in these calcareous soils. DPSM3-2 (PM3/(PM3+Qmax)×100) and DPScitrate (Pcitrate/(Pcitrate+Qmax)×100) proved to be good indicators of a soil's potential P release in these calcareous soils. Among the DPS indices, 21% of soils had DPSM3-2 values higher than the environmental threshold, indicating a build-up of P and P release. Most of the studied sandy clay loam soils exceeded the environmentally acceptable P concentration. Various management practices should be taken into account to reduce P losses from these soils. Further inorganic and organic P fertilizer inputs should be reduced
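
The degree-of-P-saturation (DPS) indices used in the abstract share one form: extractable P divided by extractable P plus the sorption maximum Qmax, times 100. A minimal helper (the function name is hypothetical; both arguments must be in the same units, e.g. mg kg-1):

```python
def degree_of_p_saturation(p_extracted, q_max):
    """Degree of phosphorus saturation (%) of a soil sample:
    DPS = P / (P + Qmax) * 100, where P is the extractable P
    (e.g. Mehlich-3 or citrate) and Qmax the sorption maximum."""
    return 100.0 * p_extracted / (p_extracted + q_max)
```

A sample with 25 mg kg-1 extractable P and Qmax of 75 mg kg-1 would thus have a DPS of 25%, to be compared against an environmental threshold.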

  13. GDRMS: a system for automatic extraction of the disease-centre relation

    NASA Astrophysics Data System (ADS)

    Yang, Ronggen; Zhang, Yue; Gong, Lejun

    2012-01-01

    With the rapid increase in biomedical literature, the deluge of new articles is leading to information overload, and extracting the available knowledge from this huge body of literature has become a major challenge. GDRMS is a tool that extracts disease-gene and gene-gene relationships from the biomedical literature using text mining technology. It is a rule-based system that also provides disease-centre network visualization, constructs a disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to give the research community studying the etiology of disease a valuable means of exploring disease-gene relationships.

  15. HITSZ_CDR: an end-to-end chemical and disease relation extraction system for BioCreative V

    PubMed Central

    Li, Haodi; Tang, Buzhou; Chen, Qingcai; Chen, Kai; Wang, Xiaolong; Wang, Baohua; Wang, Zhe

    2016-01-01

    In this article, an end-to-end system is proposed for the challenge task of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction in BioCreative V, where DNER includes disease mention recognition (DMR) and normalization (DN). Evaluation on the challenge corpus showed that our system achieved the highest F1-scores: 86.93% on DMR, 84.11% on DN, and 43.04% on CID relation extraction. The F1-score on DMR is higher than our previous one reported by the challenge organizers (86.76%), which was the highest F1-score of the challenge. Database URL: http://database.oxfordjournals.org/content/2016/baw077 PMID:27270713
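    The F1-scores reported throughout this record are the standard harmonic mean of precision and recall; a minimal sketch (the counts below are invented for illustration, not taken from the challenge):

    ```python
    def f1_score(tp, fp, fn):
        """F1 = harmonic mean of precision and recall, computed from
        true-positive, false-positive and false-negative counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Invented counts for illustration only.
    print(f"F1 = {f1_score(tp=430, fp=120, fn=310):.2%}")
    ```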

  17. Broccoli sprout extract induces detoxification-related gene expression and attenuates acute liver injury

    PubMed Central

    Yoshida, Kazutaka; Ushida, Yusuke; Ishijima, Tomoko; Suganuma, Hiroyuki; Inakuma, Takahiro; Yajima, Nobuhiro; Abe, Keiko; Nakai, Yuji

    2015-01-01

    AIM: To investigate the effects of broccoli sprout extract (BSEx) on liver gene expression and acute liver injury in the rat. METHODS: First, the effects of BSEx on liver gene expression were examined. Male rats were divided into two groups. The Control group was fed the AIN-76 diet, and the BSEx group was fed the AIN-76 diet containing BSEx. After a 10-d feeding period, rats were sacrificed and their livers were used for DNA microarray and real-time reverse transcription-polymerase chain reaction (RT-PCR) analyses. Next, the effects of BSEx on acute liver injury were examined. In experiments using acute liver injury models, 1000 mg/kg acetaminophen (APAP) or 350 mg/kg D-galactosamine (D-GalN) was used to induce injury. These male rats were divided into four groups: Control, BSEx, Inducer (APAP or D-GalN), and Inducer+BSEx. The feeding regimens were identical for the two analyses. Twenty-four hours following APAP administration via p.o. or D-GalN administration via i.p., rats were sacrificed to determine serum aspartate transaminase (AST) and alanine transaminase (ALT) levels, hepatic glutathione (GSH) and thiobarbituric acid-reactive substances accumulation and glutathione-S-transferase (GST) activity. RESULTS: Microarray and real-time RT-PCR analyses revealed that BSEx upregulated the expression of genes related to detoxification and glutathione synthesis in normal rat liver. The levels of AST (70.91 ± 15.74 IU/mL vs 5614.41 ± 1997.83 IU/mL, P < 0.05) and ALT (11.78 ± 2.08 IU/mL vs 1297.71 ± 447.33 IU/mL, P < 0.05) were significantly suppressed in the APAP + BSEx group compared with the APAP group. The level of GSH (2.61 ± 0.75 nmol/g tissue vs 1.66 ± 0.59 nmol/g tissue, P < 0.05) and liver GST activity (93.19 ± 16.55 U/g tissue vs 51.90 ± 16.85 U/g tissue, P < 0.05) were significantly increased in the APAP + BSEx group compared with the APAP group. AST (4820.05 ± 3094.93 IU/mL vs 12465.63 ± 3223.97 IU/mL, P < 0.05) and ALT (1808.95 ± 1014.04 IU/mL vs

  18. Extracting energy from black holes: The Blandford-Znajek mechanism and related problems

    NASA Astrophysics Data System (ADS)

    Li, Li-Xin

    2001-10-01

    In this dissertation I investigate several scenarios for extracting energy from a black hole and consider their applications in astronomy; among them, the Blandford-Znajek mechanism has long been thought promising for powering many energetic astronomical phenomena ranging from gamma-ray bursts to extragalactic jets. I first demonstrate that, contrary to traditional thought, under quite general assumptions the original Blandford-Znajek mechanism cannot work efficiently. With a toy model, I show that for a Kerr black hole with a thin disk the energy extracted from the disk always dominates the energy extracted from the black hole. By considering the screw instability of the magnetic field, I show that a stringent limit exists on the distance from the load to the black hole. The latter reveals that the instability of magnetic fields has important effects on the Blandford-Znajek mechanism, a topic that has been neglected in all previous studies. I then turn to a variant of the Blandford-Znajek mechanism in which the magnetic field lines threading the black hole horizon close on the disk rather than on the remote load. I find that the magnetic field connecting the black hole to the disk is very efficient in transferring energy and angular momentum between them. If the black hole rotates faster than the disk, the rotational energy of the black hole is extracted by the disk and ultimately radiated to infinity. This may greatly enhance the efficiency of the disk and produce interesting observational effects. Finally, I present a Tokamak model for gamma-ray bursts powered by the Blandford-Znajek mechanism, which demonstrates that in very specific cases the mechanism may work efficiently for a Kerr black hole with a thick disk or torus.

  19. Validation of a sensitive and automated 96-well solid-phase extraction liquid chromatography-tandem mass spectrometry method for the determination of desloratadine and 3-hydroxydesloratadine in human plasma.

    PubMed

    Yang, Liyu; Clement, Robert P; Kantesaria, Bhavna; Reyderman, Larisa; Beaudry, Francis; Grandmaison, Charles; Di Donato, Lorella; Masse, Robert; Rudewicz, Patrick J

    2003-07-25

    To support clinical development, a liquid chromatography-tandem mass spectrometry (LC-MS-MS) method was developed and validated for the determination of desloratadine (descarboethoxyloratadine) and 3-OH desloratadine (3-hydroxydescarboethoxyloratadine) concentrations in human plasma. The method consisted of automated 96-well solid-phase extraction for sample preparation and liquid chromatography/turbo ionspray tandem mass spectrometry for analysis. [2H(4)]Desloratadine and [2H(4)]3-OH desloratadine were used as internal standards (I.S.). A quadratic regression (weighted 1/concentration^2) gave the best fit for calibration curves over the concentration range of 25-10000 pg/ml for both desloratadine and 3-OH desloratadine. There was no interference from endogenous components in the blank plasma tested. The accuracy (%bias) at the lower limit of quantitation (LLOQ) was -12.8 and +3.4% for desloratadine and 3-OH desloratadine, respectively. The precision (%CV) for samples at the LLOQ was 15.1 and 10.9% for desloratadine and 3-OH desloratadine, respectively. For quality control samples at 75, 1000 and 7500 pg/ml, the between-run %CV was extracts (up to 185 h at 5 degrees C). This LC-MS-MS method for the determination of desloratadine and 3-OH desloratadine in human plasma met regulatory requirements for selectivity, sensitivity, goodness of fit, precision, accuracy and stability. PMID:12860030
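    A quadratic calibration weighted by 1/concentration^2, as described above, amounts to ordinary weighted least squares; a minimal sketch in which the calibration points are invented for illustration and are not the study's data:

    ```python
    import numpy as np

    # Invented calibration points (pg/ml vs. peak-area ratio), illustration only.
    conc = np.array([25., 100., 500., 1000., 5000., 10000.])
    resp = np.array([0.012, 0.050, 0.26, 0.53, 2.8, 5.9])

    # Quadratic model resp = a*conc^2 + b*conc + c, weighted by 1/conc^2:
    # scale each design-matrix row and response by sqrt(weight) = 1/conc.
    X = np.vander(conc, 3)            # columns: conc^2, conc, 1
    w = 1.0 / conc                    # sqrt of the 1/conc^2 weights
    coef, *_ = np.linalg.lstsq(X * w[:, None], resp * w, rcond=None)
    a, b, c = coef

    def back_calc(y):
        """Back-calculate concentration from a response by solving
        the fitted quadratic for its positive root."""
        roots = np.roots([a, b, c - y])
        real = roots[np.isreal(roots)].real
        return real[real > 0].min()

    print(back_calc(0.53))  # should land near the 1000 pg/ml calibrator
    ```

    The 1/concentration^2 weighting equalizes the *relative* residuals across the range, which is why it is standard for calibration curves spanning several orders of magnitude.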

  20. Automated Agitation-Assisted Demulsification Dispersive Liquid-Liquid Microextraction.

    PubMed

    Guo, Liang; Chia, Shao Hua; Lee, Hian Kee

    2016-03-01

    Dispersive liquid-liquid microextraction (DLLME) is an extremely fast and efficient sample preparation procedure. For its capability and applicability to be fully exploited, full automation of its operations, seamlessly integrated with analysis, is necessary. In this work, for the first time, fully automated agitation-assisted demulsification (AAD)-DLLME integrated with gas chromatography/mass spectrometry was developed for the convenient and efficient determination of polycyclic aromatic hydrocarbons (PAHs) in environmental water samples. The use of a commercially available multipurpose autosampler equipped with two microsyringes of different capacities allowed the elimination or significant reduction of manpower, labor, and time, with the large-volume microsyringe used for liquid transfers and the small-volume microsyringe for extract collection and injection for analysis. Apart from enhancing the accessibility of DLLME, the procedure was characterized by the application of agitation after extraction to break up the emulsion (which would otherwise need centrifugation or a demulsification solvent), further improving overall operational efficiency and flexibility. Additionally, the use of a low-density solvent as extractant facilitated easy collection of the extract as the upper layer over water. Parameters affecting the automated AAD-DLLME procedure were investigated. Under the optimized conditions, the procedure provided good linearity (ranging from a minimum of 0.1-0.5 μg/L to a maximum of 50 μg/L), low limits of detection (0.010-0.058 μg/L), and good repeatability of the extractions (relative standard deviations below 5.3%, n = 6). The proposed method was applied to analyze PAHs in real river water samples. PMID:26818217

  1. Celery Seed and Related Extracts with Antiarthritic, Antiulcer, and Antimicrobial Activities.

    PubMed

    Powanda, Michael C; Whitehouse, Michael W; Rainsford, K D

    2015-01-01

    Celery preparations have been used extensively for several millennia as natural therapies for acute and chronic painful or inflammatory conditions. This chapter reviews some of the biological and chemical properties of various celery preparations that have been used as natural remedies, many of which vary in activity and product quality. A fully standardized preparation, an alcoholic extract of the seeds of a plant source derived from northern India, has been prepared and is termed Celery Seed Extract (CSE). CSE has been found to be at least as effective as aspirin, ibuprofen, and naproxen in suppressing arthritis in a model of polyarthritis, and can also reduce existing inflammation in rats. CSE has also been shown to provide analgesia in two model systems. In addition to acting as an analgesic and anti-inflammatory agent, CSE has been shown to protect against and/or reduce gastric irritation caused by NSAIDs, as well as to act synergistically with them to reduce inflammation. The CSE was fractionated by organic solvent extractions, subjected to column chromatography followed by HPLC, and characterized by mass spectrometry. This yielded a purified component with specific inhibitory effects on Helicobacter pylori that was not active against Campylobacter jejuni or Escherichia coli. Additionally, toxicology studies did not reveal any clear signs of toxicity at doses relevant to human use. Also, unlike many dietary supplements, the available data suggest that CSE does not significantly affect the p450 enzyme systems and thus is less likely to alter the metabolism of drugs the individual may be taking. CSE may be a prototype of a natural product that can be used therapeutically to treat arthritis and other inflammatory diseases. PMID:26462366

  2. Standard operation protocol for analysis of lipid hydroperoxides in human serum using a fully automated method based on solid-phase extraction and liquid chromatography-mass spectrometry in selected reaction monitoring.

    PubMed

    Ferreiro-Vera, C; Ribeiro, Joana P N; Mata-Granados, J M; Priego-Capote, F; Luque de Castro, M D

    2011-09-23

    Standard operating procedures (SOPs) are of paramount importance in the analytical field to ensure the reproducibility of results among laboratories, and they gain special interest when the aim is the analysis of potentially unstable compounds. An SOP for the analysis of lipid hydroperoxides (HpETEs) in human serum, from sampling to final analysis, is reported here after optimization of the critical steps. The method is based on automated hyphenation of solid-phase extraction (SPE) and liquid chromatography-mass spectrometry (LC-MS). The research involved: (i) optimization of the SPE and LC-MS steps with proper synchronization; (ii) validation of the method, viz. an accuracy study (86.4% as the minimum value), evaluation of sensitivity and precision (quantification limits ranged from 2.5 to 7.0 ng/mL, i.e. 0.25-0.70 ng on column, with precision below 13.2%), and a robustness study (the cartridge could be reused 5 times without affecting the accuracy and precision of the method); and (iii) a stability study involving freeze-thaw, short-term, long-term and stock-solution stability tests. The results obtained allow both random and systematic variation in the metabolic profiles of the target compounds to be minimized by correct application of the established protocol. PMID:21851945

  3. Determination of nine benzotriazole UV stabilizers in environmental water samples by automated on-line solid phase extraction coupled with high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Liu, Runzeng; Ruan, Ting; Wang, Thanh; Song, Shanjun; Guo, Feng; Jiang, Guibin

    2014-03-01

    A method using automated on-line solid phase extraction (SPE) coupled with high-performance liquid chromatography-tandem mass spectrometry was developed for the determination of emerging benzotriazole UV stabilizers (BZTs) in different environmental water matrices, including river water and sewage influent and effluent. Water samples were injected directly and the analytes were preconcentrated on a Polar Advantage II on-line SPE cartridge. After the cleanup step, the target BZTs were eluted in back-flush mode and then separated on a liquid chromatography column. Experimental parameters such as sample loading flow rate, SPE cartridge, pH value and methanol ratio in the sample were optimized in detail. The method detection limits ranged from 0.21 to 2.17 ng/L. Recoveries of the target BZTs at a 50 ng/L spiking level ranged from 76% to 114%, and the inter-day RSDs ranged from 1% to 15%. The optimized method was successfully applied to analyze twelve water samples collected from different wastewater treatment plants and rivers; five BZTs (UV-P, UV-329, UV-350, UV-234 and UV-328) were detected, with concentrations up to 37.1 ng/L. The proposed method is simple, sensitive and suitable for simultaneous analysis and monitoring of BZTs in water samples. PMID:24468355

  4. Automated Desalting Apparatus

    NASA Technical Reports Server (NTRS)

    Spencer, Maegan K.; Liu, De-Ling; Kanik, Isik; Beegle, Luther

    2010-01-01

    Because salt and metals can mask the signature of a variety of organic molecules (like amino acids) in a given sample, an automated system has been created to purify complex field samples for the analytical techniques of electrospray ionization/mass spectrometry (ESI/MS), capillary electrophoresis (CE), and biological assays in which unique identification requires at least some processing of complex samples. This development allows automated sample preparation in the laboratory and analysis of complex samples in the field with multiple types of analytical instruments. Rather than using tedious, exacting protocols for desalting samples by hand, this innovation, called the Automated Sample Processing System (ASPS), takes analytes that have been extracted through high-temperature solvent extraction and introduces them into the desalting column. After 20 minutes, the eluent is produced; this clear liquid can then be directly analyzed by the techniques listed above. The current apparatus, including the computer and power supplies, is sturdy, has an approximate mass of 10 kg and a volume of about 20 × 20 × 20 cm, and is undergoing further miniaturization. This system currently targets amino acids. For these molecules, a slurry of 1 g cation exchange resin in deionized water is packed into a column of the apparatus. Initial generation of the resin is done by sequentially flowing 2.3 bed volumes of 2N NaOH and 2N HCl (1 mL each) to rinse the resin, followed by 0.5 mL of deionized water. This makes the pH of the resin near neutral and eliminates cross-sample contamination. Afterward, 2.3 mL of extracted sample is loaded onto the top of the resin bed. Because the column is packed tightly, the sample can be applied without disturbing the resin bed. This is a vital step needed to ensure that the analytes adhere to the resin. After the sample is drained, oxalic acid (1 mL, pH 1.6-1.8, adjusted with NH4OH) is pumped into the column. Oxalic acid works as a

  5. Sieve-based coreference resolution enhances semi-supervised learning model for chemical-induced disease relation extraction

    PubMed Central

    Le, Hoang-Quynh; Tran, Mai-Vu; Dang, Thanh Hai; Ha, Quang-Thuy; Collier, Nigel

    2016-01-01

    The BioCreative V chemical-disease relation (CDR) track was proposed to accelerate the progress of text mining in facilitating integrative understanding of chemicals, diseases and their relations. In this article, we describe an extension of our system (namely UET-CAM) that participated in the BioCreative V CDR track. The original UET-CAM system’s performance was ranked fourth among 18 participating systems by the BioCreative CDR track committee. In the Disease Named Entity Recognition and Normalization (DNER) phase, our system employed joint inference (decoding) with a perceptron-based named entity recognizer (NER) and a back-off model with Semantic Supervised Indexing and Skip-gram for named entity normalization. In the chemical-induced disease (CID) relation extraction phase, we proposed a pipeline that includes a coreference resolution module and a Support Vector Machine relation extraction model; the former utilizes a multi-pass sieve to extend entity recall. In this article, the UET-CAM system was improved by adding a ‘silver’ CID corpus to train the prediction model. This silver-standard corpus of more than 50 thousand sentences was automatically built from the Comparative Toxicogenomics Database (CTD). We evaluated our method on the CDR test set. Results showed that our system could reach state-of-the-art performance, with F1-scores of 82.44 for the DNER task and 58.90 for the CID task. Analysis demonstrated substantial benefits from both the multi-pass sieve coreference resolution method (+4.13% F1) and the silver CID corpus (+7.3% F1). Database URL: SilverCID, the silver-standard corpus for CID relation extraction, is freely available online at: https://zenodo.org/record/34530 (doi:10.5281/zenodo.34530).

  6. Knowledge, attitudes, and performance of dental students in relation to sterilization/disinfection methods of extracted human teeth

    PubMed Central

    Hashemipour, Maryam Alsadat; Mozafarinia, Romina; Mirzadeh, Azin; Aramon, Moien; Nassab, Sayed Amir Hossein Gandjalikhan

    2013-01-01

    Background: Dental students use extracted human teeth to learn practical and technical skills before they enter the clinical environment. In the present research, the knowledge, performance, and attitudes of a selected group of Iranian dental students toward sterilization/disinfection methods for extracted human teeth were evaluated. Materials and Methods: In this descriptive cross-sectional study the subjects were fourth-, fifth- and sixth-year dental students. Data were collected by questionnaire and analyzed by Fisher's exact test and the Chi-squared test using SPSS 11.5. Results: In this study, 100 dental students participated. The average knowledge score was 15.9 ± 4.8. Eighty-one students considered sodium hypochlorite a suitable material for sterilization, and 78 students believed that oven sterilization is a good method for the purpose. The average performance score was 4.1 ± 0.8 (3.9 ± 1.7 for males and 4.3 ± 1.1 for females), with no significant difference between the two sexes. The maximum and minimum attitude scores were 60 and 25, with an average of 53.1 ± 5.2. Conclusion: The results of this study indicated that the knowledge, performance and attitudes of dental students in relation to sterilization/disinfection methods for extracted human teeth were good. However, weaknesses were observed in relation to teaching and to the materials suitable for sterilization. PMID:24130583

  7. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  8. Antioxidant Activity and Thermal Stability of Oleuropein and Related Phenolic Compounds of Olive Leaf Extract after Separation and Concentration by Salting-Out-Assisted Cloud Point Extraction

    PubMed Central

    Stamatopoulos, Konstantinos; Katsoyannos, Evangelos; Chatzilazarou, Arhontoula

    2014-01-01

    A fast, clean, energy-saving, non-toxic method for the stabilization of the antioxidant activity and the improvement of the thermal stability of oleuropein and related phenolic compounds separated from olive leaf extract via salting-out-assisted cloud point extraction (CPE) was developed using Tween 80. The process was based on the decrease of the solubility of polyphenols and the lowering of the cloud point temperature of Tween 80 due to the presence of elevated amounts of sulfates (salting-out) and the separation from the bulk solution with centrifugation. The optimum conditions were chosen based on polyphenols recovery (%), phase volume ratio (Vs/Vw) and concentration factor (Fc). The maximum recovery of polyphenols was in total 95.9%; Vs/Vw was 0.075 and Fc was 15 at the following conditions: pH 2.6, ambient temperature (25 °C), 4% Tween 80 (w/v), 35% Na2SO4 (w/v) and a settling time of 5 min. The total recovery of oleuropein, hydroxytyrosol, luteolin-7-O-glucoside, verbascoside and apigenin-7-O-glucoside, at optimum conditions, was 99.8%, 93.0%, 87.6%, 99.3% and 100.0%, respectively. Polyphenolic compounds entrapped in the surfactant-rich phase (Vs) showed higher thermal stability (activation energy (Ea) 23.8 kJ/mol) compared to non-entrapped ones (Ea 76.5 kJ/mol). The antioxidant activity of separated polyphenols remained unaffected as determined by the 1,1-diphenyl-2-picrylhydrazyl method. PMID:26784869

  9. Automatic Extraction and Post-coordination of Spatial Relations in Consumer Language

    PubMed Central

    Roberts, Kirk; Rodriguez, Laritza; Shooshan, Sonya E.; Demner-Fushman, Dina

    2015-01-01

    To incorporate ontological concepts in natural language processing (NLP) it is often necessary to combine simple concepts into complex concepts (post-coordination). This is especially true in consumer language, where a more limited vocabulary forces consumers to utilize highly productive language that is almost impossible to pre-coordinate in an ontology. Our work focuses on recognizing an important case for post-coordination in natural language: spatial relations between disorders and anatomical structures. Consumers typically utilize such spatial relations when describing symptoms. We describe an annotated corpus of 2,000 sentences with 1,300 spatial relations, and a second corpus of 500 of these relations manually normalized to UMLS concepts. We use machine learning techniques to recognize these relations, obtaining good performance. Further, we experiment with methods to normalize the relations to an existing ontology. This two-step process is analogous to the combination of concept recognition and normalization, and achieves comparable results. PMID:26958247

  10. Extraction efficiency of hydrophilic and lipophilic antioxidants from lyophilized foods using pressurized liquid extraction and manual extraction.

    PubMed

    Watanabe, Jun; Oki, Tomoyuki; Takebayashi, Jun; Takano-Ishikawa, Yuko

    2014-09-01

    The efficient extraction of antioxidants from food samples is necessary in order to accurately measure their antioxidant capacities. α-Tocopherol and gallic acid were spiked into samples of 5 lyophilized and pulverized vegetables and fruits (onion, cabbage, Satsuma mandarin orange, pumpkin, and spinach). The lipophilic and hydrophilic antioxidants in the samples were sequentially extracted with a mixed solvent of n-hexane and dichloromethane, and then with acetic acid-acidified aqueous methanol. Duplicate samples were extracted: one set was extracted using an automated pressurized liquid extraction apparatus, and the other set was extracted manually. Spiked α-tocopherol and gallic acid were recovered almost quantitatively in the extracted lipophilic and hydrophilic fractions, respectively, especially when pressurized liquid extraction was used. The expected increase in lipophilic oxygen radical absorbance capacity (L-ORAC) due to spiking with α-tocopherol, and the expected increase in 2,2-diphenyl-1-picrylhydrazyl radical scavenging activities and total polyphenol content due to spiking with gallic acid, were all recovered in high yield. Relatively low recoveries, as reflected in the hydrophilic ORAC (H-ORAC) value, were obtained following spiking with gallic acid, suggesting an interaction between gallic acid and endogenous antioxidants. The H-ORAC values of gallic acid-spiked samples were almost the same as those of postadded (spiked) samples. These results clearly indicate that lipophilic and hydrophilic antioxidants are effectively extracted from lyophilized food, especially when pressurized liquid extraction is used. PMID:25155095

  11. Information extraction from SMS text related to a reminder service for outpatients.

    PubMed

    Rubrichi, Stefania; Eku Ndam, Stéphanie; Battistotti, Andrea; Quaglini, Silvana

    2012-01-01

    This work evaluates users' satisfaction with an SMS-based reminder system that has been in use for about six years by an Italian healthcare organization. The system was implemented to reduce dropouts, a goal that has been achieved: the dropout rate decreased from 8% to 4%. Over these years a number of reminded citizens, though not required to, sent an SMS message back with comments about the service, further requirements, etc., and we collected some thousands of them. Their analysis may provide useful feedback to the healthcare organization. We used conditional random fields as the information extraction method for classifying messages into appreciation, critique, inappropriateness, etc. The classification system achieved a very good overall performance (F1-measure of 94%), so it can be used from here on to monitor users' satisfaction over time. PMID:22874188

  12. Both Automation and Paper.

    ERIC Educational Resources Information Center

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  13. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the
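    The DTT activity above is a depletion rate (nmol min-1) and the precision figures are coefficients of variation; a minimal sketch of how both might be computed from repeated measurements (all numbers invented for illustration):

    ```python
    import statistics

    def dtt_rate(times_min, dtt_nmol):
        """Linear DTT depletion rate (nmol/min) from a time series,
        via the ordinary least-squares slope; returned as a positive rate."""
        n = len(times_min)
        mt = sum(times_min) / n
        md = sum(dtt_nmol) / n
        slope = sum((t - mt) * (d - md) for t, d in zip(times_min, dtt_nmol)) \
                / sum((t - mt) ** 2 for t in times_min)
        return -slope  # DTT is consumed, so the fitted slope is negative

    # Invented replicate rates (nmol/min) for one ambient sample.
    rates = [0.52, 0.50, 0.53, 0.51]
    cv = statistics.stdev(rates) / statistics.mean(rates) * 100
    print(f"CV = {cv:.1f}%")  # → CV = 2.5%
    ```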

  14. Automated Defect Classification (ADC)

    Energy Science and Technology Software Center (ESTSC)

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.
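    The feature-extraction-plus-classification pipeline described above can be sketched in miniature (a hypothetical illustration, not the ESTSC implementation; the features and categories are assumptions): compute simple statistics over the pixels inside a defect mask, then assign the nearest user-defined class centroid.

```python
# Hypothetical ADC-style pipeline: statistical features from a masked
# defect region, then a nearest-centroid classifier over user-defined
# defect categories.
import math

def region_features(image, mask):
    """[area, mean, std] of intensities inside the defect mask."""
    pix = [image[r][c] for r in range(len(image))
           for c in range(len(image[0])) if mask[r][c]]
    area = len(pix)
    mean = sum(pix) / area
    var = sum((p - mean) ** 2 for p in pix) / area
    return [float(area), mean, math.sqrt(var)]

def classify(features, centroids):
    """Label of the nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))
```

    In practice the ADC system's supervised classifiers are trained on expert-labeled examples; the nearest-centroid rule here is only the simplest stand-in for that step.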

  15. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  16. Automating Frame Analysis

    SciTech Connect

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.; Danielson, Gary R.; Mileson, Nicholas D.; Riensche, Roderick M.; McGrath, Liam

    2008-04-01

    Frame Analysis has come to play an increasingly strong role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time- and cost-effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  17. Automated data entry system: performance issues

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Ford, Glenn

    2001-12-01

    This paper discusses the performance of a system for extracting bibliographic fields from scanned pages in biomedical journals to populate MEDLINE, the flagship database of the National Library of Medicine (NLM), and heavily used worldwide. This system consists of automated processes to extract the article title, author names, affiliations and abstract, and manual workstations for the entry of other required fields such as pagination, grant support information, databank accession numbers and others needed for a completed bibliographic record in MEDLINE. Labor and time data are given for (1) a wholly manual keyboarding process to create the records, (2) an OCR-based system that requires all fields except the abstract to be manually input, and (3) a more automated system that relies on document image analysis and understanding techniques for the extraction of several fields. It is shown that this last, most automated, approach requires less than 25% of the labor effort in the first, manual, process.

  18. Salvia officinalis L.: composition and antioxidant-related activities of a crude extract and selected sub-fractions.

    PubMed

    Koşar, Müberra; Dorman, H J Damien; Başer, K Hüsnü Can; Hiltunen, Raimo

    2010-09-01

    The composition and antioxidant properties of a methanol: acetic acid (99:1, v/v) soluble crude extract isolated from S. officinalis L. leaves through maceration and selected fractions isolated thereof are presented in this study. The total phenol content was estimated as gallic acid equivalents, whilst qualitative-quantitative phenolic content was determined using high performance liquid chromatography with photodiode array detection. Antioxidant evaluation consisted of ferric reductive capacity and 1,1-diphenyl-2-picrylhydrazyl and hydroxyl free radical scavenging determinations. The crude extract contained hydroxybenzoic acids, hydroxycinnamic acids, flavonoids and diterpenoids, whilst caffeic acid, carnosic acid, luteolin, luteolin-7-O-glucoside and rosmarinic acid were identified from their chromatographic and spectral characteristics and quantified from their respective calibration curves. The crude extract and sub-fractions demonstrated varying degrees of efficacy in the antioxidant-related assays used, except the n-hexane fraction, which was unable to reduce iron(III) at reasonable concentrations. Although the positive controls, ascorbic acid, BHA and BHT, were more potent than the S. officinalis samples, two fractions were significantly (p < 0.05) more potent iron(III) reducing agents than pycnogenol, a proanthocyanidin-rich commercial preparation. PMID:20923007

  19. Safety in the Automated Office.

    ERIC Educational Resources Information Center

    Graves, Pat R.; Greathouse, Lillian R.

    1990-01-01

    Office automation has introduced new hazards to the workplace: electrical hazards related to computer wiring, musculoskeletal problems resulting from use of computer terminals and design of work stations, and environmental concerns related to ventilation, noise levels, and office machine chemicals. (SK)

  20. Tumorigenesis of diesel exhaust, gasoline exhaust, and related emission extracts on SENCAR mouse skin

    SciTech Connect

    Nesnow, S; Triplett, L L; Slaga, T J

    1980-01-01

    The tumorigenicity of diesel exhaust particulate emissions was examined using a sensitive mouse skin tumorigenesis model (SENCAR). The tumorigenic potency of particulate emissions from diesel, gasoline, and related emission sources was compared.

  1. Coptis japonica Makino extract suppresses angiogenesis through regulation of cell cycle-related proteins.

    PubMed

    Kim, Seo Ho; Kim, Eok-Cheon; Kim, Wan-Joong; Lee, Myung-Hun; Kim, Sun-Young; Kim, Tack-Joong

    2016-06-01

    Angiogenesis, neovascularization from pre-existing vessels, is a key step in tumor growth and metastasis, and anti-angiogenic agents that can interfere with these essential steps of cancer development are a promising strategy for human cancer treatment. In this study, we characterized the anti-angiogenic effects of Coptis japonica Makino extract (CJME) and its mechanism of action. CJME significantly inhibited the proliferation, migration, and invasion of vascular endothelial growth factor (VEGF)-stimulated HUVECs. Furthermore, CJME suppressed VEGF-induced tube formation in vitro and VEGF-induced microvessel sprouting ex vivo. According to our study, CJME blocked VEGF-induced cell cycle transition in G1. CJME decreased expression of cell cycle-regulated proteins, including Cyclin D, Cyclin E, Cdk2, and Cdk4 in response to VEGF. Taken together, the results of our study indicate that CJME suppresses VEGF-induced angiogenic events such as proliferation, migration, and tube formation via cell cycle arrest in G1. PMID:26924430

  2. Relative intelligibility of dynamically extracted transient versus steady-state components of speech

    NASA Astrophysics Data System (ADS)

    Boston, J. R.; Yoo, Sungyub; Li, C. C.; El-Jaroudi, Amro; Durrant, J. D.; Kovacyk, Kristie; Karn, Stacey

    2001-05-01

    Consonants are recognized to dominate higher frequencies of the speech spectrum and to carry more information than vowels, but both demonstrate quasi-steady state and transient components, such as vowel to consonant transitions. Fixed filters somewhat separate these effects, but probably not optimally, given diverse words, speakers, and situations. To enhance the transient characteristics of speech, this study used time-varying adaptive filters [Rao and Kumaresan, IEEE Trans. Speech Audio Process. 8, 240-254 (2000)], following high-pass filtering at 700 Hz (well-known to have minimal effect on intelligibility), to extract predominantly steady-state components of speech material (CVC words, NU-6). The transient component was the difference between the sum of the filter outputs and the original signal. Psychometric functions were determined in five subjects with and without background noise and fitted by ogives. The transient components averaged filtered speech energy, but PBmax was not significantly different (nonparametric ANOVA) from that of either the original or highpass filtered speech. The steady-state components yielded significantly lower PBmax (p = 0.003) despite their much greater energy, as expected. These results suggest a potential approach to dynamic enhancement of speech intelligibility. [Work supported by ONR.]
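    The decomposition used in this study — a filter that tracks the quasi-steady-state signal, with the transient defined as the residual — can be illustrated with a deliberately crude stand-in. Here a moving average plays the role of the (far more sophisticated) time-varying adaptive filter; this is a toy for intuition only, not the cited method:

```python
# Toy decomposition: moving-average "steady-state" estimate plus residual
# "transient". By construction the two components sum back to the original
# signal, mirroring how the study defines the transient as a difference.

def decompose(signal, window=5):
    half = window // 2
    steady = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        steady.append(sum(signal[lo:hi]) / (hi - lo))
    transient = [s - st for s, st in zip(signal, steady)]
    return steady, transient
```

    A constant signal yields a zero transient, while an isolated spike is split between a small smoothed bump and a large residual, which is the behavior that motivates transient enhancement.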

  3. Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task

    PubMed Central

    Wei, Chih-Hsuan; Peng, Yifan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J.; Li, Jiao; Wiegers, Thomas C.; Lu, Zhiyong

    2016-01-01

    Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machining learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the
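    The F-scores reported above are the standard precision/recall harmonic mean computed over predicted versus gold-standard annotation sets. A minimal sketch of that metric (the relation tuples shown are illustrative):

```python
# Standard evaluation metric for relation extraction: precision, recall and
# F1 of a set of predicted relations against a gold-standard set.

def f_score(predicted, gold):
    tp = len(predicted & gold)              # true positives
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

    For CID evaluation each element would be a (chemical, disease) identifier pair per document; for DNER, a mention span with its normalized concept identifier.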

  5. TEMPTING system: a hybrid method of rule and machine learning for temporal relation extraction in patient discharge summaries.

    PubMed

    Chang, Yung-Chun; Dai, Hong-Jie; Wu, Johnny Chi-Yang; Chen, Jian-Ming; Tsai, Richard Tzong-Han; Hsu, Wen-Lian

    2013-12-01

    Patient discharge summaries provide detailed medical information about individuals who have been hospitalized. To make a precise and legitimate assessment of the abundant data, a proper time layout of the sequence of relevant events should be compiled and used to derive a patient-specific timeline, which could further assist medical personnel in making clinical decisions. The process of identifying the chronological order of entities is called temporal relation extraction. In this paper, we propose a hybrid method to identify appropriate temporal links between a pair of entities. The method combines two approaches: one is rule-based and the other is based on the maximum entropy model. We develop an integration algorithm to fuse the results of the two approaches. All rules and the integration algorithm are formally stated so that one can easily reproduce the system and results. To optimize the system's configuration, we used the 2012 i2b2 challenge TLINK track dataset and applied threefold cross validation to the training set. Then, we evaluated its performance on the training and test datasets. The experiment results show that the proposed TEMPTING (TEMPoral relaTion extractING) system (ranked seventh) achieved an F-score of 0.563, which was at least 30% better than that of the baseline system, which randomly selects TLINK candidates from all pairs and assigns the TLINK types. The TEMPTING system using the hybrid method also outperformed the stage-based TEMPTING system. Its F-scores were 3.51% and 0.97% better than those of the stage-based system on the training set and test set, respectively. PMID:24060600
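    The general shape of a rule/statistical fusion step can be sketched as follows. This is a hypothetical simplification in the spirit of TEMPTING's integration algorithm (the actual algorithm is formally stated in the paper): a fired rule overrides the statistical model, which otherwise supplies its highest-probability TLINK type.

```python
# Hypothetical rule/maximum-entropy fusion for temporal link labeling:
# trust a high-precision rule when one fires; otherwise fall back to the
# statistical model's most probable TLINK type.

def integrate(rule_label, model_probs):
    """rule_label: TLINK type or None; model_probs: {TLINK type: probability}."""
    if rule_label is not None:
        return rule_label
    return max(model_probs, key=model_probs.get)
```

    Real integration schemes are usually more nuanced, e.g. weighing rule precision against model confidence per TLINK type rather than always preferring the rule.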

  6. An Automated System of Knickpoint Definition and Extraction from a Digital Elevation Model (DEM): Implications for Efficient Large-Scale Mapping and Statistical Analyses of Knickpoint Distributions in Fluvial Networks

    NASA Astrophysics Data System (ADS)

    Neely, A. B.; Bookhagen, B.; Burbank, D. W.

    2014-12-01

    Knickpoints, or convexities in a stream's longitudinal profile, often delineate boundaries in stream networks that separate reaches eroding at different rates resulting from sharp temporal or spatial changes in uplift rate, contributing drainage area, precipitation, or bedrock lithology. We explicitly defined the geometry of a knickpoint in a manner that can be identified using an algorithm which operates in accordance with the stream power incision model, using a chi-plot analysis approach. This method allows for comparison between the real stream profile extracted from a DEM, and a linear best-fit line profile in chi-elevation space, representing a steady state theoretical stream functioning in accordance to uniform temporal and spatial conditions listed above. Assessing where the stream of interest is "under-steepened" and "over-steepened" with respect to a theoretical linear profile reveals knickpoints as certain points of slope inflection, extractable by our algorithm. We tested our algorithm on a 1 m resolution LiDAR DEM of Santa Cruz Island (SCI), a tectonically active island 25 km south of Santa Barbara, CA with an estimated uplift rate between 0.5 and 1.2 mm/yr calculated from uplifted paleoshorelines. We have identified 1025 knickpoints using our algorithm and compared the position of these knickpoints to a similarly-sized dataset of knickpoints manually selected from distance-elevation longitudinal stream profiles for the same region. Our algorithm reduced mapping time by 99.3% and agreed with knickpoint positions from the manually selected knickpoint map for 85% of the 1025 knickpoints. Discrepancies can arise from inconsistencies in manual knickpoint selection that are not present in an automated computation. Additionally, the algorithm measures useful characteristics for each knickpoint allowing for quick statistical analyses. Histograms of knickpoint elevation and chi coordinate have a three-peaked distribution, possibly expressing three levels of uplifted
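    The chi-plot machinery referenced above can be sketched briefly (an illustrative simplification, not the authors' algorithm; the reference area A0 and concavity m/n are assumed parameters): chi integrates (A0/A)^(m/n) upstream along the channel, a steady-state profile is linear in chi-elevation space, and large residuals from the best-fit line flag over- or under-steepened reaches.

```python
# Sketch of chi-plot analysis: compute the chi coordinate by trapezoid-free
# upstream accumulation, fit a line in chi-elevation space, and return the
# residuals whose extremes mark candidate knickpoints.

def chi_coordinate(dx, areas, A0=1.0, mn=0.45):
    """areas: drainage area at evenly spaced points, outlet first."""
    chi, out = 0.0, [0.0]
    for a in areas[1:]:
        chi += (A0 / a) ** mn * dx
        out.append(chi)
    return out

def residuals(chi, elev):
    """Elevation residuals from the least-squares line in chi space."""
    n = len(chi)
    mc, me = sum(chi) / n, sum(elev) / n
    slope = (sum((c - mc) * (e - me) for c, e in zip(chi, elev))
             / sum((c - mc) ** 2 for c in chi))
    intercept = me - slope * mc
    return [e - (slope * c + intercept) for c, e in zip(chi, elev)]
```

    A profile that is perfectly linear in chi-elevation space has zero residuals everywhere; sign changes and local extremes in the residual series are the slope inflections the authors' algorithm extracts.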

  7. Maximizing the Value of Mobile Health Monitoring by Avoiding Redundant Patient Reports: Prediction of Depression-Related Symptoms and Adherence Problems in Automated Health Assessment Services

    PubMed Central

    Sussman, Jeremy B; Pfeiffer, Paul N; Silveira, Maria J; Singh, Satinder; Lavieri, Mariel S

    2013-01-01

    medication adherence problems and days in bed were somewhat less predictable but also showed small differences between assessments attempted weekly, biweekly, and monthly. Conclusions The technical feasibility of gathering high frequency health data via IVR may in some instances exceed the clinical benefit of doing so. Predictive analytics could make data gathering more efficient with negligible loss in effectiveness. In particular, weekly or biweekly depressive symptom reports may provide little marginal information regarding how the person is doing relative to collecting that information monthly. The next generation of automated health assessment services should use data mining techniques to avoid redundant assessments and should gather data at the frequency that maximizes the value of the information collected. PMID:23832021

  8. Cambridge Neuropsychological Test Automated Battery in assessment of cognitive parameters in patients with systemic lupus erythematosus in relation to autoantibody profile

    PubMed Central

    Sobow, Tomasz; Kowalski, Jan; Ząbek, Jakub; Woźniacka, Anna; Bogaczewicz, Jaroslaw

    2015-01-01

    Objectives To relate the cognitive parameters of systemic lupus erythematosus (SLE) patients in remission to their profile of autoantibodies. Material and methods The study included 32 patients with SLE in remission, with mild disease activity as indicated by SELENA-SLEDAI < 6. For neuropsychological assessment, the Cambridge Neuropsychological Test Automated Battery (CANTAB) was applied, using motor screening (MOT), big little circle (BLC), paired associated learning (PAL), stockings of Cambridge (SOC), and graded naming tests (GNT). Detection of autoantibodies against dsDNA, nucleosome (aNuc), Sm, and anticardiolipin (aCL: IgG and IgM) was performed with immunoassays. Results The SLE patients demonstrated standard scores below norms, matched according to age and gender, in the following tests: GNT (–0.87 ±0.85), SOC PSMM (–0.47 ±0.97), PAL (–1.88 ±3.58), and BLC (–0.31 ±1.90). GNT scores under –0.5 were found significantly more frequently in SLE patients, seen in roughly 66% of test subjects. Values for PAL and mean subsequent thinking time of stockings of Cambridge (SOC MSTT) were found to be lower than –0.5 in approximately half of the patients. Mean error of motor screening (MOT ME) was found to negatively correlate with mean latency of motor screening (MOT ML) (r = –0.55). PAL significantly correlated with SOC MSTT (r = 0.38) and with GNT (r = 0.36). Anti-dsDNA antibody level correlated negatively with MOT ME (r = –0.46). Anti-Nuc antibodies correlated with MOT ML (r = 0.41) but negatively correlated with MOT ME (r = –0.58). The levels of anti-Sm, anti-CL IgM and IgG did not correlate significantly with the outcomes of CANTAB. The age of the patients correlated negatively with MOT ME (r = –0.36), positively with BLC (r = 0.53) and negatively with SOC MSTT (r = –0.43). The level of anti-Nuc antibodies correlated with anti-dsDNA level (r = 0.62) and of anti-CL IgM with anti-Sm (r = 0.39) and anti-CL IgG (r = 0.87). Conclusions CANTAB

  9. Magnolia Extract, Magnolol, and Metabolites: Activation of Cannabinoid CB2 Receptors and Blockade of the Related GPR55

    PubMed Central

    2012-01-01

    The bark of Magnolia officinalis is used in Asian traditional medicine for the treatment of anxiety, sleeping disorders, and allergic diseases. We found that the extract and its main bioactive constituents, magnolol and honokiol, can activate cannabinoid (CB) receptors. In cAMP accumulation studies, magnolol behaved as a partial agonist (EC50 = 3.28 μM) with selectivity for the CB2 subtype, while honokiol was less potent showing full agonistic activity at CB1 and antagonistic properties at CB2. We subsequently synthesized the major metabolites of magnolol and found that tetrahydromagnolol (7) was 19-fold more potent than magnolol (EC50 CB2 = 0.170 μM) exhibiting high selectivity versus CB1. Additionally, 7 behaved as an antagonist at GPR55, a CB-related orphan receptor (KB = 13.3 μM, β-arrestin translocation assay). Magnolol and its metabolites may contribute to the biological activities of Magnolia extract via the observed mechanisms of action. Furthermore, the biphenylic compound magnolol provides a simple novel lead structure for the development of agonists for CB receptors and antagonists for the related GPR55. PMID:24900561

  10. Performance-friendly rule extraction in large water data-sets with AOC posets and relational concept analysis

    NASA Astrophysics Data System (ADS)

    Dolques, Xavier; Le Ber, Florence; Huchard, Marianne; Grac, Corinne

    2016-02-01

    In this paper, we consider data analysis methods for knowledge extraction from large water data-sets. More specifically, we try to connect physico-chemical parameters and the characteristics of taxons living in sample sites. Among these data analysis methods, we consider formal concept analysis (FCA), which is a recognized tool for classification and rule discovery on object-attribute data. Relational concept analysis (RCA) relies on FCA and deals with sets of object-attribute data provided with relations. RCA produces more informative results but at the expense of an increase in complexity. Besides, in numerous applications of FCA, the partially ordered set of concepts introducing attributes or objects (AOC poset, for Attribute-Object-Concept poset) is used rather than the concept lattice in order to reduce combinatorial problems. AOC posets are much smaller and easier to compute than concept lattices and still contain the information needed to rebuild the initial data. This paper introduces a variant of the RCA process based on AOC posets rather than concept lattices. This approach is compared with RCA based on iceberg lattices. Experiments are performed with various scaling operators, and a specific operator is introduced to deal with noisy data. We show that using AOC poset on water data-sets provides a reasonable concept number and allows us to extract meaningful implication rules (association rules whose confidence is 1), whose semantics depends on the chosen scaling operator.
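    The FCA building blocks behind AOC posets can be shown in a few lines. This is a generic sketch of the derivation operators over a binary object-attribute context (the context shown is a made-up water-sample example, not the paper's data):

```python
# Minimal FCA sketch: derivation operators over a binary context
# ({object: set_of_attributes}) and the attribute concept (extent, intent)
# that AOC posets retain for each attribute.

def extent(context, attrs):
    """Objects possessing all the given attributes."""
    return {o for o, a in context.items() if attrs <= a}

def intent(context, objs):
    """Attributes shared by all the given objects."""
    shared = set().union(*context.values())
    for o in objs:
        shared &= context[o]
    return shared

def attribute_concept(context, attr):
    """The concept introducing `attr`: its extent and that extent's intent."""
    e = extent(context, {attr})
    return e, intent(context, e)
</imports_placeholder>```

    The AOC poset keeps only these attribute concepts and their object-concept duals, which is why it stays far smaller than the full concept lattice while preserving enough information to rebuild the context.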

  11. Relative brain signature: a population-based feature extraction procedure to identify functional biomarkers in the brain of alcoholics

    PubMed Central

    Karamzadeh, Nader; Ardeshirpour, Yasaman; Kellman, Matthew; Chowdhry, Fatima; Anderson, Afrouz; Chorlian, David; Wegman, Edward; Gandjbakhche, Amir

    2015-01-01

    Background A novel feature extraction technique, Relative-Brain-Signature (RBS), which characterizes subjects' relationship to populations with distinctive neuronal activity, is presented. The proposed method transforms a set of Electroencephalography's (EEG) time series in high dimensional space to a space of fewer dimensions by projecting time series onto orthogonal subspaces. Methods We apply our technique to an EEG data set of 77 abstinent alcoholics and 43 control subjects. To characterize subjects' relationship to the alcoholic and control populations, one RBS vector with respect to the alcoholic and one with respect to the control population is constructed. We used the extracted RBS vectors to identify functional biomarkers over the brain of alcoholics. To achieve this goal, the classification algorithm was used to categorize subjects into alcoholics and controls, which resulted in 78% accuracy. Results and Conclusions Using the results of the classification, regions with distinctive functionality in alcoholic subjects are detected. These affected regions, with respect to their spatial extent, are frontal, anterior frontal, centro-parietal, parieto-occiptal, and occipital lobes. The distribution of these regions over the scalp indicates that the impact of the alcohol in the cerebral cortex of the alcoholics is spatially diffuse. Our finding suggests that these regions engage more of the right hemisphere relative to the left hemisphere of the alcoholics' brain. PMID:26221569
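    The core operation behind an RBS-style feature, projecting a subject's signal onto an orthogonal subspace derived from a reference population, can be sketched as follows (a hypothetical simplification using Gram-Schmidt orthonormalization; the authors' construction over full EEG time series is more involved):

```python
# Sketch of population-subspace projection: build an orthonormal basis from
# reference-population signals (Gram-Schmidt), then represent a new subject
# by its projection coefficients onto that lower-dimensional subspace.
import math

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:                     # skip linearly dependent inputs
            basis.append([wi / norm for wi in w])
    return basis

def signature(subject, basis):
    """Projection coefficients of one subject's signal onto the basis."""
    return [sum(s * b for s, b in zip(subject, vec)) for vec in basis]
```

    One such coefficient vector per reference population (alcoholic, control) would give each subject two compact "signatures" suitable for a downstream classifier, mirroring the study's design.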

  12. AUTOMATED INADVERTENT INTRUDER APPLICATION

    SciTech Connect

    Koffman, L; Patricia Lee, P; Jim Cook, J; Elmer Wilhite, E

    2007-05-29

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. 
The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and
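    The "straightforward algebraic calculation that utilizes dose conversion factors" has, in sketch form, this shape (the radionuclides, pathways, and factors below are hypothetical placeholders, not SRNL's controlled input parameters):

```python
# Illustrative form of an intruder-scenario dose calculation: sum over
# radionuclides and exposure pathways of concentration times the
# pathway-specific dose conversion factor.

def scenario_dose(concentrations, dcf):
    """concentrations: {nuclide: activity concentration};
    dcf: {nuclide: {pathway: dose per unit concentration}}."""
    return sum(conc * factor
               for nuc, conc in concentrations.items()
               for factor in dcf[nuc].values())
```

    Automating this arithmetic from a controlled parameter source is precisely what removes the cell-by-cell copying errors the spreadsheet approach suffered from.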

  13. GDBDICT: A program to extract GDB schema information into a relational database

    SciTech Connect

    Nadkarni, P.M. )

    1994-01-01

    GDBDICT synthesizes the information present in the two schema files and produces three tab-delimited files that can be imported directly into a relational database without further processing. These files are: (i) DIAG, a list of the functional categories into which an individual GDB table may belong; (ii) TABLES, a description of the GDB tables in each category; and (iii) FIELDS, a detailed description of the fields for each GDB table. After import, FIELDS will be related many-to-one to TABLES, which in turn will be related many-to-one to DIAG. This is because GDBDICT creates, for the FIELDS and TABLES files, the appropriate foreign key (i.e., the field linking that table to the table higher in the hierarchy). GDBDICT will report all discrepancies between GDB-DICTIONARY and TABLE-DICTIONARY as it processes them. Running GDBDICT on the current (February 1993) schema files showed 18 discrepancies, none of which, on closer inspection, turned out to be of significant consequence. 2 refs., 1 fig.
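    The export pattern described above, tab-delimited rows in which each child record carries its parent's name as a foreign key, can be sketched as follows (column names are illustrative, not GDBDICT's actual layout):

```python
# Sketch of a GDBDICT-style tab-delimited export: each FIELDS row carries
# its parent table's name as a foreign key, so a relational database can
# relate FIELDS many-to-one to TABLES on import without further processing.
import csv, io

def export_fields(tables):
    """tables: {table_name: [(field_name, field_type), ...]} -> TSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerow(["table_name", "field_name", "field_type"])  # header
    for table, fields in tables.items():
        for name, ftype in fields:
            writer.writerow([table, name, ftype])  # table_name = foreign key
    return buf.getvalue()
```

    An analogous TABLES file carrying its category name would complete the three-level DIAG/TABLES/FIELDS hierarchy.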

  14. Automated Detection of Health Websites' HONcode Conformity: Can N-gram Tokenization Replace Stemming?

    PubMed

    Boyer, Célia; Dolamic, Ljiljana; Grabar, Natalia

    2015-01-01

    Authors evaluated supervised automatic classification algorithms for determination of health related web-page compliance with individual HONcode criteria of conduct using varying length character n-gram vectors to represent healthcare web page documents. The training/testing collection comprised web page fragments extracted by HONcode experts during the manual certification process. The authors compared automated classification performance of n-gram tokenization to the automated classification performance of document words and Porter-stemmed document words using a Naive Bayes classifier and DF (document frequency) dimensionality reduction metrics. The study attempted to determine whether the automated, language-independent approach might safely replace word-based classification. Using 5-grams as document features, authors also compared the baseline DF reduction function to Chi-square and Z-score dimensionality reductions. Overall study results indicate that n-gram tokenization provided a potentially viable alternative to document word stemming. PMID:26262363
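    The language-independent representation evaluated above, character n-gram features filtered by document frequency (DF), is simple to state in code (the 5-gram length and DF threshold here are illustrative):

```python
# Character n-gram tokenization with a document-frequency (DF) cutoff, the
# baseline dimensionality reduction compared in the study.

def char_ngrams(text, n=5):
    """Set of character n-grams occurring in the text."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def df_select(documents, n=5, min_df=2):
    """Keep n-grams appearing in at least min_df documents."""
    df = {}
    for doc in documents:
        for g in char_ngrams(doc, n):
            df[g] = df.get(g, 0) + 1
    return {g for g, count in df.items() if count >= min_df}
```

    Because n-grams need no tokenizer or stemmer, the same pipeline works unchanged across the languages a multilingual certification body like HON must handle, which is the appeal of the approach the authors tested.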

  15. SU-D-BRD-03: Improving Plan Quality with Automation of Treatment Plan Checks

    SciTech Connect

    Covington, E; Younge, K; Chen, X; Lee, C; Matuszak, M; Kessler, M; Acosta, E; Orow, A; Filpansick, S; Moran, J; Keranen, W

    2015-06-15

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
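    The configurable check framework described above, reference values pulled from two systems and flagged on disagreement, reduces to a pattern like this (field names are hypothetical, and the real PCT works through the Eclipse Scripting API rather than plain dictionaries):

```python
# Hypothetical PCT-style check: each check names a field, reads it from the
# treatment planning system (TPS) and the treatment management system (TMS),
# and flags any disagreement for the physicist's checklist report.

def run_checks(tps, tms, fields):
    """Return {field: 'OK' | 'FLAG'} comparing TPS and TMS records."""
    return {f: "OK" if tps.get(f) == tms.get(f) else "FLAG" for f in fields}
```

    The prescribed-dose check mentioned in the abstract is one instance of this pattern; tolerance-aware numeric comparisons and manual-check documentation would layer on top.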

  16. 49 CFR 238.445 - Automated monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Automated monitoring. 238.445 Section 238.445 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor...

  17. 49 CFR 238.445 - Automated monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Automated monitoring. 238.445 Section 238.445 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor...

  18. 49 CFR 238.445 - Automated monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Automated monitoring. 238.445 Section 238.445 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor...

  19. 49 CFR 238.445 - Automated monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Automated monitoring. 238.445 Section 238.445 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor...

  20. 49 CFR 238.445 - Automated monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Automated monitoring. 238.445 Section 238.445 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor...

  1. Effects of the polysaccharides extracted from Ganoderma lucidum on chemotherapy-related fatigue in mice.

    PubMed

    Ouyang, Ming-Zi; Lin, Li-Zhu; Lv, Wen-Jiao; Zuo, Qian; Lv, Zhuo; Guan, Jie-Shan; Wang, Shu-Tang; Sun, Ling-Ling; Chen, Han-Rui; Xiao, Zhi-Wei

    2016-10-01

    The effects of Ganoderma lucidum polysaccharides (GLPs) on weight-loaded swimming capability, tumor growth, survival time and biochemical markers were tested in a chemotherapy-related fatigue mouse model in the present study. The results showed that the middle-dose GLPs (GLP-M) and the high-dose GLPs (GLP-H) could increase the exhausting swimming time, which was observed to decrease in the cisplatin control group (PCG) and the tumor control group (TCG). The GLP-M and GLP-H groups had reduced serum levels of tumor necrosis factor-α and interleukin-6, which were up-regulated by cisplatin. Cisplatin and the presence of tumor significantly enhanced the malondialdehyde (MDA) content and inhibited the activity of superoxide dismutase (SOD) in the muscle. Administration of GLPs at a high dose decreased the levels of MDA and up-regulated the SOD activity. The high-dose GLPs+cisplatin group presented a decreasing trend in tumor volume and a lower tumor weight compared with the PCG. Moreover, the mice in the GLP-M and GLP-H groups had longer survival times compared with the mice in the TCG and PCG. The levels of creatinine and serum blood urea nitrogen, which were up-regulated by cisplatin, were significantly reduced by GLP-M and GLP-H. Therefore, these results suggest that GLPs might improve chemotherapy-related fatigue via regulation of inflammatory responses, oxidative stress and reduction of nephrotoxicity. PMID:27208798

  2. Systematically Extracting Metal- and Solvent-Related Occupational Information from Free-Text Responses to Lifetime Occupational History Questionnaires

    PubMed Central

    Friesen, Melissa C.; Locke, Sarah J.; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A.; Purdue, Mark; Colt, Joanne S.

    2014-01-01

    Objectives: Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants’ jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Methods: Our study population comprised 2408 subjects, reporting 11991 jobs, from a case–control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert’s independently assigned probability ratings to evaluate whether
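
    The core of the extraction step, matching a priori keyword lists against the free-text job fields to produce standardized exposure-scenario variables, can be sketched as follows. The study used a SAS macro; this Python re-implementation and its keyword lists are invented examples of the approach, not the study's actual lists.

```python
# Illustrative keyword-based flagging of exposure scenarios in free-text
# occupational history responses (Python stand-in for the SAS macro).
import re

# A priori lists of key words and phrases per exposure scenario (examples only).
SCENARIO_KEYWORDS = {
    "tce_degreasing": ["trichloroethylene", "tce", "vapor degreaser", "degreasing"],
    "lead_solder": ["solder", "soldering", "lead paint"],
}

def flag_scenarios(job: dict) -> dict:
    # Concatenate the free-text fields (job title, tasks, tools, chemicals)
    # and set one binary indicator variable per exposure scenario.
    text = " ".join(job.get(f, "") for f in ("title", "task", "tools", "chemicals")).lower()
    return {name: int(any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in kws))
            for name, kws in SCENARIO_KEYWORDS.items()}

job = {"title": "machinist", "task": "cleaned parts in a vapor degreaser",
       "tools": "lathe", "chemicals": "TCE"}
flags = flag_scenarios(job)
print(flags)
```

    The resulting indicator variables would then be appended to each work-history record, as described above.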

  3. Automation U.S.A.: Overcoming Barriers to Automation.

    ERIC Educational Resources Information Center

    Brody, Herb

    1985-01-01

    Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…

  4. Inhibition of proliferation by agricultural plant extracts in seven human adult T-cell leukaemia (ATL)-related cell lines.

    PubMed

    Kai, Hisahiro; Akamatsu, Ena; Torii, Eri; Kodama, Hiroko; Yukizaki, Chizuko; Sakakibara, Yoichi; Suiko, Masahito; Morishita, Kazuhiro; Kataoka, Hiroaki; Matsuno, Koji

    2011-07-01

    Adult T-cell leukaemia (ATL) is caused by human T-cell leukaemia virus type I (HTLV-I) infection and is resistant to conventional chemotherapy. We evaluated the inhibitory effects of agricultural plants on the proliferation of seven ATL-related human leukaemia cells, using three ATL cell lines (ED, Su9T01 and S1T), two human T-cell lines transformed by HTLV-I infection (HUT-102 and MT-2) and two HTLV-I-negative human T-cell acute lymphoblastic leukaemia cell lines (Jurkat and MOLT-4). A total of 52 samples of 80% ethanol extracts obtained from 30 types of agricultural plants were examined. On the basis of IC50 values, we selected samples with greater activity than genistein, which was used as a positive control. The highest inhibitory effect was observed with extracts from leaves of Vaccinium virgatum Aiton (blueberry) on four cell lines (ED, Su9T01, HUT-102 and Jurkat); seeds of Momordica charantia L. (bitter gourd) exhibited the second highest activity. The bitter gourd seeds suppressed the proliferation of three cell lines (Su9T01, HUT-102 and Jurkat). The extracts from edible parts of Ipomoea batatas Lam. (sweet potato), edible parts of Colocasia esculenta (L.) Schott (taro), skin of taro and seeds of Prunus mume Sieb. et Zucc. (mume) showed markedly greater inhibitory effects on Su9T01 than genistein. These findings suggest that ATL-preventative bioactive compounds may exist in these agricultural plants, which are considered to be functional foods. PMID:21293936

  5. Treatment of age-related memory complaints with Ginkgo biloba extract: a randomized double blind placebo-controlled study.

    PubMed

    Brautigam, M R; Blommaert, F A; Verleye, G; Castermans, J; Jansen Steur, E N; Kleijnen, J

    1998-12-01

    A growing number of people are subject to age-related cognitive impairment due to the proportional increase of the ageing population. Therefore, there is a growing interest in cognition-enhancing substances. The efficacy of an alcohol/water extract of Ginkgo biloba in elderly individuals with memory and/or concentration complaints was tested in a randomized, double-blind, placebo-controlled study using both subjective and objective parameters. After a wash-out period of 4 weeks, 241 non-institutionalised patients in the age range 55-86 years were randomly allocated to receive either Ginkgo biloba alcohol/water extract in a high dose (HD), a low dose (LD) or a placebo (PL) for 24 weeks. Patients were assessed using a psychometric test battery in the following order: Expended Mental Control Test (EMCT), measuring attention and concentration; Benton Test of Visual Retention-Revised, measuring short-term visual memory; Rey Test part 1, measuring short-term memory and learning curve; Beck Depression Inventory (BDI), measuring the presence and severity of depression in order to exclude depressive patients; and Rey Test part 2, measuring long-term memory (recognition). Furthermore, subjective perception of memory and concentration was measured. 197 patients completed the study (mean MMSE score: 26.29). In the subjective test, the EMCT, the Rey 1 and the Rey 2, no significant differences in improvement over time between the groups were observed. In the Benton test, increases of 18%, 26% and 11% (expressed as percentages of baseline scores) were observed in the HD, LD and PL groups, respectively (MANOVA; p = 0.0076). No substantial correlation was observed between subjective perception of the severity of memory complaints and the objective test results. No differences in the number of (gastrointestinal) side effects were observed between the placebo and verum groups. These results indicate that the use of Ginkgo extracts in elderly individuals with cognitive impairment might be promising

  6. Rapid extraction of relative topography from Viking orbiter images. 2: Application to irregular topographic features

    NASA Technical Reports Server (NTRS)

    Davis, P. A.; Soderblom, L. A.

    1984-01-01

    The ratio and flat field photoclinometric methods for determining crater form topography are described. Both methods compensate for the effects of atmospheric scattering by subtracting a haze value from all brightness values. Algorithms were altered to derive relative topographic data for irregular features such as ejecta blankets, lava flows, graben and ridge scarps, dune forms, and stratified materials. After the elevations along the profiles are obtained by integration of the photometric function, a matrix transformation is applied to the image coordinates of each pixel within each profile, utilizing each pixel's integral height, to produce a projection of each profile line onto the surface. Pixel brightness values are then resampled along the projected track of each profile to determine a more correct height value for each pixel. Precision of the methods is discussed.
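
    The basic pipeline of haze subtraction, brightness-to-slope conversion, and integration along a profile can be sketched in a greatly simplified form. The linearized photometric function and all constants below are illustrative assumptions, not the paper's actual calibration.

```python
# Greatly simplified photoclinometry along one image profile: subtract a
# constant haze value from each brightness sample, convert the residual to a
# slope with a linearized photometric function, and integrate slopes to get
# relative heights. Constants and the linear slope model are illustrative.
def profile_heights(brightness, haze=0.05, flat_field=0.50, gain=2.0, dx=1.0):
    heights, h = [], 0.0
    for b in brightness:
        # Brighter than the flat-field value => surface tilted toward the sun.
        slope = gain * ((b - haze) - flat_field)
        h += slope * dx  # integrate slope along the profile
        heights.append(round(h, 3))
    return heights

heights = profile_heights([0.55, 0.60, 0.55, 0.50, 0.45])
print(heights)
```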

  7. Periodontal Disease, Dental Implants, Extractions and Medications Related to Osteonecrosis of the Jaws.

    PubMed

    Shah, Neha P; Katsarelis, Helen; Pazianas, Michael; Dhariwal, Daljit K

    2015-11-01

    Patients taking bisphosphonates and other anti-resorptive drugs are likely to attend general dental practice. The term 'bisphosphonate' is often immediately associated with osteonecrosis of the jaws (ONJ). Risk assessment and subsequent management of these patients should be carried out taking into account all the risk factors associated with ONJ. The introduction of newer drugs, also shown to be associated with ONJ, demands increased awareness of general dental practitioners about these medications. CPD/CLINICAL RELEVANCE: This paper provides an update on medication-related ONJ and considers the effects of anti-resorptive drugs on the management of patients needing exodontia, treatment for periodontal disease and dental implant placement. PMID:26749795

  8. Automated External Defibrillator

    MedlinePlus

    What Is an Automated External Defibrillator? An automated external defibrillator (AED) is a portable device that ...

  9. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members, who are employed in the cooperation group, need to share the knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues to make a right ontology. As the cost and complexity of managing knowledge increase according to the scale of the knowledge, reducing the size of ontology is one of the critical issues. In this paper, we propose a method of extracting ontology module to increase the utility of knowledge. For the given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relation of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of shared knowledge. PMID:25250374
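
    The extraction of a self-contained module for a given signature can be sketched as a closure computation over concept relations. The graph representation and concept names below are illustrative assumptions; the paper's method additionally weighs syntactic structure and semantic relation types.

```python
# Hedged sketch of signature-based ontology module extraction: starting from
# the concepts named in the service signature, follow concept relations until
# the module is closed (self-contained).
from collections import deque

def extract_module(ontology: dict, signature: set) -> set:
    # ontology maps each concept to the concepts it is related to
    # (e.g. via subclass-of relations or property restrictions).
    module, frontier = set(signature), deque(signature)
    while frontier:
        concept = frontier.popleft()
        for related in ontology.get(concept, ()):
            if related not in module:
                module.add(related)
                frontier.append(related)
    return module

# Toy ontology fragment (invented for illustration).
onto = {
    "CloudService": {"ComputingObject"},
    "ComputingObject": {"Resource"},
    "Sensor": {"Device"},
}
module = extract_module(onto, {"CloudService"})
print(sorted(module))
```

    Only the three concepts reachable from the signature are retained, so cooperating objects share a fraction of the original ontology.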

  10. Genotoxic and antigenotoxic activity of acerola (Malpighia glabra L.) extract in relation to the geographic origin.

    PubMed

    Nunes, Roberta Da Silva; Kahl, Vivian Francília Silva; Sarmento, Merielen Da Silva; Richter, Marc François; Abin-Carriquiry, Juan Andres; Martinez, Marcela María; Ferraz, Alexandre De Barros Falcão; Da Silva, Juliana

    2013-10-01

    Malpighia glabra L., popularly known as acerola, is considered a functional fruit and is therefore taken to prevent disease or as an adjuvant to treatment strategies, since the fruit is a rich natural source of vitamin C, carotenoids, and flavonoids. Its chemical composition is affected by genetic uniformity of the orchards and environmental factors. Considering the extensive growth of the culture of acerola in Brazil as well as its widespread use, this study evaluates the genotoxic and antigenotoxic activity of acerola in relation to geographic origin using the comet assay in mouse blood cells in vitro. No acerola samples showed potential to induce DNA damage, regardless of origin. Regarding antigenotoxic activity, only the acerola sample from São Paulo reduced DNA damage induced by hydrogen peroxide (by about 56%). The sample from Ceará showed good antioxidant activity by the 2,2-diphenyl-1-picrylhydrazyl assay, in agreement with its higher rutin, quercetin, and vitamin C levels. Additional studies with other treatment regimens are necessary to better understand the impact of the complex mixture of acerola on genomic stability. PMID:23180597

  11. Automated labeling in document images

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2000-12-01

    The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.
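
    The rule-based labeling idea can be sketched with a handful of toy rules over OCR-derived zone features. The features, thresholds, and rules below are hypothetical stand-ins; the actual AL module applies 120 rules derived from journal page layouts.

```python
# Minimal, hypothetical version of rule-based zone labeling: each rule
# inspects OCR-derived features of a zone (position, font size, keywords)
# and proposes a label (title, author, affiliation, abstract).
def label_zone(zone: dict) -> str:
    text = zone["text"].lower()
    if zone["font_size"] >= 14 and zone["y"] < 200:
        return "title"          # large type near the top of the page
    if "university" in text or "department" in text:
        return "affiliation"
    if text.startswith("abstract"):
        return "abstract"
    if zone["y"] < 400 and "," in text:
        return "author"         # comma-separated names in the upper region
    return "other"

zones = [
    {"y": 100, "font_size": 16, "text": "Automated Labeling in Document Images"},
    {"y": 250, "font_size": 10, "text": "Kim J., Le D., Thoma G."},
    {"y": 300, "font_size": 9, "text": "Department of Computer Science"},
    {"y": 500, "font_size": 9, "text": "Abstract. The National Library of Medicine ..."},
]
labels = [label_zone(z) for z in zones]
print(labels)
```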

  12. Follicular Unit Extraction Hair Transplant

    PubMed Central

    Dua, Aman; Dua, Kapil

    2010-01-01

    Hair transplantation has come a long way from the days of Punch Hair Transplant by Dr. Orentreich in 1950s to Follicular Unit Hair Transplant (FUT) of 1990s and the very recent Follicular Unit Extraction (FUE) technique. With the advent of FUE, the dream of ‘no visible scarring’ in the donor area is now looking like a possibility. In FUE, the grafts are extracted as individual follicular units in a two-step or three-step technique whereas the method of implantation remains the same as in the traditional FUT. The addition of latest automated FUE technique seeks to overcome some of the limitations in this relatively new technique and it is now possible to achieve more than a thousand grafts in one day in trained hands. This article reviews the methodology, limitations and advantages of FUE hair transplant. PMID:21031064

  13. Sensors and Automated Analyzers for Radionuclides

    SciTech Connect

    Grate, Jay W.; Egorov, Oleg B.

    2003-03-27

    The production of nuclear weapons materials has generated large quantities of nuclear waste and significant environmental contamination. We have developed new, rapid, automated methods for determination of radionuclides using sequential injection methodologies to automate extraction chromatographic separations, with on-line flow-through scintillation counting for real time detection. This work has progressed in two main areas: radionuclide sensors for water monitoring and automated radiochemical analyzers for monitoring nuclear waste processing operations. Radionuclide sensors have been developed that collect and concentrate radionuclides in preconcentrating minicolumns with dual functionality: chemical selectivity for radionuclide capture and scintillation for signal output. These sensors can detect pertechnetate to below regulatory levels and have been engineered into a prototype for field testing. A fully automated process monitor has been developed for total technetium in nuclear waste streams. This instrument performs sample acidification, speciation adjustment, separation and detection in fifteen minutes or less.

  14. Image segmentation for automated dental identification

    NASA Astrophysics Data System (ADS)

    Haj Said, Eyad; Nassar, Diaa Eldin M.; Ammar, Hany H.

    2006-02-01

    Dental features are one of few biometric identifiers that qualify for postmortem identification; therefore, the creation of an Automated Dental Identification System (ADIS) with goals and objectives similar to those of the Automated Fingerprint Identification System (AFIS) has received increased attention. As a part of ADIS, teeth segmentation from dental radiograph films is an essential step in the identification process. In this paper, we introduce a fully automated approach for teeth segmentation with the goal of extracting at least one tooth from the dental radiograph film. We evaluate our approach on a theoretical and empirical basis, and we compare its performance with that of other approaches introduced in the literature. The results show that our approach exhibits the lowest failure rate and the highest optimality among all fully automated approaches introduced in the literature.

  15. The Automated Planetary Space Station

    NASA Technical Reports Server (NTRS)

    Ivie, C. V.; Friedman, L. D.

    1977-01-01

    Results are presented for a study on mission definition and design to determine broad technology directions and needs for advanced planetary spacecraft and future planetary missions. The discussion covers mission selection, system design, and technology assessment and review for a multicomponent spacecraft exploration facility provided with nuclear power propulsion. As an example, the Automated Planetary Space Station at Jupiter is examined as a generic concept which has the capability of conducting in-depth investigations of different aspects of the entire Jovian system. Mission planning is discussed relative to low-thrust trajectory control, automatic target identification and landing, roving vehicle operation, and automated sample analysis.

  16. Summary of astronaut inputs concerning automation

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1990-01-01

    An assessment of the potential for increased productivity on Space Station Freedom through advanced automation and robotics was recently completed. Sponsored by the Office of Space Station, the study involved reviews of on-orbit operations experience documentation, interviews with 23 current and former astronauts/payload specialists as well as other NASA and contractor personnel, and a survey of 32 astronauts and payload specialists. Assessed areas of related on-orbit experience included Skylab, the space shuttle, Spacelab, and the Soviet space program, as well as the U.S. nuclear submarine program and Antarctic research station analogs. The survey questionnaire asked the respondents to rate the desirability of advanced automation, EVA robotics, and IVA robotics. They were also asked to rate the safety impacts of automated fault diagnosis, isolation, and recovery (FDIR); automated exception reporting and alarm filtering; and an EVA retriever. The respondents were also asked to evaluate 26 specific applications of advanced automation and robotics related to perceived impact on productivity.

  17. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  18. Apparatus for hydrocarbon extraction

    DOEpatents

    Bohnert, George W.; Verhulst, Galen G.

    2013-03-19

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  19. Workshop on Office Automation and Telecommunication: Applying the Technology.

    ERIC Educational Resources Information Center

    Mitchell, Bill

    This document contains 12 outlines that forecast the office of the future. The outlines cover the following topics: (1) office automation definition and objectives; (2) functional categories of office automation software packages for mini and mainframe computers; (3) office automation-related software for microcomputers; (4) office automation…

  20. 22 CFR 120.30 - The Automated Export System (AES).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false The Automated Export System (AES). 120.30... DEFINITIONS § 120.30 The Automated Export System (AES). The Automated Export System (AES) is the Department of... data and defense services shall be reported directly to the Directorate of Defense Trade Controls...

  1. 22 CFR 120.30 - The Automated Export System (AES).

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false The Automated Export System (AES). 120.30... DEFINITIONS § 120.30 The Automated Export System (AES). The Automated Export System (AES) is the Department of... data and defense services shall be reported directly to the Directorate of Defense Trade Controls...

  2. 22 CFR 120.30 - The Automated Export System (AES).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false The Automated Export System (AES). 120.30... DEFINITIONS § 120.30 The Automated Export System (AES). The Automated Export System (AES) is the Department of... data and defense services shall be reported directly to the Directorate of Defense Trade Controls...

  3. 22 CFR 120.30 - The Automated Export System (AES).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false The Automated Export System (AES). 120.30... DEFINITIONS § 120.30 The Automated Export System (AES). The Automated Export System (AES) is the Department of... data and defense services shall be reported directly to the Directorate of Defense Trade Controls...

  4. 22 CFR 120.30 - The Automated Export System (AES).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false The Automated Export System (AES). 120.30... DEFINITIONS § 120.30 The Automated Export System (AES). The Automated Export System (AES) is the Department of... data and defense services shall be reported directly to the Directorate of Defense Trade Controls...

  5. Automated detection of β-amyloid-related cortical and subcortical signal changes in a transgenic model of Alzheimer’s disease using high-field MRI

    PubMed Central

    Teipel, Stefan J.; Kaza, Evangelia; Hadlich, Stefan; Bauer, Alexandra; Brüning, Thomas; Plath, Anne-Sophie; Krohn, Markus; Scheffler, Katja; Walker, Lary C.; Lotze, Martin; Pahnke, Jens

    2010-01-01

    In vivo imaging of β-amyloid load as a biomarker of Alzheimer’s disease (AD) would be of considerable clinical relevance for the early diagnosis and monitoring of treatment effects. Here, we investigated automated quantification of in vivo T2 relaxation time as a surrogate measure of plaque load in the brains of ten APP/PS1 transgenic mice (age 20 weeks) using in vivo MRI acquisitions on a 7T Bruker ClinScan magnet. APP/PS1 mice present with rapid-onset cerebral β-amyloidosis, and were compared with eight age-matched, wild-type control mice (C57Bl/6J) that do not develop Aβ-deposition in brain. Data were analyzed with a novel automated voxel-based analysis that allowed mapping the entire brain for significant signal changes. In APP/PS1 mice, we found a significant decrease in T2 relaxation times in the deeper neocortical layers, caudate-putamen, thalamus, hippocampus and cerebellum compared to wild-type controls. These changes were in line with the histological distribution of cerebral Aβ plaques and activated microglia. Grey matter density did not differ between wild-type mice and APP/PS1 mice, consistent with a lack of neuronal loss in histological investigations. High-field MRI with automated mapping of T2 time changes may be a useful tool for the detection of plaque load in living transgenic animals, which may become relevant for the evaluation of amyloid lowering intervention effects in future studies. PMID:20966552

  6. Automated Characterization Of Vibrations Of A Structure

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Yam, Yeung; Mettler, Edward; Hadaegh, Fred Y.; Milman, Mark H.; Scheid, Robert E.

    1992-01-01

    Automated method of characterizing dynamical properties of large flexible structure yields estimates of modal parameters used by robust control system to stabilize structure and minimize undesired motions. Based on extraction of desired modal and control-design data from responses of structure to known vibrational excitations. Applicable to terrestrial structures where vibrations are important - aircraft, buildings, bridges, cranes, and drill strings.

  7. Automation of power unit boiler equipment during the introduction of full-scale automated control systems. Part II

    NASA Astrophysics Data System (ADS)

    Ryashchenko, I. L.; Sukhorukov, I. A.

    2009-06-01

    We describe the fundamental ways for solving problems relating to the design of devices for constructing automated boiler control systems that are used at the Yekaterinburg Branch of ZAO KVARTS Engineering Co. in developing full-scale automated process control systems for the power units of thermal power stations built around the Kosmotronika computerized automation system.

  8. Automation in Clinical Microbiology

    PubMed Central

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  9. Automation of industrial bioprocesses.

    PubMed

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time, the method of controlling industrial bioprocesses has changed completely. In this paper, the authors use a practical approach focusing on the industrial applications of automation systems. From the early attempts to use computers for the automation of biotechnological processes up to modern process automation systems, some milestones are highlighted. Special attention is given to the influence of standards and guidelines on the development of automation systems. PMID:11092132

  10. Automated office blood pressure.

    PubMed

    Myers, Martin G; Godwin, Marshall

    2012-05-01

    Manual blood pressure (BP) is gradually disappearing from clinical practice with the mercury sphygmomanometer now considered to be an environmental hazard. Manual BP is also subject to measurement error on the part of the physician/nurse and patient-related anxiety which can result in poor quality BP measurements and office-induced (white coat) hypertension. Automated office (AO) BP with devices such as the BpTRU (BpTRU Medical Devices, Coquitlam, BC) has already replaced conventional manual BP in many primary care practices in Canada and has also attracted interest in other countries where research studies using AOBP have been undertaken. The basic principles of AOBP include multiple readings taken with a fully automated recorder with the patient resting alone in a quiet room. When these principles are followed, office-induced hypertension is eliminated and AOBP exhibits a much stronger correlation with the awake ambulatory BP as compared with routine manual BP measurements. Unlike routine manual BP, AOBP correlates as well with left ventricular mass as does the awake ambulatory BP. AOBP also simplifies the definition of hypertension in that the cut point for a normal AOBP (< 135/85 mm Hg) is the same as for the awake ambulatory BP and home BP. This article summarizes the currently available evidence supporting the use of AOBP in routine clinical practice and proposes an algorithm in which AOBP replaces manual BP for the diagnosis and management of hypertension. PMID:22265230
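
    The measurement logic described above, averaging multiple automated readings and applying the 135/85 mm Hg cut point, can be sketched directly. The reading-handling details (number of readings, rounding) are illustrative, not a specific device's protocol.

```python
# Sketch of AOBP logic: average several automated readings taken with the
# patient resting alone, then apply the 135/85 mm Hg cut point for a normal
# automated office blood pressure.
def aobp_average(readings):
    # readings: list of (systolic, diastolic) pairs from the automated cuff.
    n = len(readings)
    sys_mean = sum(s for s, _ in readings) / n
    dia_mean = sum(d for _, d in readings) / n
    return round(sys_mean), round(dia_mean)

def is_hypertensive(readings):
    s, d = aobp_average(readings)
    return s >= 135 or d >= 85

readings = [(138, 84), (132, 82), (130, 80)]
avg = aobp_average(readings)
hyp = is_hypertensive(readings)
print(avg, hyp)
```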

  11. Melia azedarach extract stimulates melanogenesis through increase of tyrosinase-related protein 1 expression in B16F10 mouse melanoma cells.

    PubMed

    Yao, Cheng; Jin, Cheng Long; Oh, Inn Gyung; Park, Chi-Hyun; Chung, Jin Ho

    2015-06-01

    Melia azedarach (MA) has been used in folk medicine in Asia for the treatment of several diseases. Several constituents from MA possess anti-herpetic, anti-angiogenic and anticancer properties. The aim of the present study was to investigate the effect of a 70% ethanol extract of MA on melanogenesis and the underlying mechanisms involved. A B16F10 mouse melanoma cell line was used in our experiments. Treatment of B16F10 cells with the MA extract (10, 20 and 40 µg/ml) increased melanin content in a concentration-dependent manner without cytotoxicity at 24 h. Further experiments indicated that the MA extract (20 µg/ml) increased melanin content as early as 4 h after treatment. Additionally, although the MA extract did not affect intracellular tyrosinase activity and the protein levels of tyrosinase and tyrosinase-related protein-2 (TRP-2) at 2 and 4 h after treatment, the MA extract increased TRP-1 protein expression at both time points. However, no significant effect of MA extract treatment on TRP-1 mRNA levels was observed at the time points measured. In conclusion, the results from the present study demonstrate that the MA extract increases melanogenesis through the upregulation of TRP-1 protein expression by post-transcriptional control in B16F10 cells and suggest that the MA extract can be viewed as a rapid inducer of melanogenesis, thus rendering it a potential treatment for hypopigmentation diseases including vitiligo. PMID:25872655

  12. Towards automated traceability maintenance.

    PubMed

    Mäder, Patrick; Gotel, Orlena

    2012-10-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308
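The rule-driven update described above (captured change events grouped into development activities, then matched against predefined rules) can be sketched roughly as follows; the event kinds, rule table, and action names are invented for illustration:

```python
# Illustrative sketch of rule-based traceability maintenance: change events
# from a modeling tool are grouped into an activity, and a rule table maps
# the recognized activity to an update action on impacted relations.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "delete", "add", "modify" (hypothetical kinds)
    element: str   # identifier of the affected model element

# Activity patterns -> traceability update actions (hypothetical rules).
RULES = {
    ("delete", "add"): "retarget",   # element replaced: move its relations
    ("modify",): "revalidate",       # element changed: flag relations for review
}

def recognize(events):
    """Match the captured event sequence against known activity patterns."""
    kinds = tuple(e.kind for e in events)
    return RULES.get(kinds, "manual-review")

print(recognize([Event("delete", "C1"), Event("add", "C2")]))  # retarget
```

Unmatched event sequences fall back to manual review, reflecting the semi-automated nature of the approach.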

  14. The Impact of Office Automation on Libraries.

    ERIC Educational Resources Information Center

    Landau, Robert M.

    1981-01-01

    Discusses the feasibility of integrating present library office functions through the use of voice, text, digital, micrographic, facsimile, and related information technologies, and describes the processes and component features of the automated office. (Author/FM)

  15. Automated DNA Sequencing System

    SciTech Connect

    Armstrong, G.A.; Ekkebus, C.P.; Hauser, L.J.; Kress, R.L.; Mural, R.J.

    1999-04-25

    Oak Ridge National Laboratory (ORNL) is developing a core DNA sequencing facility to support biological research endeavors at ORNL and to conduct basic sequencing automation research. This facility is novel because its development is based on existing standard biology laboratory equipment; thus, the development process is of interest to the many small laboratories trying to use automation to control costs and increase throughput. Before automation, biology laboratory personnel purified DNA, completed cycle sequencing, and prepared 96-well sample plates with commercially available hardware designed specifically for each step in the process. Following purification and thermal cycling, an automated sequencing machine was used for the sequencing. A technician handled all movement of the 96-well sample plates between machines. To automate the process, ORNL is adding a CRS Robotics A-465 arm, ABI 377 sequencing machine, automated centrifuge, automated refrigerator, and possibly an automated SpeedVac. The entire system will be integrated with one central controller that will direct each machine and the robot. The goal of this system is to completely automate the sequencing procedure from bacterial cell samples through ready-to-be-sequenced DNA and ultimately to completed sequence. The system will be flexible and will accommodate different chemistries than existing automated sequencing lines. The system will be expanded in the future to include colony picking and/or actual sequencing. This discrete-event DNA sequencing system will demonstrate that smaller sequencing labs can achieve cost-effective automation as the laboratory grows.

  16. Cockpit Adaptive Automation and Pilot Performance

    NASA Technical Reports Server (NTRS)

    Parasuraman, Raja

    2001-01-01

    The introduction of high-level automated systems in the aircraft cockpit has provided several benefits, e.g., new capabilities, enhanced operational efficiency, and reduced crew workload. At the same time, conventional 'static' automation has sometimes degraded human operator monitoring performance, increased workload, and reduced situation awareness. Adaptive automation represents an alternative to static automation. In this approach, task allocation between human operators and computer systems is flexible and context-dependent rather than static. Adaptive automation, or adaptive task allocation, is thought to provide for regulation of operator workload and performance, while preserving the benefits of static automation. In previous research we have reported beneficial effects of adaptive automation on the performance of both pilots and non-pilots of flight-related tasks. For adaptive systems to be viable, however, such benefits need to be examined jointly in the context of a single set of tasks. The studies carried out under this project evaluated a systematic method for combining different forms of adaptive automation. A model for effective combination of different forms of adaptive automation, based on matching adaptation to operator workload, was proposed and tested. The model was evaluated in studies using IFR-rated pilots flying a general-aviation simulator. Performance, subjective, and physiological (heart rate variability, eye scan-paths) measures of workload were recorded. The studies compared workload-based adaptation to non-adaptive control conditions and found evidence for systematic benefits of adaptive automation. The research provides an empirical basis for evaluating the effectiveness of adaptive automation in the cockpit. The results contribute to the development of design principles and guidelines for the implementation of adaptive automation in the cockpit, particularly in general aviation, and in other human-machine systems.

  17. Assessment of Eyebright (Euphrasia Officinalis L.) Extract Activity in Relation to Human Corneal Cells Using In Vitro Tests

    PubMed Central

    Paduch, Roman; Woźniak, Anna; Niedziela, Piotr; Rejdak, Robert

    2014-01-01

    Background: Euphrasia officinalis L. is an herb traditionally used in folk medicine, mainly in the treatment of eye disorders. Aims: The present study analyzed the activity of three extracts of E. officinalis L. (ethanol, ethyl acetate and heptane) on cultured human corneal epithelial cells (10.014 pRSV-T). Study Design: In vitro study. Methods: Toxicity, free radical scavenging activity and the immunomodulatory effects of the extracts were tested using the thiazolyl blue tetrazolium bromide (MTT) or Neutral Red, 2,2-Diphenyl-1-picrylhydrazyl (DPPH) and ELISA tests, respectively. Moreover, nitric oxide levels and cytoskeleton architecture were analyzed after corneal cell incubation with the plant extracts. Results: We show that the biological effect depended on both the concentration and the extraction solvent used. Heptane extracts, distinct from those in ethanol and ethyl acetate, were toxic to 10.014 pRSV-T cells at low concentrations (25 μg/mL) and did not demonstrate free radical scavenging effects. All tested extracts decreased pro-inflammatory cytokine expression (IL-1β, IL-6 and TNF-α) and also anti-inflammatory IL-10 expression by human corneal cells when the extracts were added to the cell culture medium for 24 h. Conclusion: In conclusion, we show that the promising effects of the application of E. officinalis L. preparations as a supplementary therapy for eye disorders are associated with the ethanol and ethyl acetate extracts, not the heptane extract. PMID:25207164

  18. Enhancing Seismic Calibration Research Through Software Automation

    SciTech Connect

    Ruppert, S; Dodge, D; Elliott, A; Ganzberger, M; Hauk, T; Matzel, E; Ryall, F

    2004-07-09

    observations. Even partial automation of this second tier, through development of prototype tools to extract observations and make many thousands of scientific measurements, has significantly increased the efficiency of the scientists who construct and validate integrated calibration surfaces. This achieved gain in efficiency and quality control is likely to continue and even accelerate through continued application of information science and scientific automation. Data volume and calibration research requirements have increased by several orders of magnitude over the past decade. Whereas it was possible for individual researchers to download individual waveforms and make time-consuming measurements event by event in the past, with the Terabytes of data available today, a software automation framework must exist to efficiently populate and deliver quality data to the researcher. This framework must also simultaneously provide the researcher with robust measurement and analysis tools that can handle and extract groups of events effectively and isolate the researcher from the now onerous task of database management and metadata collection necessary for validation and error analysis. We have succeeded in automating many of the collection, parsing, reconciliation and extraction tasks, individually. Several software automation prototypes have been produced and have resulted in demonstrated gains in efficiency of producing scientific data products. Future software automation tasks will continue to leverage database and information management technologies in addressing additional scientific calibration research tasks.

  19. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
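Step 3 above, finding possible paths from hazard sources to vulnerable entities in the derived architecture model, is essentially graph reachability over a component-connection graph. A minimal sketch under that assumption (the graph, node names, and function are hypothetical; plain BFS stands in for the tool's analysis):

```python
# Hedged sketch of hazard-path analysis: breadth-first search from a hazard
# source through a component-connection graph to vulnerable entities.
from collections import deque

def hazard_paths(graph, source, targets):
    """BFS from a hazard source; return the first path found to each target."""
    paths = {}
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in targets and node not in paths:
            paths[node] = path
        for nxt in graph.get(node, []):
            if nxt not in path:          # avoid cycles
                queue.append(path + [nxt])
    return paths

# Hypothetical component-connection graph for illustration.
g = {"leak": ["valve"], "valve": ["controller"], "controller": ["crew_cabin"]}
print(hazard_paths(g, "leak", {"crew_cabin"}))
```

Each returned path is a candidate scenario that could then seed software integration testing, per step 4.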

  20. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  1. Health care automation companies.

    PubMed

    1995-12-01

    Health care automation companies: card transaction processing/EFT/EDI-capable banks; claims auditing/analysis; claims processors/clearinghouses; coding products/services; computer hardware; computer networking/LAN/WAN; consultants; data processing/outsourcing; digital dictation/transcription; document imaging/optical disk storage; executive information systems; health information networks; hospital/health care information systems; interface engines; laboratory information systems; managed care information systems; patient identification/credit cards; pharmacy information systems; POS terminals; radiology information systems; software--claims related/computer-based patient records/home health care/materials management/supply ordering/physician practice management/translation/utilization review/outcomes; telecommunications products/services; telemedicine/teleradiology; value-added networks. PMID:10153839

  2. An Automated High-Throughput Cell-Based Multiplexed Flow Cytometry Assay to Identify Novel Compounds to Target Candida albicans Virulence-Related Proteins

    PubMed Central

    Bernardo, Stella M.; Allen, Christopher P.; Waller, Anna; Young, Susan M.; Oprea, Tudor; Sklar, Larry A.; Lee, Samuel A.

    2014-01-01

    Although three major classes of systemic antifungal agents are clinically available, each is characterized by important limitations. Thus, there has been considerable ongoing effort to develop novel and repurposed agents for the therapy of invasive fungal infections. In an effort to address these needs, we developed a novel high-throughput, multiplexed screening method that utilizes small molecules to probe candidate drug targets in the opportunistic fungal pathogen Candida albicans. This method is amenable to high-throughput automated screening and is based upon detection of changes in GFP levels of individually tagged target proteins. We first selected four GFP-tagged membrane-bound proteins associated with virulence or antifungal drug resistance in C. albicans. We demonstrated proof-of-principle that modulation of fluorescence intensity can be used to assay the response of specific GFP-tagged target proteins to inhibitors (and inducers), and this change is measurable within the HyperCyt automated flow cytometry sampling system. Next, we generated a multiplex of differentially color-coded C. albicans strains bearing C-terminal GFP-tags of each gene encoding candidate drug targets incubated in the presence of small molecules from the Prestwick Chemical Library in 384-well microtiter plate format. Following incubation, cells were sampled through the HyperCyt system and modulation of protein levels, as indicated by changes in GFP levels of each strain, was used to identify compounds of interest. The hit rate for both inducers and inhibitors identified in the primary screen did not exceed 1% of the total number of compounds in the small-molecule library that was probed, as would be expected from a robust target-specific, high-throughput screening campaign. Secondary assays for virulence characteristics based on null mutant strains were then used to further validate specificity. In all, this study presents a method for the identification and verification of new

  3. Automated endoscope reprocessors.

    PubMed

    Desilets, David; Kaul, Vivek; Tierney, William M; Banerjee, Subhas; Diehl, David L; Farraye, Francis A; Kethu, Sripathi R; Kwon, Richard S; Mamula, Petar; Pedrosa, Marcos C; Rodriguez, Sarah A; Wong Kee Song, Louis-Michel

    2010-10-01

    The ASGE Technology Committee provides reviews of existing, new, or emerging endoscopic technologies that have an impact on the practice of GI endoscopy. Evidence-based methodology is used, with a MEDLINE literature search to identify pertinent clinical studies on the topic and a MAUDE (U.S. Food and Drug Administration Center for Devices and Radiological Health) database search to identify the reported complications of a given technology. Both are supplemented by accessing the "related articles" feature of PubMed and by scrutinizing pertinent references cited by the identified studies. Controlled clinical trials are emphasized, but in many cases data from randomized, controlled trials are lacking. In such cases, large case series, preliminary clinical studies, and expert opinions are used. Technical data are gathered from traditional and Web-based publications, proprietary publications, and informal communications with pertinent vendors. Technology Status Evaluation Reports are drafted by 1 or 2 members of the ASGE Technology Committee, reviewed and edited by the committee as a whole, and approved by the Governing Board of the ASGE. When financial guidance is indicated, the most recent coding data and list prices at the time of publication are provided. For this review, the MEDLINE database was searched through February 2010 for articles related to automated endoscope reprocessors, using the words endoscope reprocessing, endoscope cleaning, automated endoscope reprocessors, and high-level disinfection. Technology Status Evaluation Reports are scientific reviews provided solely for educational and informational purposes. Technology Status Evaluation Reports are not rules and should not be construed as establishing a legal standard of care or as encouraging, advocating, requiring, or discouraging any particular treatment or payment for such treatment. PMID:20883843

  4. Extracting transport barriers and coherent structures

    NASA Astrophysics Data System (ADS)

    Lekien, F.

    2009-04-01

    Barriers to transport in geophysical flows can be rendered using separation indicators, such as relative dispersion, finite-time Lyapunov exponents (FTLE), finite-size Lyapunov exponents (FSLE), and mixing or leakiness rates. These methods provide a geometrical description of passive particle transport and recent developments extend this framework to finite-size particles, buoys, and underwater vehicles. In most methods, the barriers are rendered as ridges of the stretching indicator. I will describe a technique to process the indicator field and to convert the ridges into extractable level sets. The objective is to automate the identification of the coherent structures and to eliminate the need for visual inspection. Automation also makes it possible to increase the resolution dynamically and to render complex barriers at an acceptable computational cost.
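Before ridges can be extracted, the FTLE field itself must be computed: advect a grid of particles through the flow, finite-difference the resulting flow map to get its Jacobian, and take the largest eigenvalue of the Cauchy-Green tensor. A minimal sketch, using the standard double-gyre test field (an assumption; it is not from the abstract) and simple forward Euler advection:

```python
# Hedged FTLE sketch: toy double-gyre velocity field, Euler advection of a
# particle grid, finite-difference flow-map Jacobian, Cauchy-Green eigenvalue.
import numpy as np

def velocity(x, y):
    # Steady double-gyre on [0, 2] x [0, 1].
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def flow_map(x, y, T=1.0, steps=100):
    dt = T / steps
    for _ in range(steps):              # forward Euler advection
        u, v = velocity(x, y)
        x, y = x + dt * u, y + dt * v
    return x, y

def ftle(nx=60, ny=30, T=1.0):
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    fx, fy = flow_map(x, y, T)
    # Finite-difference Jacobian of the flow map.
    dxdx = np.gradient(fx, axis=1) / np.gradient(x, axis=1)
    dxdy = np.gradient(fx, axis=0) / np.gradient(y, axis=0)
    dydx = np.gradient(fy, axis=1) / np.gradient(x, axis=1)
    dydy = np.gradient(fy, axis=0) / np.gradient(y, axis=0)
    # Largest eigenvalue of the Cauchy-Green tensor C = J^T J (symmetric 2x2).
    c11 = dxdx**2 + dydx**2
    c12 = dxdx * dxdy + dydx * dydy
    c22 = dxdy**2 + dydy**2
    lam = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
    return np.log(np.sqrt(np.maximum(lam, 1e-12))) / abs(T)

field = ftle()
print(field.shape)
```

Ridges of `field` are the candidate transport barriers; the abstract's technique then converts those ridges into extractable level sets rather than leaving them to visual inspection.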

  5. Comparison of different extraction methods for the determination of essential oils and related compounds from aromatic plants and optimization of solid-phase microextraction/gas chromatography.

    PubMed

    Richter, Jana; Schellenberg, Ingo

    2007-03-01

    Different extraction methods for the subsequent gas chromatographic determination of the composition of essential oils and related compounds from marjoram (Origanum majorana L.), caraway (Carum carvi L.), sage (Salvia officinalis L.), and thyme (Thymus vulgaris L.) have been compared. The comparison was also discussed with regard to transformation processes of genuine compounds, particularly in terms of expenditure of time. Hydrodistillation is the method of choice for the determination of the essential oil content of plants. For investigating the composition of genuine essential oils and related, aroma-active compounds, hydrodistillation is not very useful, because of discrimination and transformation processes due to high temperatures and acidic conditions. With cold solvent extraction, accelerated solvent extraction, and supercritical fluid extraction, discrimination of high and non-volatile aroma-active components as well as transformation processes can be diminished, but non-aroma-active fats, waxes, or pigments are often extracted, too. As solid-phase microextraction is a solvent-free, fully automatable sample preparation technique, it was the method most sparing of sensitive components and the most time-saving for the rapid determination of the aroma compound composition in marjoram, caraway, sage, and thyme. Finally, solid-phase microextraction could be successfully optimized for the extraction of the aroma components from the plants for their subsequent gas chromatographic determination. PMID:17221240

  6. What Makes a Matrix so Effective? An Empirical Test of the Relative Benefits of Signaling, Extraction, and Localization

    ERIC Educational Resources Information Center

    Kauffman, Douglas F.; Kiewra, Kenneth A.

    2010-01-01

    What type of display helps students learn the most and why? This study investigated how displays differing in terms of signaling, extraction, and localization impact learning. In Experiment 1, 72 students were assigned randomly to one cell of a 4 x 2 design. Students studied a standard text, a text with key ideas extracted, an outline that…

  7. Production and quantification of sesquiterpenes in Saccharomyces cerevisiae, including extraction, detection and quantification of terpene products and key related metabolites.

    PubMed

    Rodriguez, Sarah; Kirby, James; Denby, Charles M; Keasling, Jay D

    2014-08-01

    The procedures described here are designed for engineering Saccharomyces cerevisiae to produce sesquiterpenes with an aim to either increase product titers or to simply generate a quantity of product sufficient for identification and/or downstream experimentation. Engineering high-level sesquiterpene production in S. cerevisiae often requires iterations of strain modifications and metabolite analysis. To address the latter, the methods described here were tailored for robust measurement of metabolites that we have found to be fundamental indicators of pathway flux, using only gas chromatography and mass spectrometry (GC-MS) instrumentation. Thus, by focusing on heterologous production of sesquiterpenes via the mevalonate (MEV) pathway in S. cerevisiae, we detail procedures for extraction and detection of the key pathway metabolites MEV, squalene and ergosterol, as well as the farnesyl pyrophosphate (FPP)-derived side products farnesol and nerolidol. Analysis of these compounds is important for quality control, because they are possible indicators of pathway imbalance. As many of the sesquiterpene synthase (STS) genes encountered in nature are of plant origin and often not optimal for expression in yeast, we provide guidelines for designing gene expression cassettes to enable expression in S. cerevisiae. As a case study for these protocols, we have selected the sesquiterpene amorphadiene, native to Artemisia annua and related plants. The analytical steps can be completed within 1-2 working days, and a typical experiment might take 1 week. PMID:25058645

  8. Glycosaminoglycans in extracts of cardiac amyloid fibrils from familial amyloid cardiomyopathy of Danish origin related to variant transthyretin Met 111.

    PubMed

    Magnus, J H; Stenstad, T; Kolset, S O; Husby, G

    1991-07-01

    We have previously demonstrated an association between secondary AA type amyloid fibrils and glycosaminoglycans (GAGs) in human liver. The present study was aimed at investigating whether a similar association could be demonstrated in isolated cardiac amyloid fibrils from a unique Danish family with amyloid cardiomyopathy related to variant transthyretin (TTR) with a single amino acid substitution of methionine for leucine at position 111 (TTR Met 111). Using gel filtration and ion exchange chromatography, significant amounts of GAGs were detected in close association with purified myocardial amyloid fibrils, whereas only trace amounts of polysaccharides were present in the corresponding normal preparation. The GAGs were identified as 50% chondroitin sulfate, 33% heparin/heparan sulfate, and 17% hyaluronan. With the methods used the amyloid associated GAGs appeared as high molecular weight free polysaccharide chains, and not as part of intact proteoglycans (PGs) in the fibril extracts. We conclude that the association between purified amyloid fibrils and GAGs may be a general feature of amyloid deposits. Also, we suggest that the proportion of different GAGs in the amyloid deposits may depend both on the organ or tissues affected and the type of proteins making up the fibrils. PMID:2068532

  9. Human Communication Needs and Organizational Productivity: The Potential Impact of Office Automation.

    ERIC Educational Resources Information Center

    Culnan, Mary J.; Bair, James H.

    1983-01-01

    This survey of potential impacts of office automation on organizational communication and productivity covers the following--(1) the relationship between office automation and organizational communication; (2) communication variables relevant to office automation; (3) benefits and caveats related to implementation of office automation. Thirty-four…

  10. PAHs, PAH-induced carcinogenic potency, and particle-extract-Induced cytotoxicity of traffic-related nano/ultrafine particles.

    PubMed

    Lin, Chih-Chung; Chen, Shui-Jen; Huang, Kuo-Lin; Lee, Wen-Jhy; Lin, Wen-Yinn; Tsai, Jen-Hsiung; Chaung, Hso-Chi

    2008-06-01

    Polycyclic aromatic hydrocarbons (PAHs) bound in nano/ultrafine particles from vehicle emissions may cause adverse health effects. However, little is known about the characteristics of the nanoparticle-bound PAHs and the PAH-associated carcinogenic potency/cytotoxicity; therefore, traffic-related nano/ultrafine particles were collected in this study using a microorifice uniform deposition impactor (MOUDI) and a nano-MOUDI. For PM0.056-18, the difference in size-distribution of particulate total-PAHs between non-after-rain and after-rain samples was statistically significant at alpha = 0.05; however, this difference was not significant for PM0.01-0.056. The PAH correlation between PM0.01-0.1 and PM0.1-1.8 was lower for the after-rain samples than for the non-after-rain samples. The average particulate total-PAHs in five samplings displayed a trimodal distribution with a major peak in the Aitken mode (0.032-0.056 µm). About half of the particulate total-PAHs were in the ultrafine size range. The BaPeq sums of BaP, IND, and DBA (with toxic equivalence factors ≥ 0.1) accounted for approximately 90% of the total-BaPeq in the nano/ultrafine particles, although these three compounds contributed little to the mass of the sampled particles. The mean content of the particle-bound total-PAHs/-BaPeqs and the PAH/BaPeq-derived carcinogenic potency followed the order nano > ultrafine > fine > coarse. For a sunny day sample, the cytotoxicity of particle extracts (using 1:1 (v/v) n-hexane/dichloromethane) was significantly higher (p < 0.05) for the nano (particularly the 10-18 nm)/ultrafine particles than for the coarser particles and bleomycin. Therefore, traffic-related nano and ultrafine particles are possibly cytotoxic. PMID:18589992
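The BaPeq figure used above is a weighted sum: each PAH concentration is multiplied by its toxic equivalence factor (TEF) relative to benzo[a]pyrene and the products are summed. A sketch of that arithmetic, with illustrative TEF values and made-up concentrations (only the TEF ≥ 0.1 property of BaP, IND, and DBA comes from the abstract):

```python
# BaPeq arithmetic sketch. TEFs here follow a common convention
# (BaP = 1.0, DBA = 1.0, IND = 0.1); schemes vary, so treat these as
# illustrative, not authoritative. Concentrations are invented.
TEF = {"BaP": 1.0, "DBA": 1.0, "IND": 0.1}

def bapeq(conc_ng_m3):
    """Total BaP-equivalent concentration: sum of conc_i * TEF_i."""
    return sum(c * TEF[p] for p, c in conc_ng_m3.items())

sample = {"BaP": 0.8, "DBA": 0.2, "IND": 1.5}
print(bapeq(sample))  # 0.8*1.0 + 0.2*1.0 + 1.5*0.1 = 1.15
```

Because high-TEF compounds dominate the weighted sum, a small mass fraction of BaP, IND, and DBA can still account for most of the total BaPeq, as the abstract reports.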

  11. Laboratory Automation and Middleware.

    PubMed

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. PMID:26065792

  12. Age-related toxicity of amyloid-beta associated with increased pERK and pCREB in primary hippocampal neurons: reversal by blueberry extract

    PubMed Central

    Brewer, Gregory J.; Torricelli, John R.; Lindsey, Amanda L.; Kunz, Elizabeth Z.; Neuman, A.; Fisher, Derek R.; Joseph, James A.

    2009-01-01

    Further clarification is needed to address the paradox that memory formation, aging and neurodegeneration all involve calcium influx, oxyradical production (ROS) and activation of certain signaling pathways. In aged rats and in APP/PS-1 mice, cognitive and hippocampal Ca2+ dysregulation were reversed by food supplementation with a high antioxidant blueberry extract. Here, we studied whether neurons were an important target of blueberry extract and whether the mechanism involved altered ROS signaling through MAPK and CREB, pathways known to be activated in response to amyloid-beta. Primary hippocampal neurons were isolated and cultured from embryonic, middle-age or old-age (24 months) rats. Blueberry extract was found to be equally neuroprotective against amyloid-beta neurotoxicity at all ages. Increases in amyloid-beta toxicity with age were associated with age-related increases in immunoreactivity of neurons to pERK and an age-independent increase in pCREB. Treatment with blueberry extract strongly inhibited these increases in parallel with neuroprotection. Simultaneous labeling for ROS and for glutathione with dichlorofluorescein and monochlorobimane showed a mechanism of action of blueberry extract to involve transient ROS generation with an increase in the redox buffer, glutathione. We conclude that the increased age-related susceptibility of old-age neurons to amyloid-beta toxicity may be due to higher levels of activation of pERK and pCREB pathways that can be protected by blueberry extract through inhibition of both these pathways through an ROS stress response. These results suggest that the beneficial effects of blueberry extract may involve transient stress signaling and ROS protection that may translate into improved cognition in aging rats and APP/PS1 mice given blueberry extract. PMID:19954954

  13. Assessment of the influence of traffic-related particles in urban dust using sequential selective extraction and oral bioaccessibility tests.

    PubMed

    Patinha, C; Durães, N; Sousa, P; Dias, A C; Reis, A P; Noack, Y; Ferreira da Silva, E

    2015-08-01

Urban dust is a heterogeneous mix in which traffic-related particles can combine with soil mineral compounds, forming a unique and site-specific material. These traffic-related particles are usually enriched in potentially harmful elements, increasing the health risk to the population through inhalation or ingestion. Urban dust samples from Estarreja city and traffic-related particles (brake dust and white traffic paint) were studied to understand the relative contribution of traffic particles to the geochemical behaviour of urban dust and to evaluate the long-term impacts of the metals on an urban environment, as well as the risk to the population. It was possible to distinguish two groups of urban dust samples according to Cu behaviour: (1) one group with low amounts of fine particles (<38 µm), low contents of organic material, a high percentage of Cu in soluble phases, and low Cu bioaccessible fraction (Bf) values. This group showed chemical behaviour similar to the brake dust samples of low- to mid-range car brands (more than 10 years old), composed of coarser wear particles; and (2) another group with greater amounts of fine particles (<38 µm), a low percentage of Cu associated with soluble phases, and greater Cu Bf values. This group behaved similarly to the brake dust of mid- to high-range car brands (less than 10 years old). The results obtained showed that there is no direct correlation between the geoavailability of metals estimated by sequential selective chemical extraction (SSCE) and the in vitro oral bioaccessibility (UBM) test. Thus, the oral bioaccessibility of urban dust is site-specific. Geoavailability was greatly dependent on particle size, with bioaccessibility tending to increase as particle diameter decreased. As anthropogenic particles showed higher metal concentrations and a smaller size than mineral particles, urban dusts are of major concern to the population's health, since fine particles are easily re
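The bioaccessible fraction (Bf) reported in the abstract is conventionally the ratio of the concentration extracted in the in vitro bioaccessibility test to the total concentration, expressed as a percentage. A minimal sketch of that calculation follows; the sample names and concentration values are illustrative, not taken from the paper.

```python
def bioaccessible_fraction(bioaccessible_mg_kg: float, total_mg_kg: float) -> float:
    """Bioaccessible fraction (Bf) as a percentage of total metal content."""
    if total_mg_kg <= 0:
        raise ValueError("total concentration must be positive")
    return 100.0 * bioaccessible_mg_kg / total_mg_kg

# Hypothetical Cu concentrations (mg/kg) for two urban dust samples
samples = {
    "coarse_dust": {"ubm_extract": 45.0, "total": 210.0},
    "fine_dust": {"ubm_extract": 160.0, "total": 320.0},
}
for name, c in samples.items():
    bf = bioaccessible_fraction(c["ubm_extract"], c["total"])
    print(f"{name}: Bf = {bf:.1f}%")
```

With these made-up numbers, the finer sample shows the higher Bf, mirroring the abstract's observation that bioaccessibility tends to increase as particle diameter decreases.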

  14. Management Planning for Workplace Automation.

    ERIC Educational Resources Information Center

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  15. Automating occupational protection records systems

    SciTech Connect

    Lyon, M.; Martin, J.B.

    1991-10-01

    Occupational protection records have traditionally been generated by field and laboratory personnel, assembled into files in the safety office, and eventually stored in a warehouse or other facility. Until recently, these records have been primarily paper copies, often handwritten. Sometimes, the paper is microfilmed for storage. However, electronic records are beginning to replace these traditional methods. The purpose of this paper is to provide guidance for making the transition to automated record keeping and retrieval using modern computer equipment. This paper describes the types of records most readily converted to electronic record keeping and a methodology for implementing an automated record system. The process of conversion is based on a requirements analysis to assess program needs and a high level of user involvement during the development. The importance of indexing the hard copy records for easy retrieval is also discussed. The concept of linkage between related records and its importance relative to reporting, research, and litigation will be addressed. 2 figs.
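The indexing and linkage concepts the abstract highlights (indexing records for easy retrieval, and linking related records for reporting, research, and litigation support) can be sketched as a toy in-memory structure. All record categories, field names, and identifiers below are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    record_id: str
    category: str                 # e.g. "dosimetry", "training", "incident"
    worker_id: str
    summary: str
    linked_ids: set = field(default_factory=set)

class RecordIndex:
    """Toy electronic record system: retrieval by worker plus record linkage."""
    def __init__(self):
        self._records = {}        # record_id -> Record
        self._by_worker = {}      # worker_id -> [record_id, ...]

    def add(self, rec: Record) -> None:
        self._records[rec.record_id] = rec
        self._by_worker.setdefault(rec.worker_id, []).append(rec.record_id)

    def link(self, id_a: str, id_b: str) -> None:
        # Bidirectional linkage, e.g. an incident report to its follow-up dosimetry record
        self._records[id_a].linked_ids.add(id_b)
        self._records[id_b].linked_ids.add(id_a)

    def for_worker(self, worker_id: str) -> list:
        return [self._records[i] for i in self._by_worker.get(worker_id, [])]

    def related(self, record_id: str) -> list:
        return [self._records[i] for i in sorted(self._records[record_id].linked_ids)]

# Usage: index two related records for one worker, then retrieve by linkage
idx = RecordIndex()
idx.add(Record("R-001", "incident", "W-42", "Minor contamination event"))
idx.add(Record("R-002", "dosimetry", "W-42", "Follow-up bioassay results"))
idx.link("R-001", "R-002")
print([r.record_id for r in idx.for_worker("W-42")])
print([r.record_id for r in idx.related("R-001")])
```

The point of the sketch is the design choice the abstract argues for: once records are indexed electronically, cross-references that would require a manual file search on paper become a single lookup.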