Science.gov

Sample records for automated change detection

  1. Automated change detection for synthetic aperture sonar

    NASA Astrophysics Data System (ADS)

    G-Michael, Tesfaye; Marchand, Bradley; Tucker, J. D.; Sternlicht, Daniel D.; Marston, Timothy M.; Azimi-Sadjadi, Mahmood R.

    2014-05-01

    In this paper, an automated change detection technique is presented that compares new and historical seafloor images created with sidescan synthetic aperture sonar (SAS) for changes occurring over time. The method consists of a four-stage process: a coarse navigational alignment; fine-scale co-registration using the scale invariant feature transform (SIFT) algorithm to match features between overlapping images; sub-pixel co-registration to improve phase coherence; and finally, change detection utilizing canonical correlation analysis (CCA). The method was tested using data collected with a high-frequency SAS in a sandy shallow-water environment. By using precise co-registration tools and change detection algorithms, it is shown that the coherent nature of the SAS data can be exploited and utilized in this environment over time scales ranging from hours through several days.
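
    The fine-scale co-registration stage can be illustrated with a short sketch. The example below assumes OpenCV is available and that "reference.png" and "repeat.png" (hypothetical file names) are overlapping SAS image tiles; it covers only the SIFT matching and RANSAC alignment, not the subsequent sub-pixel co-registration or the CCA change detector.

    ```python
    # Minimal sketch of SIFT-based co-registration of two sonar image tiles.
    # Assumes OpenCV (cv2) with SIFT support; file names are placeholders.
    import cv2
    import numpy as np

    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # historical image
    new = cv2.imread("repeat.png", cv2.IMREAD_GRAYSCALE)      # new survey image

    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_new, des_new = sift.detectAndCompute(new, None)

    # Match descriptors and keep only unambiguous matches (Lowe ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_new, des_ref, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate a projective transform from the new image to the reference frame.
    src = np.float32([kp_new[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Warp the new image onto the historical one for subsequent change detection.
    aligned = cv2.warpPerspective(new, H, (ref.shape[1], ref.shape[0]))
    ```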

  2. Automated baseline change detection phase I. Final report

    SciTech Connect

    1995-12-01

    The Automated Baseline Change Detection (ABCD) project is supported by the DOE Morgantown Energy Technology Center (METC) as part of its ER&WM cross-cutting technology program in robotics. Phase 1 of the Automated Baseline Change Detection project is summarized in this topical report. The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD), based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. In support of this primary objective, there are secondary objectives to determine DOE operational inspection requirements and DOE system fielding requirements.
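
    The abstract does not give implementation details, but change detection by image subtraction can be sketched in a few lines. This is a minimal illustration, assuming the baseline and current inspection images are grayscale arrays already registered to the same camera pose; the threshold and minimum changed-pixel count are hypothetical parameters.

    ```python
    # Minimal sketch of absolute change detection by image subtraction.
    # Assumes the baseline and current images are grayscale arrays of equal
    # size, already acquired from the same (repeatable) camera position.
    import numpy as np

    def detect_changes(baseline, current, threshold=25, min_pixels=50):
        """Return a boolean change mask and whether a significant change exists."""
        diff = np.abs(current.astype(np.int16) - baseline.astype(np.int16))
        mask = diff > threshold               # per-pixel absolute change
        changed = mask.sum() >= min_pixels    # ignore isolated noisy pixels
        return mask, changed
    ```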

  3. Automated baseline change detection -- Phases 1 and 2. Final report

    SciTech Connect

    Byler, E.

    1997-10-31

    The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD), based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract, DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS), and integrated with its electronics and software. This vehicle was designed especially to navigate in DOE Waste Storage Facilities. Initial system testing was performed at Fernald in June 1996. After further development and more extensive integration, the prototype integrated system was installed and tested at the Radioactive Waste Management Facility (RWMC) at INEEL from April 1997 through the present (November 1997). The integrated system, composed of the ABCD imaging software and the IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator via a GUI and in hard copy. Further work is needed to make the brassboard system more operationally robust.

  4. Automated Detection of Changes on the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Cook, A.; Gibbens, M.

    2005-08-01

    Although the Moon is considered to be geologically dormant, surface-altering events visible to orbiting spacecraft must still occur, albeit infrequently, e.g., fresh impact craters detected in Apollo imagery. Given a surface area of 3.8E7 km2 and the 40-year time frame spanning the Lunar Orbiter to SMART-1 missions, it is likely that tens to hundreds of surface changes measurable at the >50 m scale may be detected by automatically comparing temporal images of the same areas under similar (<5 deg difference) incidence and emission angles. Automated tie-pointing and image footprint overlap detection developed from Clementine stereo research can be used to select suitable overlapping temporal image pairs of a given area. These can then be automatically registered/warped together, photometrically calibrated to each other, and subtracted to leave a difference image. Differences that exceed 3 standard deviations across the image can then be compared to the most recent mosaics of optical maturity in order to confirm whether a suspected area of change is aligned with fresh, non-spaceweathered parts of the surface. Knowledge gained from such a study could include: 1) confirmation of cratering rate assumptions that were made from the Apollo ALSEP seismometers, 2) identification of surface disturbances by ejecta from impacts detected by the Apollo seismometers or Earth-based telescopic impact flash observations; these can then be used to help relate estimated impact energy to crater size, 3) the areal extent of dust transport from impact ejecta, landslides, or other suspected mechanisms such as residual outgassing or electrostatic levitation of dust. All three of these have important implications for future surface-based exploration in identifying sites of interest that can be either monitored over time to study the progression of space weathering or used to study freshly excavated underlying geology.
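
    A minimal sketch of the differencing and 3-sigma flagging step might look like the following, assuming the two images are already tie-pointed, warped to a common grid, and loaded as NumPy arrays; the linear photometric matching used here is an illustrative assumption, not the authors' stated calibration.

    ```python
    # Sketch: photometric matching, differencing and 3-sigma change flagging
    # for two co-registered images of the same lunar area (NumPy arrays).
    import numpy as np

    def flag_changes(old_img, new_img):
        old = old_img.astype(float)
        new = new_img.astype(float)

        # Crude photometric calibration: linearly match the new image statistics
        # to the old image (assumed adequate for similar viewing geometry).
        new = (new - new.mean()) / new.std() * old.std() + old.mean()

        diff = new - old
        sigma = diff.std()
        # Keep only pixels whose difference exceeds 3 standard deviations.
        return np.abs(diff - diff.mean()) > 3.0 * sigma
    ```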

  5. Information Foraging and Change Detection for Automated Science Exploration

    NASA Technical Reports Server (NTRS)

    Furlong, P. Michael; Dille, Michael

    2016-01-01

    This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from extensive, and possibly infeasible, preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability, in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.

  6. SU-E-J-191: Automated Detection of Anatomic Changes in H&N Patients

    SciTech Connect

    Usynin, A; Ramsey, C

    2014-06-01

    Purpose: To develop a novel statistics-based method for automated detection of anatomical changes using cone-beam CT data. A method was developed that can provide a reliable and automated early warning system that enables a “just-in-time” adaptation of the treatment plan. Methods: Anatomical changes were evaluated by comparing the original treatment planning CT with daily CBCT images taken prior to treatment delivery. The external body contour was computed on a given CT slice and compared against the corresponding contour on the daily CBCT. In contrast to threshold-based techniques, a statistical approach was employed to evaluate the difference between the contours at a given confidence level. The detection tool used the two-sample Kolmogorov-Smirnov (KS) test, a non-parametric technique that compares two samples drawn from arbitrary probability distributions. Eleven H&N patients were retrospectively selected from a clinical imaging database, with a total of 186 CBCT images. Six patients in the database were confirmed to have anatomic changes during the course of radiotherapy; five of the H&N patients did not have significant changes. The KS test was applied to the contour data using a sliding window analysis. A confidence level of 0.99 was used to moderate false detections. Results: The algorithm was able to correctly detect anatomical changes in 6 out of 6 patients with excellent spatial accuracy, as early as the 14th elapsed day. The algorithm provided a consistent and accurate delineation of the detected changes. The output of the anatomical change tool is easily interpretable and can be shown overlaid on a 3D rendering of the patient's anatomy. Conclusion: The detection method provides the basis for one of the key components of Adaptive Radiation Therapy. The method uses tools that are readily available in the clinic, including daily CBCT imaging and image co-registration facilities.
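
    The statistical core of the method (a two-sample Kolmogorov-Smirnov comparison of corresponding contour segments within a sliding window) can be sketched as follows; the window length and the radial parameterization of the contours are assumptions made for illustration.

    ```python
    # Sketch: sliding-window two-sample KS test between a planning-CT body
    # contour and the corresponding daily CBCT contour. Both contours are
    # assumed to be resampled to the same number of points (e.g. radial
    # distances from a common centroid).
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_anatomic_change(ct_contour, cbct_contour, window=40, alpha=0.01):
        """Return indices of window centers where the contours differ at the
        0.99 confidence level (alpha = 0.01)."""
        flagged = []
        n = len(ct_contour)
        for start in range(0, n - window):
            a = ct_contour[start:start + window]
            b = cbct_contour[start:start + window]
            stat, p_value = ks_2samp(a, b)
            if p_value < alpha:
                flagged.append(start + window // 2)
        return flagged
    ```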

  7. Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery

    NASA Astrophysics Data System (ADS)

    Kit, Oleksandr; Lüdeke, Matthias

    2013-09-01

    This paper presents an approach to automated identification of slum area change patterns in Hyderabad, India, using multi-year and multi-sensor very high resolution satellite imagery. It relies upon a lacunarity-based slum detection algorithm, combined with Canny- and LSD-based imagery pre-processing routines. This method outputs plausible and spatially explicit slum locations for the whole urban agglomeration of Hyderabad for the years 2003 and 2010. The results indicate a considerable growth of the area occupied by slums between these years and allow identification of trends in slum development in this urban agglomeration.
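
    The lacunarity measure at the core of the slum detector can be sketched with a gliding-box computation over a binary edge map; the implementation below is a generic single-box-size version for illustration, not the authors' calibrated algorithm.

    ```python
    # Sketch: gliding-box lacunarity of a binary edge image at one box size,
    # the texture measure underlying the slum detection step.
    import numpy as np

    def lacunarity(binary_image, box_size):
        """binary_image: 2-D array of 0/1 (e.g. a Canny edge map)."""
        rows, cols = binary_image.shape
        masses = []
        for r in range(0, rows - box_size + 1):
            for c in range(0, cols - box_size + 1):
                masses.append(binary_image[r:r + box_size, c:c + box_size].sum())
        masses = np.array(masses, dtype=float)
        mean = masses.mean()
        # Lacunarity = variance/mean^2 + 1 of the gliding-box masses.
        return masses.var() / (mean ** 2 + 1e-12) + 1.0
    ```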

  8. Automated detection of sperm whale sounds as a function of abrupt changes in sound intensity

    NASA Astrophysics Data System (ADS)

    Walker, Christopher D.; Rayborn, Grayson H.; Brack, Benjamin A.; Kuczaj, Stan A.; Paulos, Robin L.

    2003-04-01

    An algorithm designed to detect abrupt changes in sound intensity was developed and used to identify and count sperm whale vocalizations and to measure boat noise. The algorithm is a MATLAB routine that counts the number of occurrences for which the change in intensity level exceeds a threshold. The algorithm also permits the setting of a "dead time" interval to prevent the counting of multiple pulses within a single sperm whale click. This algorithm was used to analyze digitally sampled recordings of ambient noise obtained from the Gulf of Mexico using near-bottom-mounted EARS buoys deployed as part of the Littoral Acoustic Demonstration Center experiment. Because the background in these data varied slowly, the result of the application of the algorithm was automated detection of sperm whale clicks and creaks, with results that agreed well with those obtained by trained human listeners. [Research supported by ONR.]
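
    The original routine is written in MATLAB; a minimal Python sketch of the same idea (counting threshold crossings of the change in intensity level, with a dead-time lockout so the pulses within one click are counted once) is shown below. The sample rate, threshold, and dead-time values are placeholders.

    ```python
    # Sketch: count abrupt intensity increases with a dead-time lockout,
    # so that the multiple pulses within one sperm whale click count once.
    import numpy as np

    def count_clicks(intensity, fs, threshold_db=6.0, dead_time_s=0.01):
        """intensity: 1-D array of sound intensity samples; fs: sample rate (Hz)."""
        level_db = 10.0 * np.log10(intensity + 1e-12)
        jumps = np.diff(level_db)                 # sample-to-sample change in level
        dead_samples = int(dead_time_s * fs)

        count = 0
        last_detection = -dead_samples
        for i, jump in enumerate(jumps):
            if jump > threshold_db and i - last_detection >= dead_samples:
                count += 1
                last_detection = i
        return count
    ```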

  9. The Challenge of Automated Change Detection: Developing a Method for the Updating of Land Parcels

    NASA Astrophysics Data System (ADS)

    Matikainen, L.; Karila, K.; Litkey, P.; Ahokas, E.; Munck, A.; Karjalainen, M.; Hyyppä, J.

    2012-07-01

    Development of change detection methods that are functional and reliable enough for operational work is still a demanding task. This article discusses automated change detection from the viewpoint of one case study: the Finnish Land Parcel Identification System (FLPIS). The objective of the study is to develop a change detection method that could be used as an aid in the updating of the FLPIS. The method is based on object-based interpretation, and it uses existing parcel boundaries and new aerial orthoimages as input data. Rules for classifying field and non-field objects are defined automatically by using the classification tree method and training data. Additional, manually created rules are used to improve the results. Classification tests carried out during the development work suggest that real changes can be detected relatively well. According to a recent visual evaluation, 96% of changes larger than 100 m2 were detected, at least partly. The overall accuracy of the change detection results was 93% when compared with reference data pixel-by-pixel. On the other hand, there are also missed changes and numerous false alarms. The main challenges encountered in the method development include the wide diversity of agricultural fields and other land cover objects (locally, across the country, and at different times of spring and summer), variability in the digital numbers (DNs) of the aerial images, the differences between visual and automatic interpretation, and the small percentage of the total field area that has actually changed. These challenges and possible solutions are discussed in the article.
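
    A small scikit-learn sketch of the automatically derived classification rules is shown below; the object-level feature names and the toy training table are hypothetical stand-ins for the study's attributes.

    ```python
    # Sketch: learn field / non-field rules from object-level features with a
    # classification tree, then flag parcels whose predicted class disagrees
    # with the existing land parcel database.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical object features: [mean_red, mean_nir, ndvi, texture]
    X_train = np.array([[0.12, 0.45, 0.58, 0.10],
                        [0.30, 0.33, 0.05, 0.40],
                        [0.10, 0.50, 0.67, 0.08],
                        [0.28, 0.30, 0.03, 0.55]])
    y_train = np.array(["field", "non-field", "field", "non-field"])

    tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    def flag_changed_parcels(parcel_features, registered_classes):
        """Return indices of parcels whose image-based class differs from the register."""
        predicted = tree.predict(parcel_features)
        return [i for i, (p, r) in enumerate(zip(predicted, registered_classes)) if p != r]
    ```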

  10. Eigenvector methods for automated detection of electrocardiographic changes in partial epileptic patients.

    PubMed

    Ubeyli, Elif Derya

    2009-07-01

    In this paper, automated diagnostic systems trained on diverse and composite features are presented for the detection of electrocardiographic changes in partial epileptic patients. In practical applications of pattern recognition, there are often diverse features extracted from raw data that must be recognized. Combining multiple classifiers with diverse features is viewed as a general problem in various application areas of pattern recognition. Two types (normal and partial epilepsy) of ECG beats (180 records from each class) were obtained from the Physiobank database. The multilayer perceptron neural network (MLPNN), combined neural network (CNN), mixture of experts (ME), and modified mixture of experts (MME), trained on diverse or composite features, were tested and benchmarked for their performance on the classification of the studied ECG signals. Decision making was performed in two stages: feature extraction by eigenvector methods and classification using the classifiers trained on the extracted features. The present research demonstrated that the MME trained on the diverse features achieved a total classification accuracy of 99.44%, higher than that of the other automated diagnostic systems. PMID:19273021
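
    A compact sketch of the two-stage scheme (eigenvector-based feature extraction followed by a neural classifier) is given below. The eigenanalysis of a beat's autocorrelation matrix stands in for the paper's eigenvector spectral methods, and the matrix order, network size, and training settings are placeholders.

    ```python
    # Sketch: eigenvector-based features from each ECG beat, then a multilayer
    # perceptron classifier trained on those features.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def eigen_features(beat, order=8):
        """Normalized leading eigenvalues of the beat's autocorrelation matrix."""
        lags = [np.correlate(beat, np.roll(beat, k))[0] for k in range(order)]
        # Symmetric Toeplitz autocorrelation matrix built from the lag values.
        R = np.array([[lags[abs(i - j)] for j in range(order)] for i in range(order)])
        eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
        return eigvals / (eigvals.sum() + 1e-12)

    def train_classifier(beats, labels):
        """beats: list of 1-D ECG beat arrays; labels: 'normal' or 'partial epilepsy'."""
        X = np.array([eigen_features(b) for b in beats])
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        return clf.fit(X, labels)
    ```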

  11. Semi-Automated Cloud/shadow Removal and Land Cover Change Detection Using Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sah, A. K.; Sah, B. P.; Honji, K.; Kubo, N.; Senthil, S.

    2012-08-01

    Multi-platform, multi-sensor, and multi-temporal satellite data facilitate the analysis and monitoring of successive change over long periods, and thereby of forest biomass, supporting the REDD mechanism. Historical archive satellite imagery, specifically Landsat, can play an important role in trend analysis of forest cover change at the national level, whereas recent high-resolution satellite imagery, such as that from ALOS, can be used for detailed analysis of present forest cover status. ALOS imagery is particularly suitable because it offers both optical (AVNIR-2) and SAR (PALSAR) data; the multispectral AVNIR-2 data are well suited to extracting forest information. In this study, a semi-automated approach has been devised for cloud/shadow and haze removal and land cover change detection. Cloud and shadow pixels are replaced with cloud-free pixels of the same image with the help of the PALSAR image. Pixel-based land cover change for the 1995-2009 period was tracked for the tropical rain forest area using a combination of Landsat and recent ALOS AVNIR-2 data, applying decision tree classifiers followed by unsupervised classification. NDVI thresholds refined by reflectance values were employed as the decision tree criteria. The results show that all pixels were successfully assigned to the six pre-defined land cover categories, defined in accordance with the IPCC, with an overall accuracy of 80 percent.
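
    A minimal sketch of the NDVI-threshold criterion used in such a decision tree is given below; the band arrays and threshold values are placeholders rather than the study's calibrated criteria.

    ```python
    # Sketch: NDVI computed from red and near-infrared bands, followed by a
    # simple threshold rule of the kind used as a decision-tree criterion.
    import numpy as np

    def ndvi(red, nir):
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + 1e-6)

    def classify_forest(red, nir, ndvi_threshold=0.5, reflectance_max=0.25):
        """Label pixels as forest where NDVI is high and red reflectance is low."""
        v = ndvi(red, nir)
        return (v > ndvi_threshold) & (red < reflectance_max)
    ```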

  12. Tapping into the Hexagon spy imagery database: A new automated pipeline for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Maurer, Joshua; Rupper, Summer

    2015-10-01

    Declassified historical imagery from the Hexagon spy satellite database has near-global coverage, yet remains a largely untapped resource for geomorphic change studies. Unavailable satellite ephemeris data make DEM (digital elevation model) extraction difficult in terms of time and accuracy. A new fully-automated pipeline for DEM extraction and image orthorectification is presented which yields accurate results and greatly increases efficiency over traditional photogrammetric methods, making the Hexagon image database much more appealing and accessible. A 1980 Hexagon DEM is extracted and geomorphic change computed for the Thistle Creek Landslide region in the Wasatch Range of North America to demonstrate an application of the new method. Surface elevation changes resulting from the landslide show an average elevation decrease of 14.4 ± 4.3 m in the source area, an increase of 17.6 ± 4.7 m in the deposition area, and a decrease of 30.2 ± 5.1 m resulting from a new roadcut. Two additional applications of the method include volume estimates of material excavated during the Mount St. Helens volcanic eruption and the volume of net ice loss over a 34-year period for glaciers in the Bhutanese Himalayas. These results show the value of Hexagon imagery in detecting and quantifying historical geomorphic change, especially in regions where other data sources are limited.
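
    The change figures above come from straightforward DEM differencing over mapped regions. A minimal sketch, assuming the 1980 Hexagon DEM and a later DEM are already co-registered on a common grid, is shown below; the quadrature uncertainty estimate is an illustrative assumption, not necessarily the authors' error model.

    ```python
    # Sketch: elevation change from two co-registered DEMs and the mean
    # change (with a simple quadrature uncertainty) inside a region mask.
    import numpy as np

    def elevation_change(dem_old, dem_new, mask, sigma_old=3.0, sigma_new=3.0):
        """Mean elevation change (m) within a boolean mask, with a 1-sigma error."""
        dh = dem_new.astype(float) - dem_old.astype(float)
        mean_change = dh[mask].mean()
        # Per-pixel uncertainties combined in quadrature (assumed uncorrelated).
        sigma_dh = np.hypot(sigma_old, sigma_new)
        return mean_change, sigma_dh
    ```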

  13. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study, the latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m2. It focuses on Pléiades high-resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM. • Filtering and threshold-based classification of

  14. Automated anomaly detection processor

    NASA Astrophysics Data System (ADS)

    Kraiman, James B.; Arouh, Scott L.; Webb, Michael L.

    2002-07-01

    Robust exploitation of tracking and surveillance data will provide an early warning and cueing capability for military and civilian Law Enforcement Agency operations. This will improve dynamic tasking of limited resources and hence operational efficiency. The challenge is to rapidly identify threat activity within a huge background of noncombatant traffic. We discuss development of an Automated Anomaly Detection Processor (AADP) that exploits multi-INT, multi-sensor tracking and surveillance data to rapidly identify and characterize events and/or objects of military interest, without requiring operators to specify threat behaviors or templates. The AADP has successfully detected an anomaly in traffic patterns in Los Angeles, analyzed ship track data collected during a Fleet Battle Experiment to detect simulated mine laying behavior amongst maritime noncombatants, and is currently under development for surface vessel tracking within the Coast Guard's Vessel Traffic Service to support port security, ship inspection, and harbor traffic control missions, and to monitor medical surveillance databases for early alert of a bioterrorist attack. The AADP can also be integrated into combat simulations to enhance model fidelity of multi-sensor fusion effects in military operations.

  15. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  16. Changes in ecosystem resilience detected in automated measures of ecosystem metabolism during a whole-lake manipulation.

    PubMed

    Batt, Ryan D; Carpenter, Stephen R; Cole, Jonathan J; Pace, Michael L; Johnson, Robert A

    2013-10-22

    Environmental sensor networks are developing rapidly to assess changes in ecosystems and their services. Some ecosystem changes involve thresholds, and theory suggests that statistical indicators of changing resilience can be detected near thresholds. We examined the capacity of environmental sensors to assess resilience during an experimentally induced transition in a whole-lake manipulation. A trophic cascade was induced in a planktivore-dominated lake by slowly adding piscivorous bass, whereas a nearby bass-dominated lake remained unmanipulated and served as a reference ecosystem during the 4-y experiment. In both the manipulated and reference lakes, automated sensors were used to measure variables related to ecosystem metabolism (dissolved oxygen, pH, and chlorophyll-a concentration) and to estimate gross primary production, respiration, and net ecosystem production. Thresholds were detected in some automated measurements more than a year before the completion of the transition to piscivore dominance. Directly measured variables (dissolved oxygen, pH, and chlorophyll-a concentration) related to ecosystem metabolism were better indicators of the approaching threshold than were the estimates of rates (gross primary production, respiration, and net ecosystem production); this difference was likely a result of the larger uncertainties in the derived rate estimates. Thus, relatively simple characteristics of ecosystems that were observed directly by the sensors were superior indicators of changing resilience. Models linked to thresholds in variables that are directly observed by sensor networks may provide unique opportunities for evaluating resilience in complex ecosystems. PMID:24101479
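
    The abstract does not name the specific resilience indicators computed from the sensor series, but rolling-window variance and lag-1 autocorrelation are the standard early-warning statistics for an approaching threshold; a generic sketch follows, with the window length as a placeholder.

    ```python
    # Generic sketch: rolling-window variance and lag-1 autocorrelation of a
    # sensor time series (e.g. dissolved oxygen), two common early-warning
    # indicators of changing resilience near a threshold.
    import numpy as np

    def rolling_indicators(series, window=288):   # e.g. 288 samples = 1 day at 5 min
        variances, autocorrs = [], []
        for start in range(len(series) - window):
            w = series[start:start + window]
            variances.append(np.var(w))
            autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])   # lag-1 autocorrelation
        return np.array(variances), np.array(autocorrs)
    ```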

  17. Automated urban change detection using scanned cartographic and satellite image data

    USGS Publications Warehouse

    Spooner, Jeffrey D.

    1991-01-01

    The objective of this study was to develop a digital procedure to measure the amount of urban change that has occurred in an area since the publication of its corresponding 1:24,000-scale topographic map. Traditional change detection techniques are dependent upon the visual comparison of high-altitude aerial photographs or, more recently, satellite image data to a corresponding map. Analytical change detection techniques typically involve the digital comparison of satellite images to one another. As a result of this investigation, a new technique has been developed that analytically compares the most recently published map to a corresponding digital satellite image. Scanned cartographic and satellite image data are combined in a single file with a structural component derived from the satellite image. This investigation determined that with this combination of data the spectral characteristics of urban change are predictable. A supervised classification was used to detect and delimit urban change. Although it was not intended to identify the specific nature of any change, this procedure does provide a means of differentiating between areas that have or have not experienced urbanization to determine appropriate map revision strategies.

  18. Fluorescence detection by intensity changes for high-performance thin-layer chromatography separation of lipids using automated multiple development.

    PubMed

    Cebolla, Vicente L; Jarne, Carmen; Domingo, Pilar; Domínguez, Andrés; Delgado-Camón, Aránzazu; Garriga, Rosa; Galbán, Javier; Membrado, Luis; Gálvez, Eva M; Cossío, Fernando P

    2011-05-13

    Changes in emission of berberine cation, induced by non-covalent interactions with lipids on silica gel plates, can be used for detecting and quantifying lipids using fluorescence scanning densitometry in HPTLC analysis. This procedure, referred to as fluorescence detection by intensity changes (FDIC) has been used here in combination with automated multiple development (HPTLC/AMD), a gradient-based separation HPTLC technique, for separating, detecting and quantifying lipids from different families. Three different HPTLC/AMD gradient schemes have been developed for separating: neutral lipid families and steryl glycosides; different sphingolipids; and sphingosine-sphinganine mixtures. Fluorescent molar responses of studied lipids, and differences in response among different lipid families have been rationalized in the light of a previously proposed model of FDIC response, which is based on ion-induced dipole interactions between the fluorophore and the analyte. Likewise, computational calculations using molecular mechanics have also been a complementary useful tool to explain high FDIC responses of cholesteryl and steryl-derivatives, and moderate responses of sphingolipids. An explanation for the high FDIC response of cholesterol, whose limit of detection (LOD) is 5 ng, has been proposed. Advantages and limitations of FDIC application have also been discussed. PMID:21145556

  19. Detecting Glaucoma Using Automated Pupillography

    PubMed Central

    Tatham, Andrew J.; Meira-Freitas, Daniel; Weinreb, Robert N.; Zangwill, Linda M.; Medeiros, Felipe A.

    2014-01-01

    Objective To evaluate the ability of a binocular automated pupillograph to discriminate healthy subjects from those with glaucoma. Design Cross-sectional observational study. Participants Both eyes of 116 subjects, including 66 patients with glaucoma in at least 1 eye and 50 healthy subjects from the Diagnostic Innovations in Glaucoma Study. Eyes were classified as glaucomatous by repeatable abnormal standard automated perimetry (SAP) or progressive glaucomatous changes on stereophotographs. Methods All subjects underwent automated pupillography using the RAPDx pupillograph (Konan Medical USA, Inc., Irvine, CA). Main Outcome Measures Receiver operating characteristic (ROC) curves were constructed to assess the diagnostic ability of pupil response parameters to white, red, green, yellow, and blue full-field and regional stimuli. A ROC regression model was used to investigate the influence of disease severity and asymmetry on diagnostic ability. Results The largest area under the ROC curve (AUC) for any single parameter was 0.75. Disease asymmetry (P < 0.001), but not disease severity (P = 0.058), had a significant effect on diagnostic ability. At the sample mean age (60.9 years), AUCs for arbitrary values of intereye difference in SAP mean deviation (MD) of 0, 5, 10, and 15 dB were 0.58, 0.71, 0.82, and 0.90, respectively. The mean intereye difference in MD was 2.2±3.1 dB. The best combination of parameters had an AUC of 0.85; however, the cross-validated bias-corrected AUC for these parameters was only 0.74. Conclusions Although the pupillograph had a good ability to detect glaucoma in the presence of asymmetric disease, it performed poorly in those with symmetric disease. PMID:24485921
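
    The diagnostic figures quoted above are areas under ROC curves. A minimal sketch of computing an AUC for a single pupil-response parameter is shown below, assuming scikit-learn; the label and parameter arrays are placeholders, not study data.

    ```python
    # Sketch: area under the ROC curve for one pupil-response parameter.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    labels = np.array([1, 1, 0, 0, 1, 0, 1, 0])          # 1 = glaucoma, 0 = healthy
    parameter = np.array([0.8, 0.6, 0.3, 0.4, 0.9, 0.2, 0.5, 0.35])  # placeholder values

    auc = roc_auc_score(labels, parameter)
    print(f"AUC = {auc:.2f}")
    ```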

  20. LANDSAT image differencing as an automated land cover change detection technique

    NASA Technical Reports Server (NTRS)

    Stauffer, M. L.; Mckinney, R. L.

    1978-01-01

    Image differencing was investigated as a technique for use with LANDSAT digital data to delineate areas of land cover change in an urban environment. LANDSAT data collected in April 1973 and April 1975 for Austin, Texas, were geometrically corrected and precisely registered to United States Geological Survey 7.5-minute quadrangle maps. At each pixel location, reflectance values for the corresponding bands were subtracted to produce four difference images. Areas of major reflectance differences are isolated by thresholding each of the difference images. The resulting images are combined to obtain an image data set of total change. These areas of reflectance differences were found, in general, to correspond to areas of land cover change. Information on areas of land cover change was incorporated into a procedure to mask out all nonchange areas and perform an unsupervised classification only for data in the change areas. This procedure identified three broad categories: (1) areas of high reflectance (construction or extractive), (2) changes in agricultural areas, and (3) areas of confusion between agricultural and other areas.
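
    A compact sketch of the differencing, thresholding, and masked unsupervised classification workflow is shown below, assuming the two dates are already co-registered multiband arrays; the threshold (in standard deviations) and the number of clusters are placeholders.

    ```python
    # Sketch: per-band image differencing, thresholding, union of change masks,
    # and unsupervised classification restricted to the change areas.
    import numpy as np
    from sklearn.cluster import KMeans

    def change_classes(date1, date2, n_sigma=2.0, n_clusters=3):
        """date1, date2: arrays of shape (bands, rows, cols), co-registered."""
        diff = date2.astype(float) - date1.astype(float)

        # Threshold each difference band, then combine into one change mask.
        masks = [np.abs(d - d.mean()) > n_sigma * d.std() for d in diff]
        change_mask = np.logical_or.reduce(masks)

        # Unsupervised classification of only the changed pixels.
        pixels = diff[:, change_mask].T                 # (n_changed, bands)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)

        label_image = np.full(change_mask.shape, -1, dtype=int)
        label_image[change_mask] = labels
        return change_mask, label_image
    ```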

  21. Use of an automated digital images system for detecting plant status changes in response to climate change manipulations

    NASA Astrophysics Data System (ADS)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo

    2014-05-01

    The importance of phenological research for understanding the consequences of global environmental change on vegetation is highlighted in the most recent IPCC reports. Collecting time series of phenological events appears to be of crucial importance to better understand how vegetation systems respond to climatic regime fluctuations and, consequently, to develop effective management and adaptation strategies. However, traditional monitoring of phenology is labor intensive and costly, and is affected to a certain degree by subjective inaccuracy. Other methods used to quantify the seasonal patterns of vegetation development are based on satellite remote sensing (land surface phenology), but they operate at coarse spatial and temporal resolution. To overcome the issues of these methodologies, different approaches for vegetation monitoring based on "near-surface" remote sensing have been proposed in recent studies. In particular, the use of digital cameras has become more common for phenological monitoring. Digital images provide spectral information in the red, green, and blue (RGB) wavelengths. Inflection points in seasonal variations of the intensities of each color channel can be used to identify phenological events. Canopy green-up phenology can be quantified from greenness indices. Species-specific dates of leaf emergence can be estimated by RGB image analyses. In this research, an Automated Phenological Observation System (APOS), based on digital image sensors, was used for monitoring the phenological behavior of shrubland species in a Mediterranean site. The system was developed under the INCREASE (an Integrated Network on Climate Change Research) EU-funded research infrastructure project, which is based upon large-scale field experiments with non-intrusive climatic manipulations. Monitoring of phenological behavior has been conducted continuously since October 2012. The system was set to acquire one panorama per day at noon, which included three experimental plots for
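
    The abstract refers generically to greenness indices; the green chromatic coordinate is the index typically computed from such RGB imagery, and a minimal sketch for one region of interest is shown below (the image path, ROI, and index choice are assumptions).

    ```python
    # Sketch: green chromatic coordinate (GCC) for one region of interest in a
    # daily camera image, the kind of greenness index used to track green-up.
    import numpy as np
    from PIL import Image

    def gcc(image_path, roi):
        """roi = (row_start, row_stop, col_start, col_stop) of the plot in the panorama."""
        rgb = np.asarray(Image.open(image_path), dtype=float)
        r0, r1, c0, c1 = roi
        patch = rgb[r0:r1, c0:c1, :3]
        r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
        return g / (r + g + b + 1e-9)
    ```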

  22. Development of o.a.s.i.s., a new automated blood culture system in which detection is based on measurement of bottle headspace pressure changes.

    PubMed Central

    Stevens, C M; Swaine, D; Butler, C; Carr, A H; Weightman, A; Catchpole, C R; Healing, D E; Elliott, T S

    1994-01-01

    o.a.s.i.s. (Unipath Ltd., Basingstoke, United Kingdom) is a new automated blood culture system. The metabolism of microorganisms is detected by measuring changes in the pressure of the headspace of blood culture bottles. These changes are measured by monitoring the position of a flexible sealing septum, every 5 min, with a scanning laser sensor. This noninvasive system can detect both gas absorption and production and does not rely solely on measuring increasing carbon dioxide levels. A research prototype instrument was used to carry out an evaluation of the media, the detection system, and its associated detection algorithm. In simulated blood cultures, o.a.s.i.s. supported growth and detected a range of clinical isolates. Times to positivity were significantly shorter in o.a.s.i.s. than in the BACTEC 460 system. Results of a clinical feasibility study, with a manual blood culture system as a control, confirmed that o.a.s.i.s. was able to support the growth and detection of a variety of clinically significant organisms. On the basis of these findings, full-scale comparative clinical trials of o.a.s.i.s. with other automated blood culture systems are warranted. PMID:7929769

  23. Noninvasive Measurement of Transient Change in Viscoelasticity Due to Flow-Mediated Dilation Using Automated Detection of Arterial Wall Boundaries

    NASA Astrophysics Data System (ADS)

    Ikeshita, Kazuki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    We measured the stress-strain relationship of the radial arterial wall during a heartbeat noninvasively. In our previous study, the viscoelasticity of the intima-media region was estimated from the stress-strain relationship, and the transient change in viscoelasticity due to flow-mediated dilation (FMD) was estimated. In this estimation, it is necessary to detect the lumen-intima boundary (LIB) and the media-adventitia boundary (MAB). To decrease the operator dependence, in the present study, a method is proposed for automatic and objective boundary detection based on template matching between the measured and adaptive model ultrasonic signals. Using this method, arterial wall boundaries were appropriately detected in in vivo experiments. Furthermore, the transient change in viscoelasticity estimated from the stress-strain relationship was similar to that obtained manually. These results show the feasibility of the proposed method for automatic boundary detection enabling an objective and appropriate analysis of the transient change in viscoelasticity due to FMD.
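
    A minimal sketch of the template-matching idea is shown below: it slides a model echo template along a received ultrasonic line and returns the offset with the highest normalized cross-correlation. The adaptive updating of the model signal described in the abstract is not shown, and the arrays are placeholders.

    ```python
    # Sketch: locate an arterial wall boundary as the position where a model
    # echo template best matches the measured ultrasonic line (normalized
    # cross-correlation peak).
    import numpy as np

    def find_boundary(rf_line, template):
        """Return the sample index where the template best matches the RF line."""
        t = (template - template.mean()) / (template.std() + 1e-12)
        best_idx, best_score = 0, -np.inf
        for start in range(len(rf_line) - len(template)):
            w = rf_line[start:start + len(template)]
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = np.dot(w, t) / len(t)
            if score > best_score:
                best_idx, best_score = start, score
        return best_idx
    ```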

  24. Automated detection of β-amyloid-related cortical and subcortical signal changes in a transgenic model of Alzheimer’s disease using high-field MRI

    PubMed Central

    Teipel, Stefan J.; Kaza, Evangelia; Hadlich, Stefan; Bauer, Alexandra; Brüning, Thomas; Plath, Anne-Sophie; Krohn, Markus; Scheffler, Katja; Walker, Lary C.; Lotze, Martin; Pahnke, Jens

    2010-01-01

    In vivo imaging of β-amyloid load as a biomarker of Alzheimer’s disease (AD) would be of considerable clinical relevance for the early diagnosis and monitoring of treatment effects. Here, we investigated automated quantification of in vivo T2 relaxation time as a surrogate measure of plaque load in the brains of ten APP/PS1 transgenic mice (age 20 weeks) using in vivo MRI acquisitions on a 7T Bruker ClinScan magnet. APP/PS1 mice present with rapid-onset cerebral β-amyloidosis, and were compared with eight age-matched, wild-type control mice (C57Bl/6J) that do not develop Aβ-deposition in brain. Data were analyzed with a novel automated voxel-based analysis that allowed mapping the entire brain for significant signal changes. In APP/PS1 mice, we found a significant decrease in T2 relaxation times in the deeper neocortical layers, caudate-putamen, thalamus, hippocampus and cerebellum compared to wildtype controls. These changes were in line with the histological distribution of cerebral Aβ plaques and activated microglia. Grey matter density did not differ between wild-type mice and APP/PS1 mice, consistent with a lack of neuronal loss in histological investigations. High-field MRI with automated mapping of T2 time changes may be a useful tool for the detection of plaque load in living transgenic animals, which may become relevant for the evaluation of amyloid lowering intervention effects in future studies. PMID:20966552

  25. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. By using state-of-the-art digital imaging technology, with bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is one of the important benchmarks for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and a Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to the segmented images to extract a vector representation of coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. In the case of cloud cover and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental-scale changes in the Antarctic coast. Computing the area of
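
    The segmentation stage combines fuzzy c-means clustering with a Gibbs random field prior. A bare-bones fuzzy c-means loop (without the GRF term or the Gaussian-mixture initialization) is sketched below to show the alternating membership and centroid updates; the cluster count and iteration limit are placeholders.

    ```python
    # Bare-bones fuzzy c-means on image intensities (no Gibbs random field
    # prior, no Gaussian-mixture initialization); for illustration only.
    import numpy as np

    def fuzzy_cmeans(values, n_clusters=3, m=2.0, n_iter=50):
        """values: 1-D array of pixel intensities. Returns centers and memberships."""
        rng = np.random.default_rng(0)
        centers = rng.choice(values, n_clusters).astype(float)
        for _ in range(n_iter):
            # Distances of every pixel to every cluster center.
            d = np.abs(values[None, :] - centers[:, None]) + 1e-9   # shape (c, n)
            # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
            w = d ** (-2.0 / (m - 1.0))
            u = w / w.sum(axis=0, keepdims=True)
            # Center update: weighted mean with weights u^m.
            um = u ** m
            centers = (um * values[None, :]).sum(axis=1) / um.sum(axis=1)
        return centers, u
    ```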

  26. Automated Detection of Events of Scientific Interest

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A report presents a slightly different perspective of the subject matter of Fusing Symbolic and Numerical Diagnostic Computations (NPO-42512), which appears elsewhere in this issue of NASA Tech Briefs. Briefly, the subject matter is the X-2000 Anomaly Detection Language, which is a developmental computing language for fusing two diagnostic computer programs (one implementing a numerical analysis method, the other implementing a symbolic analysis method) into a unified event-based decision analysis software system for real-time detection of events. In the case of the cited companion NASA Tech Briefs article, the contemplated events that one seeks to detect would be primarily failures or other changes that could adversely affect the safety or success of a spacecraft mission. In the case of the instant report, the events to be detected could also include natural phenomena that could be of scientific interest. Hence, the use of the X-2000 Anomaly Detection Language could contribute to a capability for automated, coordinated use of multiple sensors and sensor-output-data-processing hardware and software to effect opportunistic collection and analysis of scientific data.

  27. Automated Microbiological Detection/Identification System

    PubMed Central

    Aldridge, C.; Jones, P. W.; Gibson, S.; Lanham, J.; Meyer, M.; Vannest, R.; Charles, R.

    1977-01-01

    An automated, computerized system, the AutoMicrobic System, has been developed for the detection, enumeration, and identification of bacteria and yeasts in clinical specimens. The biological basis for the system resides in lyophilized, highly selective and specific media enclosed in wells of a disposable plastic cuvette; introduction of a suitable specimen rehydrates and inoculates the media in the wells. An automated optical system monitors, and the computer interprets, changes in the media, with enumeration and identification results automatically obtained in 13 h. Sixteen different selective media were developed and tested with a variety of seeded (simulated) and clinical specimens. The AutoMicrobic System has been extensively tested with urine specimens, using a urine test kit (Identi-Pak) that contains selective media for Escherichia coli, Proteus species, Pseudomonas aeruginosa, Klebsiella-Enterobacter species, Serratia species, Citrobacter freundii, group D enterococci, Staphylococcus aureus, and yeasts (Candida species and Torulopsis glabrata). The system has been tested with 3,370 seeded urine specimens and 1,486 clinical urines. Agreement with simultaneous conventional (manual) cultures, at levels of 70,000 colony-forming units per ml (or more), was 92% or better for seeded specimens; clinical specimens yielded results of 93% or better for all organisms except P. aeruginosa, where agreement was 86%. System expansion in progress includes antibiotic susceptibility testing and compatibility with most types of clinical specimens. PMID:334798

  28. Automated detection of bacteria in urine

    NASA Technical Reports Server (NTRS)

    Fleig, A. J.; Picciolo, G. L.; Chappelle, E. W.; Kelbaugh, B. N.

    1972-01-01

    A method for detecting the presence of bacteria in urine was developed which utilizes the bioluminescent reaction of adenosine triphosphate with luciferin and luciferase derived from the tails of fireflies. The method was derived from work on extraterrestrial life detection. A device was developed which completely automates the assay process.

  29. An Automated Flying-Insect-Detection System

    NASA Technical Reports Server (NTRS)

    Vann, Timi; Andrews, Jane C.; Howell, Dane; Ryan, Robert

    2005-01-01

    An automated flying-insect-detection system (AFIDS) was developed as a proof-of-concept instrument for real-time detection and identification of flying insects. This type of system has use in public health and homeland security decision support, agriculture and military pest management, and/or entomological research. Insects are first lured into the AFIDS integrating sphere by insect attractants. Once inside the sphere, the insect's wing beats cause alterations in light intensity that are detected by a photoelectric sensor. Following detection, the insects are encouraged (with the use of a small fan) to move out of the sphere and into a designated insect trap where they are held for taxonomic identification or serological testing. The acquired electronic wing-beat signatures are preprocessed (Fourier transformed) in real time to display a periodic signal. These signals are sent to the end user where they are graphically displayed. All AFIDS data are pre-processed in the field with the use of a laptop computer equipped with LabVIEW. The AFIDS software can be programmed to run continuously or at specific time intervals when insects are prevalent. A special DC-restored transimpedance amplifier reduces the contributions of low-frequency background light signals, and affords approximately two orders of magnitude greater AC gain than conventional amplifiers. This greatly increases the signal-to-noise ratio and enables the detection of small changes in light intensity. The AFIDS light source consists of high-intensity AlGaInP light-emitting diodes (LEDs). The AFIDS circuitry minimizes brightness fluctuations in the LEDs and, when integrated with an integrating sphere, creates a diffuse uniform light field. The insect wing beats isotropically scatter the diffuse light in the sphere and create wing-beat signatures that are detected by the sensor. This configuration minimizes variations in signal associated with insect flight orientation.
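
    A minimal sketch of the real-time preprocessing step (a Fourier transform of the photosensor signal to expose the periodic wing-beat frequency) is shown below; the sample rate and search band are placeholders.

    ```python
    # Sketch: estimate the dominant wing-beat frequency from the photoelectric
    # sensor signal using an FFT, as in the real-time preprocessing step.
    import numpy as np

    def wingbeat_frequency(signal, fs, fmin=50.0, fmax=1000.0):
        """signal: 1-D sensor samples; fs: sample rate in Hz."""
        signal = signal - signal.mean()               # remove the DC background level
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        band = (freqs >= fmin) & (freqs <= fmax)      # plausible wing-beat band
        return freqs[band][np.argmax(spectrum[band])]
    ```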

  30. Automated image based prominent nucleoli detection

    PubMed Central

    Yap, Choon K.; Kalaw, Emarene M.; Singh, Malay; Chong, Kian T.; Giron, Danilo M.; Huang, Chao-Hui; Cheng, Li; Law, Yan N.; Lee, Hwee Kuan

    2015-01-01

    Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of the cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the use of a single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings. PMID:26167383

  31. Automated Methods for Multiplexed Pathogen Detection

    SciTech Connect

    Straub, Tim M.; Dockendorff, Brian P.; Quinonez-Diaz, Maria D.; Valdez, Catherine O.; Shutthanandan, Janani I.; Tarasevich, Barbara J.; Grate, Jay W.; Bruckner-Lea, Cindy J.

    2005-09-01

    Detection of pathogenic microorganisms in environmental samples is a difficult process. Concentration of the organisms of interest also co-concentrates inhibitors of many end-point detection methods, notably, nucleic acid methods. In addition, sensitive, highly multiplexed pathogen detection continues to be problematic. The primary function of the BEADS (Biodetection Enabling Analyte Delivery System) platform is the automated concentration and purification of target analytes from interfering substances, often present in these samples, via a renewable surface column. In one version of BEADS, automated immunomagnetic separation (IMS) is used to separate cells from their samples. Captured cells are transferred to a flow-through thermal cycler where PCR, using labeled primers, is performed. PCR products are then detected by hybridization to a DNA suspension array. In another version of BEADS, cell lysis is performed, and community RNA is purified and directly labeled. Multiplexed detection is accomplished by direct hybridization of the RNA to a planar microarray. The integrated IMS/PCR version of BEADS can successfully purify and amplify 10 E. coli O157:H7 cells from river water samples. Multiplexed PCR assays for the simultaneous detection of E. coli O157:H7, Salmonella, and Shigella on bead suspension arrays were demonstrated for the detection of as few as 100 cells for each organism. The RNA version of BEADS is also showing promising results. Automation yields highly purified RNA, suitable for multiplexed detection on microarrays, with microarray detection specificity equivalent to PCR. Both versions of the BEADS platform show great promise for automated pathogen detection from environmental samples. Highly multiplexed pathogen detection using PCR continues to be problematic, but may be required for trace detection in large volume samples. The RNA approach solves the issues of highly multiplexed PCR and provides "live vs. dead" capabilities. However

  32. Imaging flow cytometry for automated detection of hypoxia-induced erythrocyte shape change in sickle cell disease.

    PubMed

    van Beers, Eduard J; Samsel, Leigh; Mendelsohn, Laurel; Saiyed, Rehan; Fertrin, Kleber Y; Brantner, Christine A; Daniels, Mathew P; Nichols, James; McCoy, J Philip; Kato, Gregory J

    2014-06-01

    In preclinical and early phase pharmacologic trials in sickle cell disease, the percentage of sickled erythrocytes after deoxygenation, an ex vivo functional sickling assay, has been used as a measure of a patient's disease outcome. We developed a new sickle imaging flow cytometry assay (SIFCA) and investigated its application. To perform the SIFCA, peripheral blood was diluted, deoxygenated (2% oxygen) for 2 hr, fixed, and analyzed using imaging flow cytometry. We developed a software algorithm that correctly classified investigator tagged "sickled" and "normal" erythrocyte morphology with a sensitivity of 100% and a specificity of 99.1%. The percentage of sickled cells as measured by SIFCA correlated strongly with the percentage of sickle cell anemia blood in experimentally admixed samples (R = 0.98, P ≤ 0.001), negatively with fetal hemoglobin (HbF) levels (R = -0.558, P = 0.027), negatively with pH (R = -0.688, P = 0.026), negatively with pretreatment with the antisickling agent, Aes-103 (5-hydroxymethyl-2-furfural) (R = -0.766, P = 0.002), and positively with the presence of long intracellular fibers as visualized by transmission electron microscopy (R = 0.799, P = 0.002). This study shows proof of principle that the automated, operator-independent SIFCA is associated with predictable physiologic and clinical parameters and is altered by the putative antisickling agent, Aes-103. SIFCA is a new method that may be useful in sickle cell drug development. PMID:24585634

  33. Automated detection of solar eruptions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N.

    2015-12-01

    Observation of the solar atmosphere reveals a wide range of motions, from small-scale jets and spicules to global-scale coronal mass ejections (CMEs). Identifying and characterizing these motions are essential to advancing our understanding of the drivers of space weather. Both automated and visual identifications are currently used in identifying coronal mass ejections. To date, eruptions near the solar surface, which may be precursors to CMEs, have been identified primarily by visual inspection. Here we report on Eruption Patrol (EP): a software module that is designed to automatically identify eruptions from data collected by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA). We describe the method underlying the module and compare its results to previous identifications found in the Heliophysics Event Knowledgebase. EP identifies eruption events that are consistent with those found by human annotations, but in a significantly more consistent and quantitative manner. Eruptions are found to be distributed within 15 Mm of the solar surface. They possess peak speeds ranging from 4 to 100 km/s and display a power-law probability distribution over that range. These characteristics are consistent with previous observations of prominences.

  34. Imaging flow cytometry for automated detection of hypoxia-induced erythrocyte shape change in sickle cell disease

    PubMed Central

    van Beers, Eduard J.; Samsel, Leigh; Mendelsohn, Laurel; Saiyed, Rehan; Fertrin, Kleber Y.; Brantner, Christine A.; Daniels, Mathew P.; Nichols, James; McCoy, J. Philip; Kato, Gregory J.

    2014-01-01

    In preclinical and early phase pharmacologic trials in sickle cell disease, the percentage of sickled erythrocytes after deoxygenation, an ex vivo functional sickling assay, has been used as a measure of a patient’s disease outcome. We developed a new sickle imaging flow cytometry assay (SIFCA) and investigated its application. To perform the SIFCA, peripheral blood was diluted, deoxygenated (2% oxygen) for 2 hr, fixed, and analyzed using imaging flow cytometry. We developed a software algorithm that correctly classified investigator tagged “sickled” and “normal” erythrocyte morphology with a sensitivity of 100% and a specificity of 99.1%. The percentage of sickled cells as measured by SIFCA correlated strongly with the percentage of sickle cell anemia blood in experimentally admixed samples (R = 0.98, P ≤ 0.001), negatively with fetal hemoglobin (HbF) levels (R = −0.558, P = 0.027), negatively with pH (R = −0.688, P = 0.026), negatively with pretreatment with the antisickling agent, Aes-103 (5-hydroxymethyl-2-furfural) (R = −0.766, P = 0.002), and positively with the presence of long intracellular fibers as visualized by transmission electron microscopy (R = 0.799, P = 0.002). This study shows proof of principle that the automated, operator-independent SIFCA is associated with predictable physiologic and clinical parameters and is altered by the putative antisickling agent, Aes-103. SIFCA is a new method that may be useful in sickle cell drug development. PMID:24585634

  35. An Automated Flying-Insect Detection System

    NASA Technical Reports Server (NTRS)

    Vann, Timi; Andrews, Jane C.; Howell, Dane; Ryan, Robert

    2007-01-01

    An automated flying-insect detection system (AFIDS) was developed as a proof-of-concept instrument for real-time detection and identification of flying insects. This type of system has use in public health and homeland-security decision support, agriculture and military pest management, and/or entomological research. Insects are first lured into the AFIDS integrating sphere by insect attractants. Once inside the sphere, the insect's wing beats cause alterations in light intensity that are detected by a photoelectric sensor. Following detection, the insects are encouraged (with the use of a small fan) to move out of the sphere and into a designated insect trap where they are held for taxonomic identification or serological testing. The acquired electronic wing-beat signatures are preprocessed (Fourier transformed) in real time to display a periodic signal. These signals are sent to the end user where they are graphically displayed. All AFIDS data are preprocessed in the field with the use of a laptop computer equipped with LabVIEW. The AFIDS software can be programmed to run continuously or at specific time intervals when insects are prevalent. A special DC-restored transimpedance amplifier reduces the contributions of low-frequency background light signals, and affords approximately two orders of magnitude greater AC gain than conventional amplifiers. This greatly increases the signal-to-noise ratio and enables the detection of small changes in light intensity. The AFIDS light source consists of high-intensity AlGaInP light-emitting diodes (LEDs). The AFIDS circuitry minimizes brightness fluctuations in the LEDs and, when integrated with an integrating sphere, creates a diffuse uniform light field. The insect wing beats isotropically scatter the diffuse light in the sphere and create wing-beat signatures that are detected by the sensor. This configuration minimizes variations in signal associated with insect flight orientation. Preliminary data indicate that AFIDS has

  16. Toward Automated Feature Detection in UAVSAR Images

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation over missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise, based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms, shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters, with deviations from -2 suggesting that short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out without consistency from one interferogram to another. We attribute this additional noise afflicting difference estimates at longer distances to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time reflect fault slip and block
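
    The curvature-versus-length-scale statistic mentioned above can be illustrated with a short sketch; this is a generic second-difference (Allan-variance-style) calculation on a detrended profile, not the authors' code, and the sample spacing and lag values are arbitrary choices for the example.

        import numpy as np

        def allan_variance_profile(profile, spacing_m, lags_m):
            """Second-difference (curvature) variance of a detrended profile at several length scales.

            profile: 1-D array of line-of-sight displacement samples along a profile.
            spacing_m: distance between consecutive samples, in metres.
            lags_m: iterable of length scales (metres) at which to evaluate the statistic.
            Returns a list of (lag_m, variance) pairs.
            """
            x = np.asarray(profile, dtype=float)
            results = []
            for lag_m in lags_m:
                k = max(1, int(round(lag_m / spacing_m)))   # lag expressed in samples
                if 2 * k >= x.size:
                    continue
                second_diff = x[2 * k:] - 2.0 * x[k:-k] + x[:-2 * k]
                results.append((k * spacing_m, 0.5 * np.mean(second_diff ** 2)))
            return results

        # Example: evaluate the statistic on a synthetic detrended profile; a real
        # run would feed profiles extracted from the interferogram instead.
        profile = np.random.default_rng(0).normal(size=8000)
        for lag, var in allan_variance_profile(profile, spacing_m=5.0, lags_m=[25, 50, 100, 200, 400]):
            print("lag %5.0f m  variance %.3f" % (lag, var))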

  17. Photoelectric detection system. [manufacturing automation

    NASA Technical Reports Server (NTRS)

    Currie, J. R.; Schansman, R. R. (Inventor)

    1982-01-01

    A photoelectric beam system for the detection of the arrival of an object at a discrete station, wherein artificial light, natural light, or no light may be present, is described. A signal generator turns a signal light on and off at a selected frequency. When the object in question arrives on station, ambient light is blocked by the object, and the light from the signal light is reflected onto a photoelectric sensor which has a delayed electrical output at the frequency of the signal light. Outputs from both the signal source and the photoelectric sensor are fed to the inputs of an exclusive-OR detector which provides as an output the difference between them. The difference signal is a narrow pulse occurring at the frequency of the signal source. By filter means, this signal is distinguished from those responsive to sunlight, darkness, or 120 Hz artificial light. In this fashion, the presence of an object is positively established.

  18. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique

    2007-06-05

    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected in the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are then geometrically evaluated with respect to each other to identify crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation, a determination is made as to whether crystals are present in each image.

  19. Automated detection and classification of dice

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Silva, Jeronimo A.; Carvalho, Fernando D.; Guilherme, Rui; Rodrigues, Fernando C.; de Silva Ferreira, Antonio M.

    1995-03-01

    This paper describes a typical machine vision system in an unusual application, the automated visual inspection of a Casino's playing tables. The SORTE computer vision system was developed at INETI under a contract with the Portuguese Gaming Inspection Authorities IGJ. It aims to automate the tasks of detection and classification of the dice's scores on the playing tables of the game `Banca Francesa' (which means French Banking) in Casinos. The system is based on the on-line analysis of the images captured by a monochrome CCD camera placed over the playing tables, in order to extract relevant information concerning the score indicated by the dice. Image processing algorithms for real time automatic throwing detection and dice classification were developed and implemented.

  20. Automated Wildfire Detection Through Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang, Zhenping; Chi, Yuechen

    2005-01-01

    We have tested and deployed Artificial Neural Network (ANN) data mining techniques to analyze remotely sensed multi-channel imaging data from MODIS, GOES, and AVHRR. The goal is to train the ANN to learn the signatures of wildfires in remotely sensed data in order to automate the detection process. We train the ANN using the set of human-detected wildfires in the U.S., which are provided by the Hazard Mapping System (HMS) wildfire detection group at NOAA/NESDIS. The ANN is trained to mimic the behavior of fire detection algorithms and the subjective decision-making by NOAA HMS Fire Analysts. We use a local extremum search in order to isolate fire pixels, and then we extract a 7x7 pixel array around that location in 3 spectral channels. The corresponding 147 pixel values are used to populate a 147-dimensional input vector that is fed into the ANN. The ANN accuracy is tested and overfitting is avoided by using a subset of the training data that is set aside as a test data set. We have achieved an automated fire detection accuracy of 80-92%, depending on a variety of ANN parameters and for different instrument channels among the 3 satellites. We believe that this system can be deployed worldwide or for any region to detect wildfires automatically in satellite imagery of those regions. These detections can ultimately be used to provide thermal inputs to climate models.
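
    The patch-extraction step described above (a 7x7 window in 3 channels giving a 147-element vector) can be sketched as follows; the function name and the random stand-in imagery are illustrative only, and in the described workflow this would be applied at locations returned by the local extremum search.

        import numpy as np

        def extract_fire_candidate_vector(channels, row, col, half_width=3):
            """Build an ANN input vector from a 7x7 neighbourhood in each spectral channel.

            channels: list of three 2-D arrays (same shape), one per spectral channel.
            row, col: pixel location of a candidate fire (e.g. a local brightness maximum).
            Returns a flat vector of 3 * 7 * 7 = 147 values, or None near the image border.
            """
            patches = []
            for band in channels:
                band = np.asarray(band, dtype=float)
                r0, r1 = row - half_width, row + half_width + 1
                c0, c1 = col - half_width, col + half_width + 1
                if r0 < 0 or c0 < 0 or r1 > band.shape[0] or c1 > band.shape[1]:
                    return None                      # candidate too close to the border
                patches.append(band[r0:r1, c0:c1].ravel())
            return np.concatenate(patches)           # 147-dimensional feature vector

        # Example with random imagery standing in for three satellite channels.
        rng = np.random.default_rng(1)
        bands = [rng.random((500, 500)) for _ in range(3)]
        vec = extract_fire_candidate_vector(bands, 120, 200)
        print(vec.shape)   # (147,)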

  1. Automated assistance for detecting malicious code

    SciTech Connect

    Crawford, R.; Kerchen, P.; Levitt, K.; Olsson, R.; Archer, M.; Casillas, M.

    1993-06-18

    This paper gives an update on the continuing work on the Malicious Code Testbed (MCT). The MCT is a semi-automated tool, operating in a simulated, cleanroom environment, that is capable of detecting many types of malicious code, such as viruses, Trojan horses, and time/logic bombs. The MCT allows security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict.

  2. Automated target detection from compressive measurements

    NASA Astrophysics Data System (ADS)

    Shilling, Richard Z.; Muise, Robert R.

    2016-04-01

    A novel compressive imaging model is proposed that multiplexes segments of the field of view onto an infrared focal plane array (FPA). Similar to the compound eyes of insects, our imaging model is based on combining pixels drawn from different parts of the field of view (FOV). We formalize this superposition of pixels as a global multiplexing process that reduces the resolution requirements of the FPA. We then apply automated target detection algorithms directly to the measurements of this model in a typical missile seeker scene. Based on quadratic correlation filters, we extend the target training and detection processes to operate directly on these encoded measurements. Preliminary results are promising.

  3. Automated Wildfire Detection Through Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang, Zhenping; Chi, Yuechen

    2005-01-01

    Wildfires have a profound impact upon the biosphere and our society in general. They cause loss of life, destruction of personal property and natural resources, and alter the chemistry of the atmosphere. In response to the concern over the consequences of wildland fire and to support the fire management community, the National Oceanic and Atmospheric Administration (NOAA), National Environmental Satellite, Data and Information Service (NESDIS), located in Camp Springs, Maryland, gradually developed an operational system to routinely monitor wildland fire by satellite observations. The Hazard Mapping System, as it is known today, allows a team of trained fire analysts to examine and integrate, on a daily basis, remote sensing data from Geostationary Operational Environmental Satellite (GOES), Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite sensors and generate a 24-hour fire product for the conterminous United States. Although assisted by automated fire detection algorithms, NOAA has not been able to eliminate the human element from its fire detection procedures. As a consequence, the manually intensive effort has prevented NOAA from transitioning to a global fire product as urged particularly by climate modelers. NASA at Goddard Space Flight Center in Greenbelt, Maryland is helping NOAA more fully automate the Hazard Mapping System by training neural networks to mimic the decision-making process of the fire analyst team as well as the automated algorithms.

  4. Automated DNA electrophoresis, hybridization and detection

    SciTech Connect

    Zapolski, E.J.; Gersten, D.M.; Golab, T.J.; Ledley, R.S.

    1986-05-01

    A fully automated, computer-controlled system for nucleic acid hybridization analysis has been devised and constructed. In practice, DNA is digested with restriction endonuclease enzyme(s) and loaded into the system by pipette; ³²P-labelled nucleic acid probe(s) is loaded into the nine hybridization chambers. Instructions for all the steps in the automated process are specified by answering questions that appear on the computer screen at the start of the experiment. Subsequent steps are performed automatically. The system performs horizontal electrophoresis in agarose gel, fixes the fragments to a solid-phase matrix, denatures, neutralizes, prehybridizes, hybridizes, washes, dries, and detects the radioactivity according to the specifications given by the operator. The results, printed out at the end, give the positions on the matrix to which radioactivity remains hybridized following stringent washing.

  5. Automated Detection of HONcode Website Conformity Compared to Manual Detection: An Evaluation

    PubMed Central

    2015-01-01

    of at least 75%, with a recall of more than 50% for contact details (100% precision, 69% recall), authority (85% precision, 52% recall), and reference (75% precision, 56% recall). The results also revealed issues for some criteria such as date. Changing the “document” definition (ie, using the sentence instead of whole document as a unit of classification) within the automated system resolved some but not all of them. Conclusions Study results indicate concordance between automated and expert manual compliance detection for authority, privacy, reference, and contact details. Results also indicate that using the same general parameters for automated detection of each criterion produces suboptimal results. Future work to configure optimal system parameters for each HONcode principle would improve results. The potential utility of integrating automated detection of HONcode conformity into future search engines is also discussed. PMID:26036669

  6. Computing and Office Automation: Changing Variables.

    ERIC Educational Resources Information Center

    Staman, E. Michael

    1981-01-01

    Trends in computing and office automation and their applications, including planning, institutional research, and general administrative support in higher education, are discussed. Changing aspects of information processing and an increasingly larger user community are considered. The computing literacy cycle may involve programming, analysis, use…

  7. Automated detection of Antarctic blue whale calls.

    PubMed

    Socheleau, Francois-Xavier; Leroy, Emmanuelle; Pecci, Andres Carvallo; Samaran, Flore; Bonnel, Julien; Royer, Jean-Yves

    2015-11-01

    This paper addresses the problem of automated detection of Z-calls emitted by Antarctic blue whales (B. m. intermedia). The proposed solution is based on a subspace detector of sigmoidal-frequency signals with unknown time-varying amplitude. This detection strategy takes into account frequency variations of blue whale calls as well as the presence of other transient sounds that can interfere with Z-calls (such as airguns or other whale calls). The proposed method has been tested on more than 105 h of acoustic data containing about 2200 Z-calls (as found by an experienced human operator). This method is shown to have a correct-detection rate up to 15% better than that of the extensible bioacoustic tool package, a spectrogram-based correlation detector commonly used to study blue whales. Because the proposed method relies on subspace detection, it does not suffer from some drawbacks of correlation-based detectors. In particular, it does not require the choice of an a priori fixed and subjective template. The analytic expression of the detection performance is also derived, which provides crucial information for higher-level analyses such as animal density estimation from acoustic data. Finally, the detection threshold automatically adapts to the soundscape in order not to violate a user-specified false alarm rate. PMID:26627784
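
    A generic subspace energy detector, of the broad kind referenced above, can be sketched as below; this is not the authors' detector (it omits the time-varying amplitude model and the adaptive threshold), and the basis construction, frequencies, and noise levels are arbitrary examples.

        import numpy as np

        def subspace_detection_statistic(segment, basis):
            """Fraction of a segment's energy captured by a signal subspace.

            segment: 1-D data vector (e.g. a windowed slice of the hydrophone record).
            basis: 2-D array whose orthonormal columns span the expected call subspace.
            A value near 1 means the segment is well explained by the subspace.
            """
            x = np.asarray(segment, dtype=float)
            U = np.asarray(basis, dtype=float)
            projection = U @ (U.T @ x)                # orthogonal projection onto the subspace
            return float(projection @ projection) / float(x @ x + 1e-12)

        # Example: a subspace spanned by two slowly varying tones, tested against
        # a matching signal and against white noise.
        n = 1024
        t = np.arange(n) / 250.0
        U, _ = np.linalg.qr(np.column_stack([np.sin(2 * np.pi * 26 * t),
                                             np.sin(2 * np.pi * 27 * t)]))
        in_class = np.sin(2 * np.pi * 26 * t) + 0.1 * np.random.default_rng(2).normal(size=n)
        noise = np.random.default_rng(3).normal(size=n)
        print(subspace_detection_statistic(in_class, U))   # close to 1
        print(subspace_detection_statistic(noise, U))      # close to 0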

  8. Automated oil spill detection with multispectral imagery

    NASA Astrophysics Data System (ADS)

    Bradford, Brian N.; Sanchez-Reyes, Pedro J.

    2011-06-01

    In this publication we present an automated detection method for ocean surface oil, like that which existed in the Gulf of Mexico as a result of the April 20, 2010 Deepwater Horizon drilling rig explosion. Regions of surface oil in airborne imagery are isolated using red, green, and blue bands from multispectral data sets. The oil shape isolation procedure involves a series of image processing functions to draw out the visual phenomenological features of the surface oil. These functions include selective color band combinations, contrast enhancement and histogram warping. An image segmentation process then separates out contiguous regions of oil to provide a raster mask to an analyst. We automate the detection algorithm to allow large volumes of data to be processed in a short time period, which can provide timely oil coverage statistics to response crews. Geo-referenced and mosaicked data sets enable the largest identified oil regions to be mapped to exact geographic coordinates. In our simulation, multispectral imagery came from multiple sources including first-hand data collected from the Gulf. Results of the simulation show the oil spill coverage area as a raster mask, along with histogram statistics of the oil pixels. A rough square footage estimate of the coverage is reported if the image ground sample distance is available.
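
    A highly simplified stand-in for the masking and area-statistics steps is sketched below; the single brightness threshold and band combination are placeholders, not the publication's actual contrast-enhancement and histogram-warping chain, and the ground sample distance is an assumed input.

        import numpy as np

        def oil_mask_and_area(red, green, blue, gsd_m, darkness_threshold=0.35):
            """Isolate candidate surface-oil pixels and estimate their coverage area.

            red, green, blue: 2-D reflectance arrays scaled to [0, 1].
            gsd_m: ground sample distance of the imagery, in metres per pixel.
            darkness_threshold: brightness below which a pixel is treated as oil-like
                                (placeholder value; a real workflow would tune this
                                after contrast enhancement and histogram warping).
            Returns (mask, area_m2).
            """
            brightness = (np.asarray(red, float) + np.asarray(green, float)
                          + np.asarray(blue, float)) / 3.0
            mask = brightness < darkness_threshold        # crude stand-in for the band logic
            area_m2 = float(mask.sum()) * gsd_m * gsd_m   # pixel count times pixel footprint
            return mask, area_m2

        # Example with synthetic imagery: a dark patch on a brighter sea surface.
        sea = np.full((400, 400), 0.6)
        sea[100:200, 150:300] = 0.2
        mask, area = oil_mask_and_area(sea, sea, sea, gsd_m=2.0)
        print("oil pixels:", int(mask.sum()), " area: %.0f m^2" % area)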

  9. Human-guided visualization enhances automated target detection

    NASA Astrophysics Data System (ADS)

    Irvine, John M.

    2010-04-01

    Automated target cueing (ATC) can assist analysts with searching large volumes of imagery. Performance of most automated systems is less than perfect, requiring an analyst to review the results to dismiss false alarms or confirm correct detections. This paper explores methods for improving the presentation and visualization of the ATC output, enabling more efficient and effective review of the detections flagged by the ATC. The techniques presented in this paper are applicable to a wide range of search problems using data from different sensor modalities. The information available to the computer increases as ATC detections are either accepted or rejected by the analyst. It is often easy to confirm obviously correct detections and dismiss obvious false alarms, which provides the starting point for the automated updating of the visualization. In machine learning algorithms, this information can be used to retrain or refine the classifier. However, this retraining process is appropriate only when future sensor data is expected to closely resemble the current set. For many applications, the sensor data characteristics (viewing geometry, resolution, clutter complexity, prevalence and types of confusers) are likely to change from one data collection to the next. For this reason, updating the visualization for the current data set, rather than updating the classifier for future processing, may prove more effective. This paper presents an adaptive visualization technique and illustrates the technique with applications.

  10. Automated Detection of Clouds in Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary

    2010-01-01

    Many different approaches have been used to automatically detect clouds in satellite imagery. Most approaches are deterministic and provide a binary cloud/no-cloud product used in a variety of applications. Some of these applications require the identification of cloudy pixels for cloud parameter retrieval, while others require only an ability to mask out clouds for the retrieval of surface or atmospheric parameters in the absence of clouds. A few approaches estimate a probability of the presence of a cloud at each point in an image. These probabilities allow a user to select cloud information based on the tolerance of the application to uncertainty in the estimate. Many automated cloud detection techniques develop sophisticated tests using a combination of visible and infrared channels to determine the presence of clouds in both day and night imagery. Visible channels are quite effective in detecting clouds during the day, as long as test thresholds properly account for variations in surface features and atmospheric scattering. Cloud detection at night is more challenging, since only coarser-resolution infrared measurements are available. A few schemes use just two infrared channels for day and night cloud detection. The most influential factor in the success of a particular technique is the determination of the thresholds for each cloud test. The techniques which perform the best usually have thresholds that are varied based on the geographic region, time of year, time of day and solar angle.
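
    A minimal two-test threshold scheme of the kind described above might look like the sketch below; the threshold values are placeholders, whereas an operational scheme would vary them with region, season, time of day and solar angle, as the abstract notes.

        import numpy as np

        def cloud_mask(visible_reflectance, ir_brightness_temp_k,
                       reflectance_threshold=0.3, cold_threshold_k=265.0):
            """Binary cloud/no-cloud mask from one visible and one infrared channel.

            visible_reflectance: 2-D array of daytime reflectance (0-1); None at night.
            ir_brightness_temp_k: 2-D array of infrared brightness temperature in kelvin.
            The thresholds are illustrative constants, not tuned operational values.
            """
            ir = np.asarray(ir_brightness_temp_k, dtype=float)
            cloudy = ir < cold_threshold_k                      # cold pixels are cloud candidates
            if visible_reflectance is not None:                 # daytime: bright pixels also flagged
                cloudy |= np.asarray(visible_reflectance, dtype=float) > reflectance_threshold
            return cloudy

        # Example: one warm dark pixel (clear) and one cold bright pixel (cloud).
        vis = np.array([[0.05, 0.55]])
        ir = np.array([[292.0, 240.0]])
        print(cloud_mask(vis, ir))   # [[False  True]]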

  11. Automated object detection for astronomical images

    NASA Astrophysics Data System (ADS)

    Orellana, Sonny; Zhao, Lei; Boussalis, Helen; Liu, Charles; Rad, Khosrow; Dong, Jane

    2005-10-01

    Sponsored by the National Aeronautics and Space Administration (NASA), the Synergetic Education and Research in Enabling NASA-centered Academic Development of Engineers and Space Scientists (SERENADES) Laboratory was established at California State University, Los Angeles (CSULA). An important ongoing research activity in this lab is to develop easy-to-use image analysis software with the capability of automated object detection to facilitate astronomical research. This paper presents a fast object detection algorithm based on the characteristics of astronomical images. The algorithm consists of three steps. First, the foreground and background are separated using a histogram-based approach. Second, connectivity analysis is conducted to extract individual objects. The final step is post-processing, which refines the detection results. To improve the detection accuracy when some objects are blocked by clouds, a top-hat transform is employed to split the sky into cloudy and non-cloudy regions. A multi-level thresholding algorithm is developed to select the optimal threshold for different regions. Experimental results show that the proposed approach can successfully detect objects blocked by clouds.
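
    A simplified sketch of the first two steps (histogram-based background separation and connectivity analysis) is given below, assuming SciPy is available; it omits the top-hat transform and multi-level thresholding, and the percentile threshold and minimum region size are arbitrary example values.

        import numpy as np
        from scipy import ndimage

        def detect_point_sources(image, background_percentile=99.5, min_pixels=5):
            """Separate foreground from background with a global histogram threshold,
            then label connected regions as candidate objects.

            image: 2-D array of pixel intensities.
            background_percentile: percentile used as the foreground threshold
                                   (a stand-in for the histogram-based separation step).
            min_pixels: reject labelled regions smaller than this.
            Returns a list of (row, col) centroids of detected objects.
            """
            data = np.asarray(image, dtype=float)
            threshold = np.percentile(data, background_percentile)
            foreground = data > threshold
            labels, count = ndimage.label(foreground)   # connected regions (4-connected by default)
            centroids = []
            for region in range(1, count + 1):
                rows, cols = np.nonzero(labels == region)
                if rows.size >= min_pixels:
                    centroids.append((rows.mean(), cols.mean()))
            return centroids

        # Example: two bright blobs on a noisy synthetic background.
        rng = np.random.default_rng(4)
        sky = rng.normal(100.0, 5.0, size=(200, 200))
        sky[50:54, 60:64] += 200.0
        sky[120:123, 150:153] += 200.0
        print(detect_point_sources(sky))   # two centroids, near (51.5, 61.5) and (121, 151)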

  12. Automated Hydrogen Gas Leak Detection System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Gencorp Aerojet Automated Hydrogen Gas Leak Detection System was developed through the cooperation of industry, academia, and the Government. Although the original purpose of the system was to detect leaks in the main engine of the space shuttle while on the launch pad, it also has significant commercial potential in applications for which there are no existing commercial systems. With high sensitivity, the system can detect hydrogen leaks at low concentrations in inert environments. The sensors are integrated with hardware and software to form a complete system. Several of these systems have already been purchased for use on the Ford Motor Company assembly line for natural gas vehicles. This system to detect trace hydrogen gas leaks from pressurized systems consists of a microprocessor-based control unit that operates a network of sensors. The sensors can be deployed around pipes, connectors, flanges, and tanks of pressurized systems where leaks may occur. The control unit monitors the sensors and provides the operator with a visual representation of the magnitude and locations of the leak as a function of time. The system can be customized to fit the user's needs; for example, it can monitor and display the condition of the flanges and fittings associated with the tank of a natural gas vehicle.

  13. Automated detection of elephants in wildlife video

    PubMed Central

    Zeppelzauer, Matthias

    2015-01-01

    Biologists often have to investigate large amounts of video in behavioral studies of animals. These videos are usually not sufficiently indexed, which makes finding objects of interest a time-consuming task. We propose a fully automated method for the detection and tracking of elephants in wildlife video collected by biologists in the field. The method dynamically learns a color model of elephants from a few training images. Based on the color model, we localize elephants in video sequences with different backgrounds and lighting conditions. We exploit temporal clues from the video to improve the robustness of the approach and to obtain spatially and temporally consistent detections. The proposed method detects elephants (and groups of elephants) of different sizes and poses performing different activities. The method is robust to occlusions (e.g., by vegetation) and correctly handles camera motion and different lighting conditions. Experiments show that both near- and far-distant elephants can be detected and tracked reliably. The proposed method gives biologists efficient and direct access to their video collections, which facilitates further behavioral and ecological studies. The method does not impose hard constraints on the species of elephants themselves and is thus easily adaptable to other animal species. PMID:25902006

  14. Automated System Marketplace 1995: The Changing Face of Automation.

    ERIC Educational Resources Information Center

    Barry, Jeff; And Others

    1995-01-01

    Discusses trends in the automated system marketplace with specific attention to online vendors and their customers: academic, public, school, and special libraries. Presents vendor profiles; tables and charts on computer systems and sales; and sidebars that include a vendor source list and the differing views on procuring an automated library…

  15. Automated detection of Karnal bunt teliospores

    SciTech Connect

    Linder, K.D.; Baumgart, C.; Creager, J.; Heinen, B.; Troupe, T.; Meyer, D.; Carr, J.; Quint, J.

    1998-02-01

    Karnal bunt is a fungal disease which infects wheat and, when present in wheat crops, renders it unsatisfactory for human consumption. Because Karnal bunt (KB) is difficult to detect in the field, samples are taken to laboratories where technicians use microscopes and methodically search for KB teliospores. AlliedSignal Federal Manufacturing and Technologies (FM and T), working with the Kansas Department of Agriculture, created a prototype automated detection system for identifying KB teliospores that utilizes pattern recognition, feature extraction, and neural networks. System hardware consists of a biological compound microscope, motorized stage, CCD camera, frame grabber, and a PC. Integration of the system hardware with custom software comprises the machine vision system. The fundamental processing steps involve capturing an image from the slide while concurrently processing the previous image. Features extracted from the acquired imagery are then processed by a neural network classifier which has been trained to recognize spore-like objects. Images with spore-like objects are reviewed by trained technicians. Benefits of this system include: (1) reduction of the overall cycle time; (2) utilization of technicians for intelligent decision making (vs. manual searching); (3) a regulatory standard which is quantifiable and repeatable; (4) guaranteed 100% coverage of the cover slip; and (5) significantly enhanced detection accuracy.

  16. Automated Detection of Activity Transitions for Prompting

    PubMed Central

    Feuz, Kyle D.; Cook, Diane J.; Rosasco, Cody; Robertson, Kayela; Schmitter-Edgecombe, Maureen

    2016-01-01

    Individuals with cognitive impairment can benefit from intervention strategies like recording important information in a memory notebook. However, training individuals to use the notebook on a regular basis requires a constant delivery of reminders. In this work, we design and evaluate machine learning-based methods for providing automated reminders using a digital memory notebook interface. Specifically, we identify transition periods between activities as times to issue prompts. We consider the problem of detecting activity transitions using supervised and unsupervised machine learning techniques, and find that both techniques show promising results for detecting transition periods. We test the techniques in a scripted setting with 15 individuals, in which motion sensor data are recorded and annotated as participants perform a fixed set of activities. We also test the techniques in an unscripted setting with 8 individuals, in which motion sensor data are recorded as participants go about their normal daily routine. In both the scripted and unscripted settings, a true positive rate of greater than 80% can be achieved while maintaining a false positive rate of less than 15%. On average, this leads to transitions being detected within 1 minute of a true transition for the scripted data and within 2 minutes of a true transition for the unscripted data. PMID:27019791
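
    One simple way to score candidate transition points from motion sensor data, loosely in the spirit of the unsupervised approach above but not the authors' method, is to compare sensor-activity profiles on either side of each time step, as in the sketch below; the window length and synthetic data are illustrative.

        import numpy as np

        def transition_scores(sensor_counts, window=30):
            """Score each time step by how different sensor activity is before and after it.

            sensor_counts: 2-D array, shape (time_steps, n_sensors), of per-step event counts.
            window: number of steps on each side used to form the two activity profiles.
            Returns an array of scores; peaks suggest transitions between activities.
            """
            X = np.asarray(sensor_counts, dtype=float)
            scores = np.zeros(X.shape[0])
            for t in range(window, X.shape[0] - window):
                before = X[t - window:t].mean(axis=0)
                after = X[t:t + window].mean(axis=0)
                scores[t] = np.linalg.norm(after - before)   # large jump in activity profile
            return scores

        # Example: sensor 0 active in the first half, sensor 1 in the second half.
        counts = np.zeros((200, 2))
        counts[:100, 0] = 1.0
        counts[100:, 1] = 1.0
        print(int(np.argmax(transition_scores(counts))))   # near step 100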

  17. Detecting Unidentified Changes

    PubMed Central

    Howe, Piers D. L.; Webb, Margaret E.

    2014-01-01

    Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs by the observers monitoring the global properties of the scene. PMID:24454727

  18. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.

  19. Automated Detection of Opaque Volcanic Plumes in Polar Satellite Data

    NASA Astrophysics Data System (ADS)

    Dehn, J.; Webley, P.

    2013-12-01

    Response to an explosive volcanic eruption is time sensitive, so automated eruption detection techniques are essential to minimize alert times after an event. Automated detection of volcanic ash plumes in satellite imagery is usually done using a variant of the split-window or reverse-absorption method. This method is often effective but requires, among other things, that an ash plume be translucent to allow thermal radiation to pass through it. In the critical first hour or two of an eruption, plumes are most often opaque, and therefore cannot be detected by this method. It has been shown that an emergent plume appears as a sudden cold cloud over a volcano where a weather system should not appear, and this has been applied to geostationary data that is acquired every 15 to 30 minutes and will be an integral part of the upcoming geostationary mission, GOES-R. In this study this concept is used on time-sequential polar orbiting satellite data to detect emergent plumes. This augments geostationary data, and may detect smaller plumes at higher latitudes where geostationary data suffer from poorer spatial resolution. A series of weighted credits and demerits is used to determine the presence of an anomalously cold cloud over a volcano in time-sequential advanced very high resolution radiometer (AVHRR) data. Parameters such as the coldest thermal infrared temperature, time between images, ratio of cold to background temperature, and temperature trend are assigned weighted values, and a threshold is used to determine the presence of an anomalous cloud. The weighting and threshold are unique for each volcano due to weather conditions and satellite coverage. Using the 20-year archive of eruptions in the North Pacific at the Geophysical Institute of the University of Alaska Fairbanks, explosive eruptions were evaluated at Karymsky Volcano (1996), Pavlof Volcano (1996, 2007, 2013), Cleveland Volcano (1994, 2001, 2008), Shishaldin Volcano (1999), Augustine Volcano (2006), Fourpeaked
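
    The weighted credit/demerit scoring described above can be illustrated with a small sketch; the particular evidence terms, weights, and threshold below are invented for the example and are not the operational per-volcano values.

        def plume_alert_score(coldest_temp_k, background_temp_k, hours_since_last_image,
                              temperature_trend_k_per_hr, weights, threshold):
            """Combine weighted credits and demerits into a single anomalous-cloud score.

            Each argument describes one piece of evidence from a pair of time-sequential
            images over a volcano; `weights` is a dict of per-volcano weighting factors
            and `threshold` the per-volcano alert level (both would need to be tuned
            against a historical archive). Returns (score, alert_flag).
            """
            score = 0.0
            score += weights["cold_cloud"] * max(0.0, background_temp_k - coldest_temp_k)
            score += weights["cooling_trend"] * max(0.0, -temperature_trend_k_per_hr)
            score -= weights["stale_image"] * max(0.0, hours_since_last_image - 1.0)
            return score, score >= threshold

        # Example with illustrative weights for a single volcano.
        weights = {"cold_cloud": 1.0, "cooling_trend": 2.0, "stale_image": 3.0}
        print(plume_alert_score(215.0, 255.0, 0.5, -12.0, weights, threshold=50.0))   # (64.0, True)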

  20. Laboratory Detection of Respiratory Viruses by Automated Techniques

    PubMed Central

    Pérez-Ruiz, Mercedes; Pedrosa-Corral, Irene; Sanbonmatsu-Gámez, Sara; Navarro-Marí, José-María

    2012-01-01

    Advances in clinical virology for detecting respiratory viruses have focused on nucleic acid amplification techniques, which have become the reference method for the diagnosis of acute respiratory infections of viral aetiology. Improvements of current commercial molecular assays to reduce hands-on time rely on two strategies: stepwise automation (semi-automation) and complete automation of the whole procedure. Contributions to the former strategy have been the use of automated nucleic acid extractors, multiplex PCR, real-time PCR and/or DNA arrays for detection of amplicons. Commercial fully automated molecular systems are now available for the detection of respiratory viruses. Some of them could become point-of-care methods, substituting for antigen tests in the detection of respiratory syncytial virus and influenza A and B viruses. This article describes laboratory methods for detection of respiratory viruses. A cost-effective and rational diagnostic algorithm is proposed, considering technical aspects of the available assays, the infrastructure possibilities of each laboratory and clinico-epidemiologic factors of the infection. PMID:23248735

  1. Automation: An Illustration of Social Change.

    ERIC Educational Resources Information Center

    Warnat, Winifred I.

    Advanced automation is significantly affecting American society and the individual. To understand the extent of this impact, an understanding of the country's service economy is necessary. The United States made the transition from a goods- to service-based economy shortly after World War II. In 1982, services generated 67% of the Gross National…

  2. Automated RNA Extraction and Purification for Multiplexed Pathogen Detection

    SciTech Connect

    Bruzek, Amy K.; Bruckner-Lea, Cindy J.

    2005-01-01

    Pathogen detection has become an extremely important part of our nation's defense in this post-9/11 world, where the threat of bioterrorist attacks is a grim reality. When a biological attack takes place, response time is critical. The faster the biothreat is assessed, the faster countermeasures can be put in place to protect the health of the general public. Today some of the most widely used methods for detecting pathogens are either time consuming or not reliable [1]. Therefore, a method that can detect multiple pathogens and that is inherently reliable, rapid, automated and field portable is needed. To that end, we are developing automated fluidics systems for the recovery, cleanup, and direct labeling of community RNA from suspect environmental samples. The advantage of using RNA for detection is that there are multiple copies of mRNA in a cell, whereas there are normally only one or two copies of DNA [2]. Because there are multiple copies of mRNA in a cell for highly expressed genes, no amplification of the genetic material may be necessary, and thus rapid and direct detection of only a few cells may be possible [3]. This report outlines the development of both manual and automated methods for the extraction and purification of mRNA. The methods were evaluated using cell lysates from Escherichia coli 25922 (nonpathogenic), Salmonella typhimurium (pathogenic), and Shigella spp. (pathogenic). Automated RNA purification was achieved using a custom sequential injection fluidics system consisting of a syringe pump, a multi-port valve and a magnetic capture cell. mRNA was captured using silica-coated superparamagnetic beads that were trapped in the tubing by a rare earth magnet. RNA was detected by gel electrophoresis and/or by hybridization of the RNA to microarrays. The versatility of the fluidics systems and the ability to automate these systems allow for quick and easy processing of samples and eliminate the need for an experienced operator.

  3. Defect Prevention and Detection in Software for Automated Test Equipment

    SciTech Connect

    E. Bean

    2006-11-30

    Software for automated test equipment can be tedious and monotonous making it just as error-prone as other software. Active defect prevention and detection are also important for test applications. Incomplete or unclear requirements, a cryptic syntax used for some test applications—especially script-based test sets, variability in syntax or structure, and changing requirements are among the problems encountered in one tester. Such problems are common to all software but can be particularly problematic in test equipment software intended to test another product. Each of these issues increases the probability of error injection during test application development. This report describes a test application development tool designed to address these issues and others for a particular piece of test equipment. By addressing these problems in the development environment, the tool has powerful built-in defect prevention and detection capabilities. Regular expressions are widely used in the development tool as a means of formally defining test equipment requirements for the test application and verifying conformance to those requirements. A novel means of using regular expressions to perform range checking was developed. A reduction in rework and increased productivity are the results. These capabilities are described along with lessons learned and their applicability to other test equipment software. The test application development tool, or “application builder”, is known as the PT3800 AM Creation, Revision and Archiving Tool (PACRAT).

  4. Automated Detection of Solar Loops by the Oriented Connectivity Method

    NASA Technical Reports Server (NTRS)

    Lee, Jong Kwan; Newman, Timothy S.; Gary, G. Allen

    2004-01-01

    An automated technique to segment solar coronal loops from intensity images of the Sun's corona is introduced. It exploits physical characteristics of the solar magnetic field to enable robust extraction from noisy images. The technique is a constructive curve detection approach, constrained by collections of estimates of the magnetic field's orientation. Its effectiveness is evaluated through experiments on synthetic and real coronal images.

  5. Automated ultrasonic arterial vibrometry: detection and measurement

    NASA Astrophysics Data System (ADS)

    Plett, Melani I.; Beach, Kirk W.; Paun, Marla

    2000-04-01

    Since the invention of the stethoscope, the detection of vibrations and sounds from the body has been a touchstone of diagnosis. However, the method is limited to vibrations whose associated sounds transmit to the skin, with no means to determine the anatomic and physiological source of the vibrations save the cunning of the examiner. Using ultrasound quadrature phase demodulation methods similar to those of ultrasonic color flow imaging, we have developed a system to detect and measure tissue vibrations with amplitude excursions as small as 30 nanometers. The system uses wavelet analysis for sensitive and specific detection, as well as measurement, of short duration vibrations amidst clutter and time-varying, colored noise. Vibration detection rates in ROC curves from simulated data predict > 99.5% detections with < 1% false alarms for signal to noise ratios >= 0.5. Vibrations from in vivo arterial stenoses and punctures have been studied. The results show that vibration durations vary from 10 - 150 ms, frequencies from 100 - 1000 Hz, and amplitudes from 30 nanometers to several microns. By marking the location of vibration sources on ultrasound images, and using color to indicate amplitude, frequency or acoustic intensity, new diagnostic information is provided to aid disorder diagnosis and management.

  6. Automated Detection of Stereotypical Motor Movements

    ERIC Educational Resources Information Center

    Goodwin, Matthew S.; Intille, Stephen S.; Albinali, Fahd; Velicer, Wayne F.

    2011-01-01

    To overcome problems with traditional methods for measuring stereotypical motor movements in persons with Autism Spectrum Disorders (ASD), we evaluated the use of wireless three-axis accelerometers and pattern recognition algorithms to automatically detect body rocking and hand flapping in children with ASD. Findings revealed that, on average,…

  7. Automated Human Screening for Detecting Concealed Knowledge

    ERIC Educational Resources Information Center

    Twyman, Nathan W.

    2012-01-01

    Screening individuals for concealed knowledge has traditionally been the purview of professional interrogators investigating a crime. But the ability to detect when a person is hiding important information would be of high value to many other fields and functions. This dissertation proposes design principles for and reports on an implementation…

  8. SAR change detection MTI

    NASA Astrophysics Data System (ADS)

    Scarborough, Steven; Lemanski, Christopher; Nichols, Howard; Owirka, Gregory; Minardi, Michael; Hale, Todd

    2006-05-01

    This paper examines the theory, application, and results of using single-channel synthetic aperture radar (SAR) data with Moving Reference Processing (MRP) to focus and geolocate moving targets. Moving targets within a standard SAR imaging scene are defocused, displaced, or completely missing in the final image. Building on previous research at AFRL, the SAR-MRP method focuses and geolocates moving targets by reprocessing the SAR data to focus the movers rather than the stationary clutter. SAR change detection is used so that target detection and focusing is performed more robustly. In the cases where moving target returns possess the same range versus slow-time histories, a geolocation ambiguity results. This ambiguity can be resolved in a number of ways. This paper concludes by applying the SAR-MRP method to high-frequency radar measurements from persistent continuous-dwell SAR observations of a moving target.

  9. BacT/Alert: an automated colorimetric microbial detection system.

    PubMed Central

    Thorpe, T C; Wilson, M L; Turner, J E; DiGuiseppi, J L; Willert, M; Mirrett, S; Reller, L B

    1990-01-01

    BacT/Alert (Organon Teknika Corp., Durham, N.C.) is an automated microbial detection system based on the colorimetric detection of CO2 produced by growing microorganisms. Results of an evaluation of the media, sensor, detection system, and detection algorithm indicate that the system reliably grows and detects a wide variety of bacteria and fungi. Results of a limited pilot clinical trial with a prototype research instrument indicate that the system is comparable to the radiometric BACTEC 460 system in its ability to grow and detect microorganisms in blood. On the basis of these initial findings, large-scale clinical trials comparing BacT/Alert with other commercial microbial detection systems appear warranted. PMID:2116451

  10. Automated detection of geomagnetic storms with heightened risk of GIC

    NASA Astrophysics Data System (ADS)

    Bailey, Rachel L.; Leonhardt, Roman

    2016-06-01

    Automated detection of geomagnetic storms is of growing importance to operators of technical infrastructure (e.g., power grids, satellites), which is susceptible to damage caused by the consequences of geomagnetic storms. In this study, we compare three methods for automated geomagnetic storm detection: a method analyzing the first derivative of the geomagnetic variations, another looking at the Akaike information criterion, and a third using multi-resolution analysis of the maximal overlap discrete wavelet transform of the variations. These detection methods are used in combination with an algorithm for the detection of coronal mass ejection shock fronts in ACE solar wind data prior to the storm arrival on Earth as an additional constraint for possible storm detection. The maximal overlap discrete wavelet transform is found to be the most accurate of the detection methods. The final storm detection software, implementing analysis of both satellite solar wind and geomagnetic ground data, detects 14 of 15 more powerful geomagnetic storms over a period of 2 years.
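
    Of the three methods compared above, the first-derivative approach is the simplest to illustrate; the sketch below flags samples whose rate of change exceeds a threshold, with the threshold and one-minute cadence chosen arbitrarily for the example (the study found the wavelet-based method most accurate).

        import numpy as np

        def derivative_storm_onsets(h_component_nt, sample_interval_s, threshold_nt_per_min):
            """Flag samples where the horizontal field changes faster than a threshold.

            h_component_nt: 1-D array of the geomagnetic H component in nanotesla.
            sample_interval_s: spacing of the samples in seconds (e.g. 60 for 1-min data).
            threshold_nt_per_min: rate-of-change level treated as storm-like.
            Returns indices where the onset criterion is met.
            """
            h = np.asarray(h_component_nt, dtype=float)
            rate_nt_per_min = np.gradient(h) * (60.0 / sample_interval_s)
            return np.nonzero(np.abs(rate_nt_per_min) > threshold_nt_per_min)[0]

        # Example: a quiet record with one sharp 80 nT jump between consecutive minutes.
        quiet = np.cumsum(np.random.default_rng(5).normal(0.0, 0.5, size=600))
        quiet[300:] += 80.0
        print(derivative_storm_onsets(quiet, sample_interval_s=60.0, threshold_nt_per_min=30.0))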

  11. Method and automated apparatus for detecting coliform organisms

    NASA Technical Reports Server (NTRS)

    Dill, W. P.; Taylor, R. E.; Jeffers, E. L. (Inventor)

    1980-01-01

    Method and automated apparatus are disclosed for determining the time of detection of metabolically produced hydrogen by coliform bacteria cultured in an electroanalytical cell, measured from the time the cell is inoculated with the bacteria. The detection time data provide bacteria concentration values. The apparatus is sequenced and controlled by a digital computer to discharge a spent sample, clean and sterilize the culture cell, provide a bacteria nutrient into the cell, control the temperature of the nutrient, inoculate the nutrient with a bacteria sample, measure the electrical potential difference produced by the cell, and measure the time of detection from inoculation.

  12. Change Detection: Training and Transfer

    PubMed Central

    Gaspar, John G.; Neider, Mark B.; Simons, Daniel J.; McCarley, Jason S.; Kramer, Arthur F.

    2013-01-01

    Observers often fail to notice even dramatic changes to their environment, a phenomenon known as change blindness. If training could enhance change detection performance in general, then it might help to remedy some real-world consequences of change blindness (e.g. failing to detect hazards while driving). We examined whether adaptive training on a simple change detection task could improve the ability to detect changes in untrained tasks for young and older adults. Consistent with an effective training procedure, both young and older adults were better able to detect changes to trained objects following training. However, neither group showed differential improvement on untrained change detection tasks when compared to active control groups. Change detection training led to improvements on the trained task but did not generalize to other change detection tasks. PMID:23840775

  13. Automated detection, characterization, and tracking of filaments from SDO data

    NASA Astrophysics Data System (ADS)

    Buchlin, Eric; Vial, Jean-Claude; Mercier, Claude

    2016-07-01

    Thanks to the cadence and continuity of AIA and HMI observations, SDO offers unique data for detecting, characterizing, and tracking solar filaments until their eruptions, which are often associated with coronal mass ejections. Because of the short latency required for space weather applications, and because of the large data volume, only an automated detection is practical. We present the code "FILaments, Eruptions, and Activations detected from Space" (FILEAS), which we have developed for the automated detection and tracking of filaments. Detections are based on the analysis of AIA 30.4 nm He II images and on the magnetic polarity inversion lines derived from HMI. By tracking filaments as they rotate with the Sun, filament characteristics are computed and a database of filament parameters is built. We present the algorithms and performance of the code, and we compare its results with the filaments detected in Hα and already present in the Heliophysics Events Knowledgebase. We finally discuss the possibility of using such a code to detect eruptions in real time.

  14. Towards an Automated Acoustic Detection System for Free Ranging Elephants

    PubMed Central

    Zeppelzauer, Matthias; Hensman, Sean; Stoeger, Angela S.

    2015-01-01

    The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings. PMID:25983398

  15. Automated Imaging Techniques for Biosignature Detection in Geologic Samples

    NASA Astrophysics Data System (ADS)

    Williford, K. H.

    2015-12-01

    Robust biosignature detection in geologic samples typically requires the integration of morphological/textural data with biogeochemical data across a variety of scales. We present new automated imaging and coordinated biogeochemical analysis techniques developed at the JPL Astrobiogeochemistry Laboratory (abcLab) in support of biosignature detection in terrestrial samples as well as those that may eventually be returned from Mars. Automated gigapixel mosaic imaging of petrographic thin sections in transmitted and incident light (including UV epifluorescence) is supported by a microscopy platform with a digital XYZ stage. Images are acquired, processed, and co-registered using multiple software platforms at JPL and can be displayed and shared using Gigapan, a freely available, web-based toolset. Automated large area (cm-scale) elemental mapping at sub-micrometer spatial resolution is enabled by a variable pressure scanning electron microscope (SEM) with a large (150 mm2) silicon drift energy dispersive spectroscopy (EDS) detector system. The abcLab light and electron microscopy techniques are augmented by additional elemental chemistry, mineralogy and organic detection/classification using laboratory Micro-XRF and UV Raman/fluorescence systems, precursors to the PIXL and SHERLOC instrument platforms selected for flight on the NASA Mars 2020 rover mission. A workflow including careful sample preparation followed by iterative gigapixel imaging, SEM/EDS, Micro-XRF and UV fluorescence/Raman in support of organic, mineralogic, and elemental biosignature target identification, and follow-up analysis with other techniques including secondary ion mass spectrometry (SIMS), will be discussed.

  16. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
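
    The in/out-of-cloud state change with a hysteresis constraint can be illustrated as below; the use of a single droplet-concentration threshold, the parameter names, and the sample values are assumptions for the sketch and do not reproduce the algorithm's actual cloud-physics and radar criteria.

        def cloud_edges(droplet_counts, in_cloud_threshold, hold_samples):
            """Locate cloud boundaries from an in/out state that must persist to count.

            droplet_counts: sequence of cloud-physics samples (e.g. droplet concentration).
            in_cloud_threshold: level above which a single sample looks "in cloud".
            hold_samples: number of consecutive samples the new state must persist before
                          a boundary is declared (the hysteresis constraint that suppresses
                          transient puffs and data dropouts).
            Returns a list of (index, "entry" or "exit") boundaries.
            """
            edges = []
            state = droplet_counts[0] > in_cloud_threshold
            run_start, run_state = None, state
            for i, value in enumerate(droplet_counts):
                sample_state = value > in_cloud_threshold
                if sample_state != state:
                    if run_start is None or sample_state != run_state:
                        run_start, run_state = i, sample_state   # start tracking the new state
                    if i - run_start + 1 >= hold_samples:        # new state has persisted
                        edges.append((run_start, "entry" if sample_state else "exit"))
                        state = sample_state
                        run_start = None
                else:
                    run_start = None                             # streak broken, reset
            return edges

        # Example: a one-sample puff is ignored; the sustained cloud is reported.
        trace = [1, 1, 9, 1, 1, 8, 9, 9, 9, 9, 1, 1, 1, 1]
        print(cloud_edges(trace, in_cloud_threshold=5, hold_samples=3))
        # [(5, 'entry'), (10, 'exit')]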

  17. Automated spoof-detection for fingerprints using optical coherence tomography.

    PubMed

    Darlow, Luke Nicholas; Webb, Leandra; Botha, Natasha

    2016-05-01

    Fingerprint recognition systems are prevalent in high-security applications. As a result, the act of spoofing these systems with artificial fingerprints is of increasing concern. This research presents an automatic means for spoof-detection using optical coherence tomography (OCT). This technology is able to capture a 3D representation of the internal structure of the skin and is thus not limited to a 2D surface scan. The additional information afforded by this representation means that accurate spoof-detection can be achieved. Two features were extracted to detect the presence of (1) an additional thin layer on the surface of the skin and (2) a thicker additional layer or a complete artificial finger. An analysis of these features showed that they are highly separable, resulting in 100% accuracy regarding spoof-detection, with no false rejections of real fingers. This is the first attempt at fully automated spoof-detection using OCT. PMID:27140346

  18. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography

    PubMed Central

    Liu, Li; Gao, Simon S.; Bailey, Steven T.; Huang, David; Li, Dengwang; Jia, Yali

    2015-01-01

    Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area. PMID:26417524

  19. Automated detection of asynchrony in patient-ventilator interaction.

    PubMed

    Mulqueeny, Qestra; Redmond, Stephen J; Tassaux, Didier; Vignaux, Laurence; Jolliet, Philippe; Ceriana, Piero; Nava, Stefano; Schindhelm, Klaus; Lovell, Nigel H

    2009-01-01

    An automated classification algorithm for the detection of expiratory ineffective efforts in patient-ventilator interaction is developed and validated. Using this algorithm, 5624 breaths from 23 patients in a pulmonary ward were examined. The participants (N = 23) underwent both conventional and non-invasive ventilation. Tracings of patient flow, pressure at the airway, and transdiaphragmatic pressure were manually labeled by an expert. Overall accuracy of 94.5% was achieved with sensitivity 58.7% and specificity 98.7%. The results demonstrate the viability of using pattern classification techniques to automatically detect the presence of asynchrony between a patient and their ventilator. PMID:19963896

  20. Automation and Technological Change in Banking.

    ERIC Educational Resources Information Center

    STEINER, CARL L.

    The purposes of this study were to determine the personnel change directly resulting from the installation of electronic data processing in one of the large commercial banks in Baltimore, to describe the processes and job duties involved, and to indicate how changes have affected employment and what may be expected in the future. The use of the…

  1. Multisensor Fusion for Change Detection

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.

    2005-12-01

    Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor-invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss ways to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable for dealing with lineal and areal features. In contrast to traditional, point-based registration methods, lineal and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances of automating the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning about extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach

  2. An Automated Motion Detection and Reward System for Animal Training

    PubMed Central

    Miller, Brad; Lim, Audrey N; Heidbreder, Arnold F

    2015-01-01

    A variety of approaches has been used to minimize head movement during functional brain imaging studies in awake laboratory animals. Many laboratories expend substantial effort and time training animals to remain essentially motionless during such studies. We could not locate an “off-the-shelf” automated training system that suited our needs.  We developed a time- and labor-saving automated system to train animals to hold still for extended periods of time. The system uses a personal computer and modest external hardware to provide stimulus cues, monitor movement using commercial video surveillance components, and dispense rewards. A custom computer program automatically increases the motionless duration required for rewards based on performance during the training session but allows changes during sessions. This system was used to train cynomolgus monkeys (Macaca fascicularis) for awake neuroimaging studies using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). The automated system saved the trainer substantial time, presented stimuli and rewards in a highly consistent manner, and automatically documented training sessions. We have limited data to prove the training system's success, drawn from the automated records during training sessions, but we believe others may find it useful. The system can be adapted to a range of behavioral training/recording activities for research or commercial applications, and the software is freely available for non-commercial use. PMID:26798573

  3. An Automated Motion Detection and Reward System for Animal Training.

    PubMed

    Miller, Brad; Lim, Audrey N; Heidbreder, Arnold F; Black, Kevin J

    2015-01-01

    A variety of approaches has been used to minimize head movement during functional brain imaging studies in awake laboratory animals. Many laboratories expend substantial effort and time training animals to remain essentially motionless during such studies. We could not locate an "off-the-shelf" automated training system that suited our needs.  We developed a time- and labor-saving automated system to train animals to hold still for extended periods of time. The system uses a personal computer and modest external hardware to provide stimulus cues, monitor movement using commercial video surveillance components, and dispense rewards. A custom computer program automatically increases the motionless duration required for rewards based on performance during the training session but allows changes during sessions. This system was used to train cynomolgus monkeys (Macaca fascicularis) for awake neuroimaging studies using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). The automated system saved the trainer substantial time, presented stimuli and rewards in a highly consistent manner, and automatically documented training sessions. We have limited data to prove the training system's success, drawn from the automated records during training sessions, but we believe others may find it useful. The system can be adapted to a range of behavioral training/recording activities for research or commercial applications, and the software is freely available for non-commercial use. PMID:26798573
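
    The adaptive reward schedule described in the two records above, lengthening the required motionless period as performance improves, can be sketched as follows; the update rule, step sizes, and limits are illustrative assumptions, not the published implementation:

        def update_hold_requirement(hold_s, rewarded, step_up=0.25, step_down=0.5,
                                    min_hold=0.5, max_hold=30.0):
            """Lengthen the required motionless duration after a rewarded trial,
            shorten it after a failed trial (illustrative rule only)."""
            hold_s = hold_s + step_up if rewarded else hold_s - step_down
            return min(max(hold_s, min_hold), max_hold)

        # Example session loop (movement detection and reward dispensing stubbed out).
        hold = 1.0
        for trial_rewarded in [True, True, False, True]:
            hold = update_hold_requirement(hold, trial_rewarded)
            print(f"next required hold: {hold:.2f} s")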

  4. Cholangiocarcinoma--an automated preliminary detection system using MLP.

    PubMed

    Logeswaran, Rajasvaran

    2009-12-01

    Cholangiocarcinoma, cancer of the bile ducts, is often diagnosed via magnetic resonance cholangiopancreatography (MRCP). Due to low resolution, noise, and the difficulty of actually seeing the tumor in the images, especially when examining only a single image, there has been very little development of automated systems for cholangiocarcinoma diagnosis. This paper presents a computer-aided diagnosis (CAD) system for the automated preliminary detection of the tumor using a single MRCP image. The multi-stage system employs algorithms and techniques that correspond to the radiological diagnosis characteristics employed by doctors. A popular artificial neural network, the multi-layer perceptron (MLP), is used for decision making to differentiate images with cholangiocarcinoma from those without. The test accuracy achieved was 94% when differentiating only healthy and tumor images, and 88% in a robust multi-disease test where the system had to identify the tumor images from a large set of images containing common biliary diseases. PMID:20052894
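
    The decision stage described above uses a multi-layer perceptron; a minimal sketch with scikit-learn, using placeholder feature vectors rather than the paper's MRCP-derived features:

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        # X: per-image feature vectors from the multi-stage pipeline (hypothetical
        # placeholder data); y: 1 = cholangiocarcinoma, 0 = no tumor.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))
        y = rng.integers(0, 2, size=200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))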

  5. Detection of carryover in automated milk sampling equipment.

    PubMed

    Løvendahl, P; Bjerring, M A

    2006-09-01

    Equipment for sampling milk in automated milking systems may cause carryover problems if residues from one sample remain and are mixed with the subsequent sample. The degree of carryover can be estimated statistically by linear regression models. This study applied various regression analyses to several real and simulated data sets. The statistical power for detecting carryover milk improved considerably when information about cow identity was included and a mixed model was applied. Carryover may affect variation between animals, including genetic variation, and thereby have an impact on management decisions and diagnostic tools based on the milk content of somatic cells. An extended procedure is needed for approval of sampling equipment for automated milking with acceptable latitudes of carryover, and this could include the regression approach taken in this study. PMID:16899700
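
    The mixed-model regression approach can be sketched as below; the synthetic data, column names, and 2% carryover rate are assumptions for illustration only:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic example: 50 cows, 10 samples each, with 2% carryover from the
        # previous sample through the shared sampling line.
        rng = np.random.default_rng(0)
        cow_id = np.repeat(np.arange(50), 10)
        cow_level = np.repeat(rng.normal(200, 60, 50), 10)   # true per-cow level
        true_value = cow_level + rng.normal(0, 20, 500)
        previous = np.roll(true_value, 1)
        measured = true_value + 0.02 * previous              # carryover contamination

        df = pd.DataFrame({"cow_id": cow_id, "scc_measured": measured,
                           "scc_previous": previous})

        # Cow identity as a random effect; the slope on the previous sample
        # estimates the degree of carryover.
        fit = smf.mixedlm("scc_measured ~ scc_previous", df, groups=df["cow_id"]).fit()
        print(fit.params["scc_previous"])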

  6. An Automated Directed Spectral Search Methodology for Small Target Detection

    NASA Astrophysics Data System (ADS)

    Grossman, Stanley I.

    Much of the current efforts in remote sensing tackle macro-level problems such as determining the extent of wheat in a field, the general health of vegetation or the extent of mineral deposits in an area. However, for many of the remaining remote sensing challenges being studied currently, such as border protection, drug smuggling, treaty verification, and the war on terror, most targets are very small in nature - a vehicle or even a person. While in typical macro-level problems the objective vegetation is in the scene, for small target detection problems it is not usually known if the desired small target even exists in the scene, never mind finding it in abundance. The ability to find specific small targets, such as vehicles, typifies this problem. Complicating the analyst's life, the growing number of available sensors is generating mountains of imagery outstripping the analysts' ability to visually peruse them. This work presents the important factors influencing spectral exploitation using multispectral data and suggests a different approach to small target detection. The methodology of directed search is presented, including the use of scene-modeled spectral libraries, various search algorithms, and traditional statistical and ROC curve analysis. The work suggests a new metric to calibrate analysis labeled the analytic sweet spot as well as an estimation method for identifying the sweet spot threshold for an image. It also suggests a new visualization aid for highlighting the target in its entirety called nearest neighbor inflation (NNI). It brings these all together to propose that these additions to the target detection arena allow for the construction of a fully automated target detection scheme. This dissertation next details experiments to support the hypothesis that the optimum detection threshold is the analytic sweet spot and that the estimation method adequately predicts it. Experimental results and analysis are presented for the proposed directed

  7. The Development of Change Detection

    ERIC Educational Resources Information Center

    Shore, David I.; Burack, Jacob A.; Miller, Danny; Joseph, Shari; Enns, James T.

    2006-01-01

    Changes to a scene often go unnoticed if the objects of the change are unattended, making change detection an index of where attention is focused during scene perception. We measured change detection in school-age children and young adults by repeatedly alternating two versions of an image. To provide an age-fair assessment we used a bimanual…

  8. Eclipsing binaries in the Gaia era: automated detection performance

    NASA Astrophysics Data System (ADS)

    Holl, Berry; Mowlavi, Nami; Lecoeur-Taïbi, Isabelle; Geneva Gaia CU7 Team members

    2014-09-01

    Binary systems can have periods from a fraction of a day to several years and exist in a large range of possible configurations at various evolutionary stages. About 2% of them are oriented such that eclipses can be observed. Such observations provide unique opportunities for the determination of their orbital and stellar parameters. Large-scale multi-epoch photometric surveys produce large sets of eclipsing binaries that allow for statistical studies of binary systems. In this respect the ESA Gaia mission, launched in December 2013, is expected to deliver an unprecedented sample of millions of eclipsing binaries. Their detection from Gaia photometry and estimation of their orbital periods are essential for their subclassification and orbital and stellar parameter determination. For a subset of these eclipsing systems, Gaia radial velocities and astrometric orbital measurements will further complement the Gaia light curves. A key challenge of the detection and period determination of the expected millions of Gaia eclipsing binaries is the automation of the procedure. Such an automated pipeline is being developed within the Gaia Data Processing and Analysis Consortium, in the framework of automated detection and identification of various types of photometric variable objects. In this poster we discuss the performance of this pipeline on eclipsing binaries using simulated Gaia data and the existing Hipparcos data. We show that we can detect a wide range of binary systems and very often determine their orbital periods from photometry alone, even though the data sampling is relatively sparse. The results can further be improved for those objects for which spectroscopic and/or astrometric orbital measurements will also be available from Gaia.
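
    The Gaia pipeline itself is not described in detail here; as a generic illustration of recovering a period from sparse, irregularly sampled photometry, a Lomb-Scargle periodogram can be used (real eclipsing-binary light curves usually call for eclipse-aware period-search methods):

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 1000, 70))      # sparse, irregular epochs (days)
        true_period = 2.47
        mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) \
              + rng.normal(0, 0.02, t.size)

        frequency, power = LombScargle(t, mag).autopower(maximum_frequency=5.0)
        best_period = 1.0 / frequency[np.argmax(power)]
        print(f"recovered period: {best_period:.3f} d")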

  9. Fully Automated Lipid Pool Detection Using Near Infrared Spectroscopy

    PubMed Central

    Wojakowski, Wojciech

    2016-01-01

    Background. Detecting and identifying vulnerable plaque, which is prone to rupture, is still a challenge for cardiologists. Such lipid core-containing plaque is still not identifiable by everyday angiography, which motivates the development of a new tool in which NIRS-IVUS can visualize plaque characterization in terms of its chemical and morphologic characteristics. The new tool can lead to the development of new methods of interpreting the newly obtained data. In this study, an algorithm for fully automated lipid pool detection on NIRS images is proposed. Method. The algorithm is divided into four stages: preprocessing (image enhancement), segmentation of artifacts, detection of lipid areas, and calculation of the Lipid Core Burden Index. Results. A total of 31 NIRS chemograms were analyzed by two methods. The metrics total LCBI, maximal LCBI in 4 mm blocks, and maximal LCBI in 2 mm blocks were calculated to compare the presented algorithm with a commercially available system. Both intraclass correlation (ICC) and Bland-Altman plots showed good agreement and correlation between the methods used. Conclusions. The proposed algorithm provides fully automated lipid pool detection on near-infrared spectroscopy images. It is a tool developed for offline data analysis, which could easily be augmented with newer functions and projects. PMID:27610191

  10. Fully Automated Lipid Pool Detection Using Near Infrared Spectroscopy.

    PubMed

    Pociask, Elżbieta; Jaworek-Korjakowska, Joanna; Malinowski, Krzysztof Piotr; Roleder, Tomasz; Wojakowski, Wojciech

    2016-01-01

    Background. Detecting and identifying vulnerable plaque, which is prone to rupture, is still a challenge for cardiologists. Such lipid core-containing plaque is still not identifiable by everyday angiography, which motivates the development of a new tool in which NIRS-IVUS can visualize plaque characterization in terms of its chemical and morphologic characteristics. The new tool can lead to the development of new methods of interpreting the newly obtained data. In this study, an algorithm for fully automated lipid pool detection on NIRS images is proposed. Method. The algorithm is divided into four stages: preprocessing (image enhancement), segmentation of artifacts, detection of lipid areas, and calculation of the Lipid Core Burden Index. Results. A total of 31 NIRS chemograms were analyzed by two methods. The metrics total LCBI, maximal LCBI in 4 mm blocks, and maximal LCBI in 2 mm blocks were calculated to compare the presented algorithm with a commercially available system. Both intraclass correlation (ICC) and Bland-Altman plots showed good agreement and correlation between the methods used. Conclusions. The proposed algorithm provides fully automated lipid pool detection on near-infrared spectroscopy images. It is a tool developed for offline data analysis, which could easily be augmented with newer functions and projects. PMID:27610191
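
    The Lipid Core Burden Index is commonly defined as the number of lipid pixels per 1000 valid chemogram pixels; a sketch of total LCBI and the maximal LCBI in a sliding axial block (the block length in pixels and the synthetic masks are assumptions):

        import numpy as np

        def lcbi(lipid_mask, valid_mask):
            """Lipid Core Burden Index: lipid pixels per 1000 valid chemogram pixels."""
            valid = valid_mask.sum()
            return 1000.0 * lipid_mask[valid_mask].sum() / valid if valid else 0.0

        def max_block_lcbi(lipid_mask, valid_mask, block_px):
            """Maximal LCBI over a sliding window along the pullback (axial) axis."""
            n_ax = lipid_mask.shape[0]
            scores = [lcbi(lipid_mask[i:i + block_px], valid_mask[i:i + block_px])
                      for i in range(0, max(n_ax - block_px + 1, 1))]
            return max(scores)

        # chemogram masks: rows = axial positions along the pullback, cols = angular bins
        rng = np.random.default_rng(0)
        valid = rng.random((500, 64)) > 0.05
        lipid = (rng.random((500, 64)) > 0.9) & valid
        print(lcbi(lipid, valid), max_block_lcbi(lipid, valid, block_px=80))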

  11. An Automated Visual Event Detection System for Cabled Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Mariette, J.

    2007-12-01

    The permanent presence of underwater cameras on oceanic cabled observatories, such as the Victoria Experimental Network Under the Sea (VENUS) and Eye-In-The-Sea (EITS) on Monterey Accelerated Research System (MARS), will generate valuable data that can move forward the boundaries of understanding the underwater world. However, sightings of underwater animal activities are rare, resulting in the recording of many hours of video with relatively few events of interest. The burden of video management and analysis often requires reducing the amount of video recorded and later analyzed. Sometimes enough human resources do not exist to analyze the video; the strains on human attention needed to analyze video demand an automated way to assist in video analysis. Towards this end, an Automated Visual Event Detection System (AVED) is in development at the Monterey Bay Aquarium Research Institute (MBARI) to address the problem of analyzing cabled observatory video. Here we describe the overall design of the system to process video data and enable science users to analyze the results. We present our results analyzing video from the VENUS observatory and test data from EITS deployments. This automated system for detecting visual events includes a collection of custom and open source software that can be run three ways: through a Web Service, through a Condor managed pool of AVED enabled compute servers, or locally on a single computer. The collection of software also includes a graphical user interface to preview or edit detected results and to setup processing options. To optimize the compute-intensive AVED algorithms, a parallel program has been implemented for high-data rate applications like the EITS instrument on MARS.

  12. Automated Detection of Malarial Retinopathy-Associated Retinal Hemorrhages

    PubMed Central

    Joshi, Vinayak S.; Maude, Richard J.; Reinhardt, Joseph M.; Tang, Li; Garvin, Mona K.; Sayeed, Abdullah Abu; Ghose, Aniruddha; Hassan, Mahtab Uddin; Abràmoff, Michael D.

    2012-01-01

    Purpose. To develop an automated method for the detection of retinal hemorrhages on color fundus images to characterize malarial retinopathy, which may help in the assessment of patients with cerebral malaria. Methods. A fundus image dataset from 14 patients (200 fundus images, with an average of 14 images per patient) previously diagnosed with malarial retinopathy was examined. We developed a pattern recognition–based algorithm, which extracted features from image watershed regions called splats (tobogganing). A reference standard was obtained by manual segmentation of hemorrhages, which assigned a label to each splat. The splat features with the associated splat label were used to train a linear k-nearest neighbor classifier that learnt the color properties of hemorrhages and identified the splats belonging to hemorrhages in a test dataset. In a crossover design experiment, data from 12 patients were used for training and data from two patients were used for testing, with 14 different permutations; and the derived sensitivity and specificity values were averaged. Results. The experiment resulted in hemorrhage detection sensitivities in terms of splats as 80.83%, and in terms of lesions as 84.84%. The splat-based specificity was 96.67%, whereas for the lesion-based analysis, an average of three false positives was obtained per image. The area under the receiver operating characteristic curve was reported as 0.9148 for splat-based, and as 0.9030 for lesion-based analysis. Conclusions. The method provides an automated means of detecting retinal hemorrhages associated with malarial retinopathy. The results matched well with the reference standard. With further development, this technique may provide automated assistance for screening and quantification of malarial retinopathy. PMID:22915035

  13. Automated sleep scoring and sleep apnea detection in children

    NASA Astrophysics Data System (ADS)

    Baraglia, David P.; Berryman, Matthew J.; Coussens, Scott W.; Pamula, Yvonne; Kennedy, Declan; Martin, A. James; Abbott, Derek

    2005-12-01

    This paper investigates the automated detection of a patient's breathing rate and heart rate from their skin conductivity, as well as sleep stage scoring and breathing event detection from their EEG. The software developed for these tasks is tested on data sets obtained from the sleep disorders unit at the Adelaide Women's and Children's Hospital. The sleep scoring and breathing event detection tasks used neural networks to achieve signal classification. The Fourier transform and the Higuchi fractal dimension were used to extract features for input to the neural network. The filtered skin conductivity appeared visually to bear a similarity to the breathing and heart rate signals, but a more detailed evaluation showed the relation was not consistent. Sleep stage classification was achieved with an accuracy of around 65%, with some stages being accurately scored and others poorly scored. The two breathing events, hypopnea and apnea, were scored with varying degrees of accuracy, with the highest scores being around 75% and 30%, respectively.
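
    The Higuchi fractal dimension used as an EEG feature can be computed as follows; this is a standard textbook implementation, not the authors' code:

        import numpy as np

        def higuchi_fd(x, k_max=10):
            """Higuchi fractal dimension of a 1-D signal."""
            x = np.asarray(x, dtype=float)
            n = x.size
            lengths = []
            for k in range(1, k_max + 1):
                lk = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if idx.size < 2:
                        continue
                    # normalised curve length of the subsequence starting at m
                    lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((idx.size - 1) * k)
                    lk.append(lm)
                lengths.append(np.mean(lk) / k)
            # slope of log(L(k)) against log(1/k) estimates the fractal dimension
            k_vals = np.arange(1, k_max + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
            return slope

        print(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))))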

  14. Automated Vulnerability Detection for Compiled Smart Grid Software

    SciTech Connect

    Prowell, Stacy J; Pleszkoch, Mark G; Sayre, Kirk D; Linger, Richard C

    2012-01-01

    While testing performed with proper experimental controls can provide scientifically quantifiable evidence that software does not contain unintentional vulnerabilities (bugs), it is insufficient to show that intentional vulnerabilities do not exist, and impractical to certify devices for the expected long lifetimes of use. For both of these needs, rigorous analysis of the software itself is essential. Automated software behavior computation applies rigorous static software analysis methods based on function extraction (FX) to compiled software to detect vulnerabilities, intentional or unintentional, and to verify critical functionality. This analysis is based on the compiled firmware, takes into account machine precision, and does not rely on heuristics or approximations early in the analysis.

  15. Automated Detection and Annotation of Disturbance in Eastern Forests

    NASA Astrophysics Data System (ADS)

    Hughes, M. J.; Chen, G.; Hayes, D. J.

    2013-12-01

    Forest disturbances represent an important component of the terrestrial carbon budget. To generate spatially explicit estimates of disturbance and regrowth, we developed an automated system to detect and characterize forest change in the eastern United States at 30 m resolution from a 28-year Landsat Thematic Mapper time series (1984-2011). Forest changes are labeled as 'disturbances' or 'regrowth', assigned to a severity class, and attributed to a disturbance type: either fire, insects, harvest, or 'unknown'. The system generates cloud-free summertime composite images for each year from multiple summer scenes and calculates vegetation indices from these composites. Patches of similar terrain on the landscape are identified by segmenting the Normalized Burn Ratio image. The spatial variance within each patch, which has been found to be a good indicator of diffuse disturbances such as forest insect damage, is then calculated for each index, creating an additional set of indices. To identify vegetation change and quantify its degree, the derivative through time is calculated for each index using total variation regularization to account for noise and create a piecewise-linear trend. These indices and their derivatives detect areas of disturbance and regrowth and are also used as inputs into a neural network that classifies the disturbance type/agent. Disturbance and disease information from the US Forest Service Aerial Detection Surveys (ADS) geodatabase and disturbed plots from the US Forest Service Forest Inventory and Analysis (FIA) database provided training data for the neural network. Although there have been recent advances in discriminating between disturbance types in boreal forests, due to the larger number of forest species and cosmopolitan nature of overstory communities in eastern forests, separation remains difficult. The ADS database, derived from sketch maps and later digitized, commonly designates a single large area encompassing many smaller affected
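
    A simplified sketch of the per-pixel time-series step: compute the Normalized Burn Ratio for each annual composite and flag sharp drops as candidate disturbances. The published system fits total-variation-regularized piecewise-linear trends rather than the simple differencing used here, and the threshold is an assumption:

        import numpy as np

        def nbr(nir, swir):
            """Normalized Burn Ratio from NIR and SWIR reflectance composites."""
            return (nir - swir) / (nir + swir + 1e-9)

        def flag_disturbance(nbr_series, drop_threshold=0.15):
            """Flag years whose NBR falls sharply relative to the previous year."""
            d = np.diff(nbr_series)
            return np.where(d < -drop_threshold)[0] + 1   # indices of disturbance years

        # nbr_series: one pixel's annual summertime NBR, 1984-2011 (synthetic example)
        years = np.arange(1984, 2012)
        series = 0.6 + 0.02 * np.random.default_rng(0).normal(size=years.size)
        series[15:] -= 0.3                                # harvest-like drop in 1999
        print(years[flag_disturbance(series)])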

  16. Automated Detection of Oscillating Regions in the Solar Atmosphere

    NASA Technical Reports Server (NTRS)

    Ireland, J.; Marsh, M. S.; Kucera, T. A.; Young, C. A.

    2010-01-01

    Recently observed oscillations in the solar atmosphere have been interpreted and modeled as magnetohydrodynamic wave modes. This has allowed for the estimation of parameters that are otherwise hard to derive, such as the coronal magnetic-field strength. This work crucially relies on the initial detection of the oscillations, which is commonly done manually. The volume of Solar Dynamics Observatory (SDO) data will make manual detection inefficient for detecting all of the oscillating regions. An algorithm is presented that automates the detection of areas of the solar atmosphere that support spatially extended oscillations. The algorithm identifies areas in the solar atmosphere whose oscillation content is described by a single, dominant oscillation within a user-defined frequency range. The method is based on Bayesian spectral analysis of time series and image filtering. A Bayesian approach sidesteps the need for an a-priori noise estimate to calculate rejection criteria for the observed signal, and it also provides estimates of oscillation frequency, amplitude, and noise, and the error in all of these quantities, in a self-consistent way. The algorithm also introduces the notion of quality measures to those regions for which a positive detection is claimed, allowing for simple post-detection discrimination by the user. The algorithm is demonstrated on two Transition Region and Coronal Explorer (TRACE) datasets, and comments regarding its suitability for oscillation detection in SDO are made.

  17. Observer performance in semi-automated microbleed detection

    NASA Astrophysics Data System (ADS)

    Kuijf, Hugo J.; Brundel, Manon; de Bresser, Jeroen; Viergever, Max A.; Biessels, Geert Jan; Geerlings, Mirjam I.; Vincken, Koen L.

    2013-03-01

    Cerebral microbleeds are small bleedings in the human brain, detectable with MRI. Microbleeds are associated with vascular disease and dementia. The number of studies involving microbleed detection is increasing rapidly. Visual rating is the current standard for detection, but is a time-consuming process, especially at high-resolution 7.0 T MR images, has limited reproducibility and is highly observer dependent. Recently, multiple techniques have been published for the semi-automated detection of microbleeds, attempting to overcome these problems. In the present study, a 7.0 T dual-echo gradient echo MR image was acquired in 18 participants with microbleeds from the SMART study. Two experienced observers identified 54 microbleeds in these participants, using a validated visual rating scale. The radial symmetry transform (RST) can be used for semi-automated detection of microbleeds in 7.0 T MR images. In the present study, the results of the RST were assessed by two observers and 47 microbleeds were identified: 35 true positives and 12 extra positives (microbleeds that were missed during visual rating). Hence, after scoring, a total of 66 microbleeds could be identified in the 18 participants. The use of the RST increased the average sensitivity of observers from 59% to 69%. More importantly, inter-observer agreement (ICC and Dice's coefficient) increased from 0.85 and 0.64 to 0.98 and 0.96, respectively. Furthermore, the required rating time was reduced from 30 to 2 minutes per participant. By fine-tuning the RST, sensitivities up to 90% can be achieved, at the cost of extra false positives.
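
    Dice's coefficient, used above to quantify inter-observer agreement, can be computed directly from the two raters' detection sets (the identifiers below are hypothetical voxel coordinates):

        def dice_coefficient(set_a, set_b):
            """Dice agreement between two raters' detections (hashable identifiers,
            e.g. matched lesion IDs or voxel coordinates)."""
            a, b = set(set_a), set(set_b)
            if not a and not b:
                return 1.0
            return 2.0 * len(a & b) / (len(a) + len(b))

        rater1 = {(12, 40, 8), (55, 21, 10), (80, 80, 12)}
        rater2 = {(12, 40, 8), (80, 80, 12), (33, 90, 5)}
        print(dice_coefficient(rater1, rater2))   # 0.666...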

  18. Automated microaneurysm detection in diabetic retinopathy using curvelet transform.

    PubMed

    Ali Shah, Syed Ayaz; Laude, Augustinus; Faye, Ibrahima; Tang, Tong Boon

    2016-10-01

    Microaneurysms (MAs) are known to be the early signs of diabetic retinopathy (DR). An automated MA detection system based on curvelet transform is proposed for color fundus image analysis. Candidates of MA were extracted in two parallel steps. In step one, blood vessels were removed from the preprocessed green band image and preliminary MA candidates were selected by a local thresholding technique. In step two, based on statistical features, the image background was estimated. The results from the two steps allowed us to identify preliminary MA candidates which were also present in the image foreground. A collection set of features was fed to a rule-based classifier to divide the candidates into MAs and non-MAs. The proposed system was tested with the Retinopathy Online Challenge database. The automated system detected 162 MAs out of 336, thus achieving a sensitivity of 48.21% with 65 false positives per image. Counting MAs is a means of measuring the progression of DR. Hence, the proposed system may be deployed to monitor the progression of DR at an early stage in population studies. PMID:26868326

  19. Development of automated detection of radiology reports citing adrenal findings

    NASA Astrophysics Data System (ADS)

    Zopf, Jason; Langer, Jessica; Boonn, William; Kim, Woojin; Zafar, Hanna

    2011-03-01

    Indeterminate incidental findings pose a challenge to both the radiologist and the ordering physician, as their imaging appearance is potentially harmful but their clinical significance and optimal management are unknown. We seek to determine if it is possible to automate detection of adrenal nodules, an indeterminate incidental finding, on imaging examinations at our institution. Using PRESTO (Pathology-Radiology Enterprise Search tool), a newly developed search engine at our institution that mines dictated radiology reports, we searched for phrases used by attendings to describe incidental adrenal findings. Using these phrases as a guide, we designed a query that can be used with the PRESTO index. The results were refined using a modified version of NegEx to eliminate query terms that have been negated within the report text. In order to validate these findings we used an online random date generator to select two random weeks. We queried our RIS database for all reports created on those dates and manually reviewed each report to check for adrenal incidental findings. This survey produced a ground-truth dataset of reports citing adrenal incidental findings against which to compare query performance. We further reviewed the false positives and negatives identified by our validation study, in an attempt to improve the query's performance. This algorithm is an important step towards automating the detection of incidental adrenal nodules on cross-sectional imaging at our institution. Subsequently, this query can be combined with electronic medical record data searches to determine the clinical significance of these findings through resultant follow-up.
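
    A NegEx-style negation filter of the kind mentioned above might look like the following; the trigger list, the term pattern, and the word window are illustrative assumptions, not the modified NegEx used in the study:

        import re

        NEGATION_TRIGGERS = [r"\bno\b", r"\bwithout\b", r"\bno evidence of\b",
                             r"\bnegative for\b", r"\bunremarkable\b"]
        TERM = re.compile(r"adrenal (nodule|mass|lesion)", re.IGNORECASE)

        def is_negated(sentence, window=6):
            """True if an adrenal term is preceded within `window` words by a
            negation trigger (simplified NegEx-style rule)."""
            m = TERM.search(sentence)
            if not m:
                return False
            preceding = " ".join(sentence[:m.start()].split()[-window:])
            return any(re.search(t, preceding, re.IGNORECASE)
                       for t in NEGATION_TRIGGERS)

        print(is_negated("There is no evidence of an adrenal nodule."))    # True
        print(is_negated("A 2 cm adrenal nodule is incidentally noted."))  # False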

  20. Automated detection of jet contrails using the AVHRR split window

    NASA Technical Reports Server (NTRS)

    Engelstad, M.; Sengupta, S. K.; Lee, T.; Welch, R. M.

    1992-01-01

    This paper investigates the automated detection of jet contrails using data from the Advanced Very High Resolution Radiometer. A preliminary algorithm subtracts the 11.8-micron image from the 10.8-micron image, creating a difference image on which contrails are enhanced. Then a three-stage algorithm searches the difference image for the nearly-straight line segments which characterize contrails. First, the algorithm searches for elevated, linear patterns called 'ridges'. Second, it applies a Hough transform to the detected ridges to locate nearly-straight lines. Third, the algorithm determines which of the nearly-straight lines are likely to be contrails. The paper applies this technique to several test scenes.
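
    The split-window differencing and straight-line search can be sketched with generic image-processing tools; the simple threshold standing in for ridge detection and the synthetic scene are assumptions, and the paper's contrail tests are more elaborate:

        import numpy as np
        from skimage.transform import hough_line, hough_line_peaks

        def contrail_candidates(t10_8, t11_8, ridge_threshold=1.0):
            """Split-window difference followed by a Hough search for straight lines.
            t10_8, t11_8: brightness-temperature images (K) for the two channels."""
            diff = t10_8 - t11_8                  # contrails enhanced in the difference
            ridges = diff > ridge_threshold       # crude stand-in for ridge detection
            h, theta, dist = hough_line(ridges)
            # peaks correspond to nearly straight, contrail-like alignments
            return list(zip(*hough_line_peaks(h, theta, dist)))

        # Synthetic example: a warm linear feature on a uniform background.
        img = np.zeros((128, 128))
        rows = np.arange(128)
        img[rows, rows // 2 + 20] = 2.0
        print(len(contrail_candidates(img + 270.0, np.full_like(img, 270.0))))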

  1. Automated detection of optical counterparts to GRBs with RAPTOR

    SciTech Connect

    Wozniak, P. R.; Vestrand, W. T.; Evans, S.; White, R.; Wren, J.

    2006-05-19

    The RAPTOR system (RAPid Telescopes for Optical Response) is an array of several distributed robotic telescopes that automatically respond to GCN localization alerts. Raptor-S is a 0.4-m telescope with a 24 arcmin field of view employing a 1k x 1k Marconi CCD detector, and has already detected prompt optical emission from several GRBs within the first minute of the explosion. We present a real-time data analysis and alert system for automated identification of optical transients in Raptor-S GRB response data down to the sensitivity limit of ~19 mag. Our custom data processing pipeline is designed to minimize the time required to reliably identify transients and extract actionable information. The system utilizes a networked PostgreSQL database server for catalog access and distributes email alerts with successful detections.

  2. Automated detection of objects in Sidescan sonar data

    NASA Astrophysics Data System (ADS)

    Irvine, John M.; Israel, Steven A.; Bergeron, Stuart

    2007-04-01

    Detection and mapping of subsurface obstacles is critical for safe navigation of littoral regions. Sidescan sonar data offers a rich source of information for developing such maps. Typically, data are collected at two frequencies using a sensor mounted on a towfish. The major features of interest depend on the specific mission, but often include: objects on the bottom that could pose hazards for navigation, linear features such as cables or pipelines, and the bottom type, e.g., clay, sand, rock, etc. A number of phenomena can complicate the analysis of the sonar data: surface return, vessel wakes, and fluctuations in the position and orientation of the towfish. Developing accurate maps of navigation hazards based on sidescan sonar data is generally labor intensive. We propose an automated approach, which employs commercial software tools, to detect these objects. This method offers the prospect of substantially reducing production time for maritime geospatial data products.

  3. Reasoning about change and exceptions in automated process planning

    SciTech Connect

    Brooks, S.L.

    1989-08-01

    Automated process planning is generally defined as the automatic planning of the manufacturing procedures for producing a part from a CAD-based product definition. The knowledge in this domain is largely heuristic and has been a good application of expert systems for developing an automated planner. We are currently developing an automated process planning system, XCUT, using the HERB rule-based expert system shell, which employs hierarchical abstraction and object-oriented programming. Two areas where we have found the AI techniques implemented in HERB lacking for our domain are reasoning about change and reasoning about exceptions. Reasoning about change is the frame problem: after applying an action, the planner must determine which facts are still true. Reasoning about exceptions is determining when general heuristics can or cannot be used. In AI terms, reasoning about exceptions is default reasoning, or, in ATMS terms, hypothetical reasoning. This paper explores both the need for and the ways we plan to augment the XCUT system for reasoning about change and exceptions. 19 refs.

  4. Automated calibration methods for robotic multisensor landmine detection

    NASA Astrophysics Data System (ADS)

    Keranen, Joe G.; Miller, Jonathan; Schultz, Gregory; Topolosky, Zeke

    2007-04-01

    Both force protection and humanitarian demining missions require efficient and reliable detection and discrimination of buried anti-tank and anti-personnel landmines. Widely varying surface and subsurface conditions, mine types and placement, as well as environmental regimes challenge the robustness of the automatic target recognition process. In this paper we present applications created for the U.S. Army Nemesis detection platform. Nemesis is an unmanned rubber-tracked vehicle-based system designed to eradicate a wide variety of anti-tank and anti-personnel landmines for humanitarian demining missions. The detection system integrates advanced ground penetrating synthetic aperture radar (GPSAR) and electromagnetic induction (EMI) arrays, highly accurate global and local positioning, and on-board target detection/classification software on the front loader of a semi-autonomous UGV. An automated procedure is developed to estimate the soil's dielectric constant using surface reflections from the ground penetrating radar. The results have implications not only for calibration of system data acquisition parameters, but also for user awareness and tuning of automatic target recognition detection and discrimination algorithms.
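
    A common surface-reflection method estimates the soil's relative permittivity by comparing the ground-return amplitude with that of a perfect reflector such as a metal plate; a hedged sketch of that relation (not necessarily the exact Nemesis calibration procedure):

        def dielectric_from_surface_reflection(a_ground, a_metal_plate):
            """Estimate relative permittivity with the GPR surface-reflection method.
            a_ground: peak amplitude of the air-ground reflection;
            a_metal_plate: peak amplitude over a metal plate (total reflection).
            Assumes normal incidence and low-loss soil."""
            r = abs(a_ground) / abs(a_metal_plate)   # reflection-coefficient magnitude
            return ((1.0 + r) / (1.0 - r)) ** 2

        print(dielectric_from_surface_reflection(0.33, 1.0))   # ~3.9, dry sandy soil range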

  5. Fast-time Simulation of an Automated Conflict Detection and Resolution Concept

    NASA Technical Reports Server (NTRS)

    Windhorst, Robert; Erzberger, Heinz

    2006-01-01

    This paper investigates the effect on the National Airspace System of reducing air traffic controller workload by automating conflict detection and resolution. The Airspace Concept Evaluation System is used to perform simulations of the Cleveland Center with conventional and with automated conflict detection and resolution concepts. Results show that the automated conflict detection and resolution concept significantly decreases the growth of delay as traffic demand is increased in en-route airspace.

  6. Infrared Thermal Imaging for Automated Detection of Diabetic Foot Complications

    PubMed Central

    van Netten, Jaap J.; van Baal, Jeff G.; Liu, Chanjuan; van der Heijden, Ferdi; Bus, Sicco A.

    2013-01-01

    Background Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the applicability of high-resolution infrared thermal imaging for noninvasive automated detection of signs of diabetic foot disease. Methods The plantar foot surfaces of 15 diabetes patients were imaged with an infrared camera (resolution, 1.2 mm/pixel): 5 patients had no visible signs of foot complications, 5 patients had local complications (e.g., abundant callus or neuropathic ulcer), and 5 patients had diffuse complications (e.g., Charcot foot, infected ulcer, or critical ischemia). Foot temperature was calculated as mean temperature across pixels for the whole foot and for specified regions of interest (ROIs). Results No differences in mean temperature >1.5 °C between the ipsilateral and the contralateral foot were found in patients without complications. In patients with local complications, mean temperatures of the ipsilateral and the contralateral foot were similar, but temperature at the ROI was >2 °C higher compared with the corresponding region in the contralateral foot and to the mean of the whole ipsilateral foot. In patients with diffuse complications, mean temperature differences of >3 °C between ipsilateral and contralateral foot were found. Conclusions With an algorithm based on parameters that can be captured and analyzed with a high-resolution infrared camera and a computer, it is possible to detect signs of diabetic foot disease and to discriminate between no, local, or diffuse diabetic foot complications. As such, an intelligent telemedicine monitoring system for noninvasive automated detection of signs of diabetic foot disease is one step closer. Future studies are essential to confirm and extend these promising early findings. PMID:24124937
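
    The classification logic implied by the reported thresholds can be written down directly; the threshold values come from the abstract, while the function interface and the fallback cases are assumptions:

        def classify_foot(mean_ipsi, mean_contra, roi_temp=None, roi_contra=None):
            """Rule-of-thumb classification of diabetic foot thermal asymmetry (degC)."""
            whole_foot_diff = abs(mean_ipsi - mean_contra)
            if whole_foot_diff > 3.0:
                return "diffuse complication"
            if roi_temp is not None and roi_contra is not None:
                if roi_temp - roi_contra > 2.0 or roi_temp - mean_ipsi > 2.0:
                    return "local complication"
            if whole_foot_diff <= 1.5:
                return "no complication"
            return "indeterminate"

        print(classify_foot(30.1, 29.8, roi_temp=33.2, roi_contra=30.0))  # local complication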

  7. Automated transient detection in the STEREO Heliospheric Imagers.

    NASA Astrophysics Data System (ADS)

    Barnard, Luke; Scott, Chris; Owens, Mat; Lockwood, Mike; Tucker-Hood, Kim; Davies, Jackie

    2014-05-01

    Since the launch of the twin STEREO satellites, the heliospheric imagers (HI) have been used, with good results, in tracking transients of solar origin, such as Coronal Mass Ejections (CMEs), far out into the heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks in the J-map. From this the time-elongation profile of a solar transient can be manually identified. This is a time-consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. Therefore, it is desirable to develop an automated algorithm for the detection and tracking of the transient features observed in HI data. This is to some extent previously covered ground, as similar problems have been encountered in the analysis of coronagraph data and have led to the development of products such as CACTus. We present the results of our investigation into the automated detection of solar transients observed in J-maps formed from HI data. We use edge and line detection methods to identify transients in the J-maps, and then use kinematic models of solar transient propagation (such as the fixed-phi and harmonic mean geometric models) to estimate the solar transient's properties, such as speed and propagation direction, from the time-elongation profile. The effectiveness of this process is assessed by comparison of our results with a set of manually identified CMEs, extracted and analysed by the Solar Storm Watch Project. Solar Storm Watch is a citizen science project in which solar transients are identified in J-maps formed from HI data and tracked multiple times by different users. This allows the calculation of a consensus time-elongation profile for each event, and therefore does not suffer from the potential subjectivity of an individual researcher tracking an
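
    For context, the fixed-phi geometric model mentioned above maps an elongation measurement to a radial distance for a point source travelling radially at a fixed angle from the observer-Sun line; a sketch assuming the standard fixed-phi relation and an observer at 1 AU:

        import numpy as np

        def fixed_phi_distance(elongation_deg, phi_deg, d_obs_au=1.0):
            """Radial distance (AU) of a transient under the fixed-phi approximation:
            a point source travelling radially at angle phi from the observer-Sun line."""
            eps = np.radians(elongation_deg)
            phi = np.radians(phi_deg)
            return d_obs_au * np.sin(eps) / np.sin(eps + phi)

        # elongation profile (deg) of a transient propagating 60 deg from that line
        print(fixed_phi_distance(np.array([10.0, 20.0, 30.0]), 60.0))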

  8. Detecting and Predicting Changes

    ERIC Educational Resources Information Center

    Brown, Scott D.; Steyvers, Mark

    2009-01-01

    When required to predict sequential events, such as random coin tosses or basketball free throws, people reliably use inappropriate strategies, such as inferring temporal structure when none is present. We investigate the ability of observers to predict sequential events in dynamically changing environments, where there is an opportunity to detect…

  9. Automated focusing in bright-field microscopy for tuberculosis detection

    PubMed Central

    OSIBOTE, O.A.; DENDERE, R.; KRISHNAN, S.; DOUGLAS, T.S.

    2010-01-01

    Summary Automated microscopy to detect Mycobacterium tuberculosis in sputum smear slides would enable laboratories in countries with a high tuberculosis burden to cope efficiently with large numbers of smears. Focusing is a core component of automated microscopy, and successful autofocusing depends on selection of an appropriate focus algorithm for a specific task. We examined autofocusing algorithms for bright-field microscopy of Ziehl–Neelsen stained sputum smears. Six focus measures, defined in the spatial domain, were examined with respect to accuracy, execution time, range, full width at half maximum of the peak and the presence of local maxima. Curve fitting around an estimate of the focal plane was found to produce good results and is therefore an acceptable strategy to reduce the number of images captured for focusing and the processing time. Vollath's F4 measure performed best for full z-stacks, with a mean difference of 0.27 μm between manually and automatically determined focal positions, whereas it is jointly ranked best with the Brenner gradient for curve fitting. PMID:20946382
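
    Two of the focus measures named above, Vollath's F4 and the Brenner gradient, have compact definitions; a sketch (standard formulations, not the authors' code):

        import numpy as np

        def vollath_f4(img):
            """Vollath's F4 autocorrelation focus measure (higher = sharper)."""
            img = np.asarray(img, dtype=float)
            return np.sum(img[:-1, :] * img[1:, :]) - np.sum(img[:-2, :] * img[2:, :])

        def brenner_gradient(img):
            """Brenner gradient focus measure (higher = sharper)."""
            img = np.asarray(img, dtype=float)
            return np.sum((img[2:, :] - img[:-2, :]) ** 2)

        # Pick the z-position whose image maximises the focus measure, then fit a
        # curve around that estimate to refine the focal plane.
        img = np.random.default_rng(0).random((256, 256))
        print(vollath_f4(img), brenner_gradient(img))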

  10. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
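
    Histogram intersection between a test leaf image and a healthy reference can be computed as below; the bin layout and synthetic data are assumptions for illustration:

        import numpy as np

        def histogram_intersection(h_test, h_healthy):
            """Similarity of two normalized color histograms (1 = identical)."""
            h_test = h_test / h_test.sum()
            h_healthy = h_healthy / h_healthy.sum()
            return np.minimum(h_test, h_healthy).sum()

        # Pixels poorly explained by the healthy-leaf histogram form the outlier
        # region that is then clustered in the described system.
        rng = np.random.default_rng(0)
        h_a = np.histogramdd(rng.random((5000, 3)), bins=8, range=((0, 1),) * 3)[0]
        h_b = np.histogramdd(rng.random((5000, 3)) * 0.8, bins=8, range=((0, 1),) * 3)[0]
        print(histogram_intersection(h_a, h_b))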

  11. Automated detection of open magnetic field regions in EUV images

    NASA Astrophysics Data System (ADS)

    Krista, Larisza Diana; Reinard, Alysha

    2016-05-01

    Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine if the dark region is unipolar – a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify them and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated – it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge – a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental

  12. Computer automated movement detection for the analysis of behavior

    PubMed Central

    Ramazani, Roseanna B.; Krishnan, Harish R.; Bergeson, Susan E.; Atkinson, Nigel S.

    2007-01-01

    Currently, measuring ethanol behaviors in flies depends on expensive image analysis software or time-intensive experimenter observation. We have designed an automated system for the collection and analysis of locomotor behavior data, using the IEEE 1394 acquisition program dvgrab, the image toolkit ImageMagick and the programming language Perl. In the proposed method, flies are placed in a clear container and a computer-controlled camera takes pictures at regular intervals. Digital subtraction removes the background and non-moving flies, leaving white pixels where movement has occurred. These pixels are tallied, giving a value that corresponds to the number of animals that have moved between images. Perl scripts automate these processes, allowing compatibility with high-throughput genetic screens. Four experiments demonstrate the utility of this method, the first showing heat-induced locomotor changes, the second showing tolerance to ethanol in a climbing assay, the third showing tolerance to ethanol by scoring the recovery of individual flies, and the fourth showing a mouse's preference for a novel object. Our lab will use this method to conduct a genetic screen for ethanol-induced hyperactivity and sedation; however, it could also be used to analyze the locomotor behavior of any organism. PMID:17335906
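
    The original system uses Perl and ImageMagick; equivalent frame-differencing logic, with an assumed grayscale format and change threshold, looks like this in Python:

        import numpy as np

        def movement_score(frame_prev, frame_curr, diff_threshold=25):
            """Count pixels that changed between consecutive grayscale frames;
            the tally tracks how much movement occurred between images."""
            diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
            return int((diff > diff_threshold).sum())

        # frames: grayscale images captured at regular intervals (synthetic example)
        rng = np.random.default_rng(0)
        f0 = rng.integers(0, 255, (240, 320), dtype=np.uint8)
        f1 = f0.copy()
        f1[100:110, 150:170] = 255          # a small region of movement
        print(movement_score(f0, f1))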

  13. Automated High-Throughput Fluorescence Lifetime Imaging Microscopy to Detect Protein-Protein Interactions.

    PubMed

    Guzmán, Camilo; Oetken-Lindholm, Christina; Abankwa, Daniel

    2016-04-01

    Fluorescence resonance energy transfer (FRET) is widely used to study conformational changes of macromolecules and protein-protein, protein-nucleic acid, and protein-small molecule interactions. FRET biosensors can serve as valuable secondary assays in drug discovery and for target validation in mammalian cells. Fluorescence lifetime imaging microscopy (FLIM) allows precise quantification of the FRET efficiency in intact cells, as FLIM is independent of fluorophore concentration, detection efficiency, and fluorescence intensity. We have developed an automated FLIM system using a commercial frequency domain FLIM attachment (Lambert Instruments) for wide-field imaging. Our automated FLIM system is capable of imaging and analyzing up to 50 different positions of a slide in less than 4 min, or the inner 60 wells of a 96-well plate in less than 20 min. Automation is achieved using a motorized stage and controller (Prior Scientific) coupled with a Zeiss Axio Observer body and full integration into the Lambert Instruments FLIM acquisition software. As an application example, we analyze the interaction of the oncoprotein Ras and its effector Raf after drug treatment. In conclusion, our automated FLIM imaging system requires only commercial components and may therefore allow for a broader use of this technique in chemogenomics projects. PMID:26384400
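
    FLIM quantifies FRET through the shortening of the donor lifetime; the standard apparent-efficiency relation (not the instrument software) is:

        def fret_efficiency(tau_da_ns, tau_d_ns):
            """Apparent FRET efficiency from donor lifetimes with (tau_da) and
            without (tau_d) the acceptor present."""
            return 1.0 - tau_da_ns / tau_d_ns

        print(fret_efficiency(tau_da_ns=1.8, tau_d_ns=2.4))   # 0.25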

  14. Automated detection of microaneurysms using robust blob descriptors

    NASA Astrophysics Data System (ADS)

    Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.

    2013-03-01

    Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and can be seen as round dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions which are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM which has been trained using ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods, as well as the promise of the proposed descriptors for localizing MAs in fundus images.

  15. Automated anomaly detection for Orbiter High Temperature Reusable Surface Insulation

    NASA Astrophysics Data System (ADS)

    Cooper, Eric G.; Jones, Sharon M.; Goode, Plesent W.; Vazquez, Sixto L.

    1992-11-01

    The description, analysis, and experimental results of a method for identifying possible defects on High Temperature Reusable Surface Insulation (HRSI) of the Orbiter Thermal Protection System (TPS) is presented. Currently, a visual postflight inspection of Orbiter TPS is conducted to detect and classify defects as part of the Orbiter maintenance flow. The objective of the method is to automate the detection of defects by identifying anomalies between preflight and postflight images of TPS components. The initial version is intended to detect and label gross (greater than 0.1 inches in the smallest dimension) anomalies on HRSI components for subsequent classification by a human inspector. The approach is a modified Golden Template technique where the preflight image of a tile serves as the template against which the postflight image of the tile is compared. Candidate anomalies are selected as a result of the comparison and processed to identify true anomalies. The processing methods are developed and discussed, and the results of testing on actual and simulated tile images are presented. Solutions to the problems of brightness and spatial normalization, timely execution, and minimization of false positives are also discussed.
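
    A minimal sketch of the Golden Template idea, assuming the preflight and postflight images are already spatially registered; the brightness normalization, z-threshold, and minimum blob size are illustrative assumptions:

        import numpy as np
        from scipy import ndimage

        def anomaly_mask(preflight, postflight, z_threshold=3.0, min_pixels=20):
            """Flag candidate anomalies as large differences between brightness-
            normalized preflight (template) and postflight tile images."""
            def normalize(img):
                img = img.astype(float)
                return (img - img.mean()) / (img.std() + 1e-9)

            diff = np.abs(normalize(postflight) - normalize(preflight))
            candidates = diff > z_threshold
            labels, n = ndimage.label(candidates)
            sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
            keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
            return keep

        pre = np.random.default_rng(0).normal(size=(200, 200))
        post = pre.copy()
        post[50:70, 80:100] += 5.0           # simulated gouge on the tile
        print(anomaly_mask(pre, post).sum())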

  16. Geostationary Fire Detection with the Wildfire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Brunner, J. C.; Prins, E. M.

    2010-12-01

    The Wild Fire Automated Biomass Burning Algorithm (WF_ABBA), developed at the Cooperative Institute for Meteorological Satellite Studies (CIMSS), has a long legacy of operational wildfire detection and characterization. In recent years, applications of geostationary fire detection and characterization data have been expanding. Fires are detected with a contextual algorithm and when the fires meet certain conditions the instantaneous fire size, temperature, and radiative power are calculated and provided in user products. The WF_ABBA has been applied to data from Geostationary Operational Environmental Satellite (GOES)-8 through 15, Meteosat-8/-9, and Multifunction Transport Satellite (MTSAT)-1R/-2. WF_ABBA is also being developed for the upcoming platforms like GOES-R Advanced Baseline Imager (ABI) and other geostationary satellites. Development of the WF_ABBA for GOES-R ABI has focused on adapting the legacy algorithm to the new satellite system, enhancing its capabilities to take advantage of the improvements available from ABI, and addressing user needs. By its nature as a subpixel feature, observation of fire is extraordinarily sensitive to the characteristics of the sensor and this has been a fundamental part of the GOES-R WF_ABBA development work.

  17. Automated Detection of Firearms and Knives in a CCTV Image

    PubMed Central

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims. PMID:26729128

  18. Automated analysis for detecting beams in laser wakefield simulations

    SciTech Connect

    Ushizima, Daniela M.; Rubel, Oliver; Prabhat, Mr.; Weber, Gunther H.; Bethel, E. Wes; Aragon, Cecilia R.; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Hamann, Bernd; Messmer, Peter; Hagen, Hans

    2008-07-03

    Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a potential new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates a large dataset that requires time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the data analysis and classification of simulation data. First, we propose a new method to identify locations with a high density of particles in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high-density electron regions using a lifetime diagram by organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high-quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.

  19. Automated Detection of Firearms and Knives in a CCTV Image.

    PubMed

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims. PMID:26729128

  20. Automated Detection and Recognition of Wildlife Using Thermal Cameras

    PubMed Central

    Christiansen, Peter; Steen, Kim Arild; Jørgensen, Rasmus Nyholm; Karstoft, Henrik

    2014-01-01

    In agricultural mowing operations, thousands of animals are injured or killed each year, due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within the agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3–10 m and an accuracy of 75.2% for an altitude range of 10–20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3–10 m and 77.7% in an altitude range of 10–20 m. PMID:25196105

  1. Automated detection and recognition of wildlife using thermal cameras.

    PubMed

    Christiansen, Peter; Steen, Kim Arild; Jørgensen, Rasmus Nyholm; Karstoft, Henrik

    2014-01-01

    In agricultural mowing operations, thousands of animals are injured or killed each year, due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within the agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and an accuracy of 75.2% for an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3-10 m and 77.7% in an altitude range of 10-20 m. PMID:25196105
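    A minimal sketch of the classification chain described above (morphological thermal signature, DCT parameterization, kNN). The ring-based form of the signature, the number of DCT coefficients, and the neighbour count are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.fft import dct
from skimage.morphology import binary_erosion, disk
from sklearn.neighbors import KNeighborsClassifier

def thermal_signature(thermal, mask, n_rings=10):
    """Mean temperature of successive morphological 'rings' from edge to core."""
    sig, current = [], mask.astype(bool)
    for _ in range(n_rings):
        eroded = binary_erosion(current, disk(1))
        ring = current & ~eroded
        sig.append(thermal[ring].mean() if ring.any() else (sig[-1] if sig else 0.0))
        current = eroded
    return np.asarray(sig)

def dct_features(signature, n_coeffs=5):
    """Parameterize the signature by its leading DCT coefficients."""
    return dct(signature, norm="ortho")[:n_coeffs]

# Training/classification (X_train: stacked feature vectors, y_train: labels):
# knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# prediction = knn.predict([dct_features(thermal_signature(frame, detected_mask))])
```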

  2. A practical automated polyp detection scheme for CT colonography

    NASA Astrophysics Data System (ADS)

    Li, Hong; Santago, Pete

    2004-05-01

    A fully automated computerized polyp detection (CPD) system is presented that takes DICOM images from CT scanners and provides a list of detected polyps. The system comprises three stages, segmentation, polyp candidate generation (PCG), and false positive reduction (FPR). Employing computed tomographic colonography (CTC), both supine and prone scans are used to improve detection sensitivity. We developed a novel and efficient segmentation scheme. Major shape features, e.g., the mean curvature and Gaussian curvature, together with a connectivity test efficiently produce polyp candidates. We select six shape features and introduce a multi-plane linear discriminant function (MLDF) classifier in our system for FPR. The classifier parameters are empirically assigned with respect to the geometric meanings of a specific feature. We have tested the system on 68 real subjects, 20 positive and 48 negative for 6 mm and larger polyps from colonoscopy results. Using a patient-based criterion, 95% accuracy and 31% specificity were achieved when 6 mm was used as the cutoff size, implying that 15 out of 48 healthy subjects could avoid OC. One 11 mm polyp was missed by CPD but was also not reported by the radiologist. With a complete polyp database, we anticipate that a maximum a posteriori probability (MAP) classifier tuned by supervised training will improve the detection performance. The execution time for both scans is about 10-15 minutes using a 1 GHz PC running Linux. The system may be used standalone, but is envisioned more as a part of a computer-aided CTC screening that can address the problems with both a fully automatic approach and a purely physician-based approach.

  3. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
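    A minimal sketch of the pseudo-image matching stage, assuming the pseudo-images are already rendered as 8-bit grayscale arrays; ORB features stand in for the keypoint method used in the paper, and the ratio and RANSAC thresholds are illustrative.

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b, ratio=0.75):
    """Detect and match keypoints between two sonar 'pseudo-images'."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    good = []
    for pair in cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                 # Lowe-style ratio test
    if len(good) < 4:
        return None
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # RANSAC rejects geometrically inconsistent matches; the surviving inliers
    # would seed an ICP alignment of the underlying point clouds.
    return cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
```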

  4. Comparison of automated haematology analysers for detection of apoptotic lymphocytes.

    PubMed

    Taga, K; Sawaya, M; Yoshida, M; Kaneko, M; Okada, M; Taniho, M

    2002-06-01

    Automated haematology analysers can rapidly provide accurate blood cell counts and white blood cell differentials. In this study, we evaluated four different haematology analysers for the detection of apoptotic lymphocytes in peripheral blood: MAXM A/L Retic, H*2, Cell-Dyn 3500 and NE-8000. With the MAXM A/L Retic haematology analyser, the apoptotic lymphocyte cluster appeared below the original lymphocyte cluster on the volume/DF1, and to the right under the original lymphocyte cluster on the volume/DF2 scattergrams. With the H*2 haematology analyser, the apoptotic polymorphonuclear lymphocytes produced a higher lobularity index on the BASO channel. With the Cell-Dyn 3500 haematology analyser, the apoptotic lymphocyte cluster appeared to the right side of the original lymphocyte cluster on the 0D/10D scattergram and to the left side of the polymorphonuclear cluster on the 90D/10D scattergram. With the NE-8000 haematology analyser, the apoptotic lymphocyte cluster was not distinguishable. Thus, apoptotic lymphocytes are readily detected on scattergrams generated by selected haematology analysers. PMID:12067276

  5. Evaluation of automated target detection using image fusion

    NASA Astrophysics Data System (ADS)

    Irvine, John M.; Abramson, Susan; Mossing, John

    2003-09-01

    Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Although benefits may be realized through ATR processing of a single data source, fusion of information across multiple images and multiple sensors promises significant performance gains. A major challenge, as ATR fusion technologies mature, is the establishment of sound methods for evaluating ATR performance in the context of data fusion. The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of their ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. This paper presents results from this program, focusing on the target detection and cueing aspect of the problem. The first step in assessing target detection performance is to relate the ground truth to the ATR decisions. Once the ATR decisions have been mapped to ground truth, the second step in the evaluation is to characterize ATR performance. A common approach is to vary the confidence threshold of the ATR and compute the Probability of Detection (PD) and the False Alarm Rate (FAR) associated with each threshold. Varying the threshold, therefore, produces an empirical performance curve relating detection performance to false alarms. Various statistical methods have been developed, largely in the medical imaging literature, to model this curve so that statistical inferences are possible. One approach, based on signal detection theory, generalizes the Receiver Operator Characteristic (ROC) curve. Under this approach, the Free Response Operating Characteristic (FROC) curve models performance for search problems. The FROC model is appropriate when multiple detections are possible and the number of false alarms is unconstrained. The parameterization of the FROC model provides a natural method for characterizing both
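    The threshold-sweeping evaluation can be sketched as follows; it assumes detections have already been mapped to ground truth, and normalizing false alarms per image is one plausible convention rather than the program's documented choice.

```python
import numpy as np

def pd_far_curve(scores, is_true_target, n_images):
    """Empirical PD/FAR curve from ATR confidences already mapped to truth."""
    scores = np.asarray(scores, float)
    is_true_target = np.asarray(is_true_target, bool)
    n_targets = max(int(is_true_target.sum()), 1)
    thresholds = np.unique(scores)[::-1]          # sweep from strict to lenient
    pd, far = [], []
    for t in thresholds:
        keep = scores >= t
        pd.append((keep & is_true_target).sum() / n_targets)
        far.append((keep & ~is_true_target).sum() / n_images)  # false alarms per image
    return thresholds, np.array(pd), np.array(far)
```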

  6. Detection of Operator Performance Breakdown as an Automation Triggering Mechanism

    NASA Technical Reports Server (NTRS)

    Yoo, Hyo-Sang; Lee, Paul U.; Landry, Steven J.

    2015-01-01

    Performance breakdown (PB) has been anecdotally described as a state where the human operator "loses control of context" and "cannot maintain required task performance." Preventing such a decline in performance is critical to assure the safety and reliability of human-integrated systems, and therefore PB could be useful as a point at which automation can be applied to support human performance. However, PB has never been scientifically defined or empirically demonstrated. Moreover, there is no validated objective way of detecting such a state or the transition to that state. The purpose of this work is: 1) to empirically demonstrate a PB state, and 2) to develop an objective way of detecting such a state. This paper defines PB and proposes an objective method for its detection. A human-in-the-loop study was conducted: 1) to demonstrate PB by increasing workload until the subject reported being in a state of PB, 2) to identify possible parameters of a detection method for objectively identifying the subjectively-reported PB point, and 3) to determine if the parameters are idiosyncratic to an individual/context or are more generally applicable. In the experiment, fifteen participants were asked to manage three concurrent tasks (one primary and two secondary) for 18 minutes. The difficulty of the primary task was manipulated over time to induce PB while the difficulty of the secondary tasks remained static. The participants' task performance data was collected. Three hypotheses were constructed: 1) increasing workload will induce subjectively-identified PB, 2) there exist criteria that identify the threshold parameters that best match the subjectively-identified PB point, and 3) the criteria for choosing the threshold parameters are consistent across individuals. The results show that increasing workload can induce subjectively-identified PB, although it might not be generalizable: only 12 out of 15 participants declared PB. The PB detection method based on

  7. Automated motion detection from space in sea surveillance

    NASA Astrophysics Data System (ADS)

    Charalambous, Elisavet; Takaku, Junichi; Michalis, Pantelis; Dowman, Ian; Charalampopoulou, Vasiliki

    2015-06-01

    The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) carried by the Advanced Land-Observing Satellite (ALOS) was designed to generate worldwide topographic data with its high-resolution and stereoscopic observation. PRISM performs along-track (AT) triplet stereo observations using independent forward (FWD), nadir (NDR), and backward (BWD) panchromatic optical line sensors of 2.5 m ground resolution in swaths 35 km wide. The FWD and BWD sensors are arranged at an inclination of ±23.8° from NDR. In this paper, PRISM images are used from a new perspective, in the security domain for sea surveillance, based on the sequence of the triplet, which is acquired over a time interval of 90 sec (45 sec between images). An automated motion detection algorithm is developed that combines the information captured at each instant and thereby identifies patterns and trajectories of moving objects at sea, including the extraction of geometric characteristics along with the speed and direction of movement. The developed methodology combines well established image segmentation and morphological operation techniques for the detection of objects. Each object in the scene is represented by dimensionless measures of its properties and maintained in a database to allow the generation of trajectories as these arise over time, while the location of moving objects is updated based on the result of neighbourhood calculations. Most importantly, the developed methodology can be deployed on any airborne (optionally piloted) sensor system with along-track stereo capability, enabling the provision of near real-time automatic detection of targets; a task that cannot be achieved with satellite imagery due to the very intermittent coverage.
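    A simplified sketch of the detection-and-tracking idea: segment bright targets in one PRISM band, associate them with detections from the next acquisition, and convert pixel displacement to speed using the 2.5 m ground resolution and 45 s separation quoted above. The thresholding, cleanup, and greedy matching choices are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import disk, opening

def detect_ships(band, min_area=20):
    """Segment bright objects against the sea surface; return (centroid, area)."""
    mask = opening(band > threshold_otsu(band), disk(1))
    return [(r.centroid, r.area) for r in regionprops(label(mask)) if r.area >= min_area]

def estimate_motion(objs_t0, objs_t1, gsd_m=2.5, dt_s=45.0, max_match_px=200):
    """Greedy nearest-neighbour association between two acquisitions of the
    triplet; returns (speed in m/s, heading in radians) per matched object."""
    tracks = []
    for centroid0, _area in objs_t0:
        dists = [np.hypot(c1[0] - centroid0[0], c1[1] - centroid0[1])
                 for c1, _ in objs_t1]
        if dists and min(dists) <= max_match_px:
            j = int(np.argmin(dists))
            dy = objs_t1[j][0][0] - centroid0[0]
            dx = objs_t1[j][0][1] - centroid0[1]
            tracks.append((dists[j] * gsd_m / dt_s, np.arctan2(dy, dx)))
    return tracks
```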

  8. Automated optic disk boundary detection by modified active contour model.

    PubMed

    Xu, Juan; Chutatape, Opas; Chew, Paul

    2007-03-01

    This paper presents a novel deformable-model-based algorithm for fully automated detection of the optic disk boundary in fundus images. The proposed method improves and extends the original snake (deforming-only technique) in two aspects: clustering and smoothing update. The contour points are first self-separated into an edge-point group or an uncertain-point group by clustering after each deformation, and these contour points are then updated by different criteria based on the different groups. The updating process combines both the local and global information of the contour to achieve a balance of contour stability and accuracy. The modifications make the proposed algorithm more accurate and robust to blood vessel occlusions, noise, ill-defined edges and fuzzy contour shapes. The comparative results show that the proposed method can estimate the disk boundaries of 100 test images closer to the ground truth, as measured by mean distance to closest point (MDCP) <3 pixels, with a better success rate when compared to those obtained by the gradient vector flow snake (GVF-snake) and modified active shape models (ASM). PMID:17355059

  9. Automated single particle detection and tracking for large microscopy datasets

    PubMed Central

    Wilson, Rhodri S.; Yang, Lei; Dun, Alison; Smyth, Annya M.; Duncan, Rory R.; Rickman, Colin

    2016-01-01

    Recent advances in optical microscopy have enabled the acquisition of very large datasets from living cells with unprecedented spatial and temporal resolutions. Our ability to process these datasets now plays an essential role in understanding many biological processes. In this paper, we present an automated particle detection algorithm capable of operating in low signal-to-noise fluorescence microscopy environments and handling large datasets. When combined with our particle linking framework, it can provide hitherto intractable quantitative measurements describing the dynamics of large cohorts of cellular components from organelles to single molecules. We begin by validating the performance of our method on synthetic image data, and then extend the validation to include experimental images with ground truth. Finally, we apply the algorithm to two single-particle-tracking photo-activated localization microscopy biological datasets, acquired from living primary cells with very high temporal rates. Our analysis of the dynamics of very large cohorts of tens of thousands of membrane-associated protein molecules shows that they behave as if caged in nanodomains. We show that the robustness and efficiency of our method provides a tool for the examination of single-molecule behaviour with unprecedented spatial detail and high acquisition rates. PMID:27293801

  10. Stage Evolution of Office Automation Technological Change and Organizational Learning.

    ERIC Educational Resources Information Center

    Sumner, Mary

    1985-01-01

    A study was conducted to identify stage characteristics in terms of technology, applications, the role and responsibilities of the office automation organization, and planning and control strategies; and to describe the respective roles of data processing professionals, office automation analysts, and users in office automation systems development…

  11. Automated Ground Penetrating Radar hyperbola detection in complex environment

    NASA Astrophysics Data System (ADS)

    Mertens, Laurence; Lambot, Sébastien

    2015-04-01

    Ground Penetrating Radar (GPR) systems are commonly used in many applications to detect, amongst others, buried targets (various types of pipes, landmines, tree roots ...), which, in a cross-section, theoretically present a particular hyperbolic-shaped signature resulting from the antenna radiation pattern. Considering the large quantity of information acquired during a field campaign, manual detection of these hyperbolas is barely possible, so a quick, automated detection method is needed. However, this task may prove laborious with real field data because the hyperbolas are often ill-shaped due to the heterogeneity of the medium and to instrumentation clutter. We propose a new detection algorithm for well- and ill-shaped GPR reflection hyperbolas especially developed for complex field data. This algorithm emulates human pattern-recognition expertise to identify the hyperbola apexes. The main principle relies on fitting the GPR image edge points, detected with a Canny filter, to analytical hyperbolas, treating the object as a point disturbance with physical constraints on the parameters. A long phase of observation of a large number of ill-shaped hyperbolas in various complex media led to the definition of smart criteria characterizing the hyperbolic shape and to the choice of value ranges within which an edge point is accepted as the apex of a specific hyperbola. These values were defined to match the ambiguity zone of human judgement and remain functional in most heterogeneous media. Furthermore, irregularity is taken into account by defining a buffer zone around the theoretical hyperbola within which edge points must fall to belong to that specific hyperbola. First, the method was tested in laboratory conditions over tree roots and over PVC pipes with both time- and frequency-domain radars
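    The apex-testing principle can be sketched roughly as below: detect edges with a Canny filter and count how many image columns contain edge pixels inside a buffer zone around the analytical hyperbola of a candidate apex. The hyperbola parameterization, normalization, and buffer width are simplified assumptions.

```python
import cv2
import numpy as np

def hyperbola_support(bscan, apex, velocity, buffer_px=2):
    """Count columns whose Canny edge pixels fall inside a buffer around the
    analytical two-way travel-time hyperbola of a point target at apex
    (x0 in traces, t0 in samples); units and scaling are simplified."""
    img = cv2.normalize(bscan, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, 50, 150)
    x0, t0 = apex
    support = 0
    for x in range(bscan.shape[1]):
        t = np.sqrt(t0 ** 2 + (2.0 * (x - x0) / velocity) ** 2)
        lo = int(max(t - buffer_px, 0))
        hi = int(min(t + buffer_px + 1, bscan.shape[0]))
        if lo < hi and edges[lo:hi, x].any():
            support += 1
    return support  # apexes supported across many columns are accepted
```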

  12. Rapid toxicity detection in water quality control utilizing automated multispecies biomonitoring for permanent space stations

    NASA Technical Reports Server (NTRS)

    Morgan, E. L.; Young, R. C.; Smith, M. D.; Eagleson, K. W.

    1986-01-01

    The objective of this study was to evaluate proposed design characteristics and applications of automated biomonitoring devices for real-time toxicity detection in water quality control on-board permanent space stations. Tests of downlinking automated biomonitoring data to Earth-receiving stations were simulated using satellite data transmissions from remote Earth-based stations.

  13. Fully automated and colorimetric foodborne pathogen detection on an integrated centrifugal microfluidic device.

    PubMed

    Oh, Seung Jun; Park, Byung Hyun; Choi, Goro; Seo, Ji Hyun; Jung, Jae Hwan; Choi, Jong Seob; Kim, Do Hyun; Seo, Tae Seok

    2016-05-21

    This work describes fully automated and colorimetric foodborne pathogen detection on an integrated centrifugal microfluidic device, which is called a lab-on-a-disc. All the processes for molecular diagnostics including DNA extraction and purification, DNA amplification and amplicon detection were integrated on a single disc. Silica microbeads incorporated in the disc enabled extraction and purification of bacterial genomic DNA from bacteria-contaminated milk samples. We targeted four kinds of foodborne pathogens (Escherichia coli O157:H7, Salmonella typhimurium, Vibrio parahaemolyticus and Listeria monocytogenes) and performed loop-mediated isothermal amplification (LAMP) to amplify the specific genes of the targets. Colorimetric detection mediated by a metal indicator confirmed the results of the LAMP reactions with the colour change of the LAMP mixtures from purple to sky blue. The whole process was conducted in an automated manner using the lab-on-a-disc and a miniaturized rotary instrument equipped with three heating blocks. We demonstrated that a milk sample contaminated with foodborne pathogens can be automatically analysed on the centrifugal disc even at the 10 bacterial cell level in 65 min. The simplicity and portability of the proposed microdevice would provide an advanced platform for point-of-care diagnostics of foodborne pathogens, where prompt confirmation of food quality is needed. PMID:27112702

  14. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.
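    A minimal sketch of an in/out-of-cloud state machine with hysteresis, using an in-situ cloud-physics measurement such as liquid water content as the driving signal; the thresholds and dwell-time constraint are illustrative assumptions, not the algorithm's published values.

```python
def detect_cloud_edges(values, times, enter_thresh=0.05, exit_thresh=0.02,
                       min_dwell_s=5.0):
    """Toggle an in/out-of-cloud state with hysteresis and report edge times.
    values: an in-situ cloud-physics measurement (e.g. liquid water content)."""
    edges, in_cloud, last_change = [], False, -float("inf")
    for t, x in zip(times, values):
        if not in_cloud and x >= enter_thresh and t - last_change >= min_dwell_s:
            in_cloud, last_change = True, t
            edges.append((t, "entered cloud"))
        elif in_cloud and x <= exit_thresh and t - last_change >= min_dwell_s:
            in_cloud, last_change = False, t
            edges.append((t, "exited cloud"))
    return edges
```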

  15. Quantitative Automated Image Analysis System with Automated Debris Filtering for the Detection of Breast Carcinoma Cells

    PubMed Central

    Martin, David T.; Sandoval, Sergio; Ta, Casey N.; Ruidiaz, Manuel E.; Cortes-Mateos, Maria Jose; Messmer, Davorka; Kummel, Andrew C.; Blair, Sarah L.; Wang-Rodriguez, Jessica

    2011-01-01

    Objective To develop an intraoperative method for margin status evaluation during breast conservation therapy (BCT) using an automated analysis of imprint cytology specimens. Study Design Imprint cytology samples were prospectively taken from 47 patients undergoing either BCT or breast reduction surgery. Touch preparations from BCT patients were taken on cut sections through the tumor to generate positive margin controls. For breast reduction patients, slide imprints were taken at cuts through the center of excised tissue. Analysis results from the presented technique were compared against standard pathologic diagnosis. Slides were stained with cytokeratin and Hoechst, imaged with an automated fluorescent microscope, and analyzed with a fast algorithm to automate discrimination between epithelial cells and noncellular debris. Results The accuracy of the automated analysis was 95% for identifying invasive cancers compared against final pathologic diagnosis. The overall sensitivity was 87% while specificity was 100% (no false positives). This is comparable to the best reported results from manual examination of intraoperative imprint cytology slides while reducing the need for direct input from a cytopathologist. Conclusion This work demonstrates a proof of concept for developing a highly accurate and automated system for the intraoperative evaluation of margin status to guide surgical decisions and lower positive margin rates. PMID:21525740

  16. Automated shock detection and analysis algorithm for space weather application

    NASA Astrophysics Data System (ADS)

    Vorotnikov, Vasiliy S.; Smith, Charles W.; Hu, Qiang; Szabo, Adam; Skoug, Ruth M.; Cohen, Christina M. S.

    2008-03-01

    Space weather applications have grown steadily as real-time data have become increasingly available. Numerous industrial applications have arisen, with safeguarding of the power distribution grids being a particular interest. NASA uses short-term and long-term space weather predictions in its launch facilities. Researchers studying ionospheric, auroral, and magnetospheric disturbances use real-time space weather services to determine launch times. Commercial airlines, communication companies, and the military use space weather measurements to manage their resources and activities. As the effects of solar transients upon the Earth's environment and society grow with the increasing complexity of technology, better tools are needed to monitor and evaluate the characteristics of the incoming disturbances. In particular, automated shock detection and analysis methods applicable to in situ measurements upstream of the Earth are needed. Such tools can provide advance warning of approaching disturbances that have significant space weather impacts. Knowledge of the shock strength and speed can also provide insight into the nature of the approaching solar transient prior to arrival at the magnetopause. We report on efforts to develop a tool that can find and analyze shocks in interplanetary plasma data without operator intervention. This method will run with sufficient speed to be a practical space weather tool, providing useful shock information within 1 min of the necessary data reaching the ground. The ability to run without human intervention frees space weather operators to perform other vital services. We describe ways of handling upstream data that minimize the frequency of false positive alerts while providing the most complete description of approaching disturbances that is reasonably possible.

  17. Context-driven automated target detection in 3D data

    NASA Astrophysics Data System (ADS)

    West, Karen F.; Webb, Brian N.; Lersch, James R.; Pothier, Steven; Triscari, Joseph M.; Iverson, A. E.

    2004-09-01

    This paper summarizes a system, and its component algorithms, for context-driven target vehicle detection in 3-D data that was developed under the Defense Advanced Research Projects Agency (DARPA) Exploitation of 3-D Data (E3D) Program. In order to determine the power of shape and geometry for the extraction of context objects and the detection of targets, our algorithm research and development concentrated on the geometric aspects of the problem and did not utilize intensity information. Processing begins with extraction of context information and initial target detection at reduced resolution, followed by a detailed, full-resolution analysis of candidate targets. Our reduced-resolution processing includes a probabilistic procedure for finding the ground that is effective even in rough terrain; a hierarchical, graph-based approach for the extraction of context objects and potential vehicle hide sites; and a target detection process that is driven by context-object and hide-site locations. Full-resolution processing includes statistical false alarm reduction and decoy mitigation. When results are available from previously collected data, we also perform object-level change detection, which affects the probabilities that objects are context objects or targets. Results are presented for both synthetic and collected LADAR data.

  18. Land-cover change detection

    USGS Publications Warehouse

    Chen, Xuexia; Giri, Chandra; Vogelmann, James

    2012-01-01

    Land cover is the biophysical material on the surface of the earth. Land-cover types include grass, shrubs, trees, barren, water, and man-made features. Land cover changes continuously. The rate of change can be either dramatic and abrupt, such as the changes caused by logging, hurricanes and fire, or subtle and gradual, such as regeneration of forests and damage caused by insects (Verbesselt et al., 2001). Previous studies have shown that land cover has changed dramatically during the past several centuries and that these changes have severely affected our ecosystems (Foody, 2010; Lambin et al., 2001). Lambin and Strahlers (1994b) summarized five types of cause for land-cover changes: (1) long-term natural changes in climate conditions, (2) geomorphological and ecological processes, (3) human-induced alterations of vegetation cover and landscapes, (4) interannual climate variability, and (5) the human-induced greenhouse effect. Tools and techniques are needed to detect, describe, and predict these changes to facilitate sustainable management of natural resources.

  19. Change detection in underwater imagery.

    PubMed

    Seemakurthy, Karthik; Rajagopalan, A N

    2016-03-01

    In this work, we deal with the problem of change detection in an underwater scenario given an unblurred-blurred image pair of a planar scene taken at different times. The blur is primarily due to the dynamic nature of the water surface and its nature is space-invariant in the presence of cyclic water flows. Exploiting the sparsity of the induced blur as well as the occlusions, we propose a distort-difference pipeline that employs an alternating minimization framework to perform change detection in the presence of geometric distortions (skew) as well as photometric degradations (blur and global illumination variations). The method can effectively yield both sharp and blurred occluder maps. Using synthetic as well as real data, we demonstrate how the proposed technique advances the state of the art. PMID:26974899

  20. Automated detection of cardiac phase from intracoronary ultrasound image sequences.

    PubMed

    Sun, Zheng; Dong, Yi; Li, Mengchan

    2015-01-01

    Intracoronary ultrasound (ICUS) is a widely used interventional imaging modality in clinical diagnosis and treatment of cardiac vessel diseases. Due to cyclic cardiac motion and pulsatile blood flow within the lumen, the coronary arterial dimensions change and there is relative motion between the imaging catheter and the lumen during continuous pullback of the catheter. This motion subsequently causes cyclic changes in the image intensity of the acquired image sequence. Information on cardiac phases is thus implied in a non-gated ICUS image sequence. A 1-D phase signal reflecting cardiac cycles was extracted according to cyclical changes in local gray-levels in ICUS images. The local extrema of the signal were then detected to retrieve cardiac phases and to retrospectively gate the image sequence. Results from clinically acquired in vivo image data showed that an average inter-frame dissimilarity of lower than 0.1 was achievable with our technique. In terms of computational efficiency and complexity, the proposed method was shown to be competitive when compared with current methods. The average frame processing time was lower than 30 ms. We effectively reduced the effect of image noise, useless textures, and non-vessel regions on the phase signal detection by discarding signal components caused by non-cardiac factors. PMID:26406038
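    A rough sketch of the gating idea: build a 1-D signal from the mean gray level of a region of interest in each frame, smooth it, and take its local maxima as frames at approximately the same cardiac phase. The ROI, smoothing, and minimum-period settings are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def gate_icus_sequence(frames, roi=None, min_period_frames=10):
    """frames: (N, H, W) array of an ICUS pullback. Returns indices of frames
    taken at (approximately) the same cardiac phase."""
    if roi is not None:                       # roi = (row0, row1, col0, col1)
        frames = frames[:, roi[0]:roi[1], roi[2]:roi[3]]
    signal = frames.reshape(len(frames), -1).mean(axis=1)   # 1-D phase signal
    signal = savgol_filter(signal, window_length=9, polyorder=2)
    peaks, _ = find_peaks(signal, distance=min_period_frames)
    return peaks
```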

  1. Sludge settleability detection using automated SV30 measurement and its application to a field WWTP.

    PubMed

    Kim, Y J; Choi, S J; Bae, H; Kim, C W

    2011-01-01

    The need for automation & measurement technologies to detect the process state has been a driving force in the development of various measurements at wastewater treatment plants. While the number of field applications of automation & measurement technologies is increasing, there have only been a few cases where they have been applied to the area of sludge settling. This is because it is not easy to develop an automated operation support system for the detection of sludge settleability due to its site-specific characteristics. To automate the human operator's daily testing and diagnosis of sludge settling, an on-line SV30 measurement was developed, together with an automated settleability detection algorithm that imitates operator heuristics to detect settleability faults. The automated SV30 measurement is based on automatic pumping with a predefined schedule, image capture of the settling test with a digital camera, and analysis of the images to detect the settled sludge height. A sludge settleability detection method was developed and its applicability was investigated by field application. PMID:22335120
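    The image-analysis step can be sketched as a simple interface search on the settling-column image; the strip location and darkness threshold are illustrative assumptions.

```python
import numpy as np

def settled_sludge_height(gray_frame, column_range=(100, 200), dark_thresh=80):
    """Average a vertical strip of the settling column and return the first row
    (from the top) where the profile drops below a darkness threshold,
    i.e. an estimate of the sludge/supernatant interface."""
    strip = gray_frame[:, column_range[0]:column_range[1]].mean(axis=1)
    below = np.nonzero(strip < dark_thresh)[0]
    return int(below[0]) if below.size else None

# An SV30-style reading would convert this interface row to a settled volume
# via a calibration of the column, 30 minutes after the automated fill.
```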

  2. Automated detection of point mutations using fluorescent sequence trace subtraction.

    PubMed Central

    Bonfield, J K; Rada, C; Staden, R

    1998-01-01

    The final step in the detection of mutations is to determine the sequence of the suspected mutant and to compare it with that of the wild-type, and for this fluorescence-based sequencing instruments are widely used. We describe some simple algorithms for comparing sequence traces which, as part of our sequence assembly and analysis package, are proving useful for the discovery of mutations and which may also help to identify misplaced readings in sequence assembly projects. The mutations can be detected automatically by a new program called TRACE_DIFF and new types of trace display in our program GAP4 greatly simplify visual checking of the assigned changes. To assess the accuracy of the automatic mutation detection algorithm we analysed 214 sequence readings from hypermutating DNA comprising a total of 108 497 bases. After the readings were assembled there were 1232 base differences, including 392 Ns and 166 alignment characters. Visual inspection of the traces established that of the 1232 differences, 353 were real mutations while the rest were due to base calling errors. The TRACE_DIFF algorithm automatically identified all but 36, with 28 false positives. Further information about the software can be obtained from http://www.mrc-lmb.cam.ac.uk/pubseq/ PMID:9649626

  3. Assessment of Automated Disease Detection in Diabetic Retinopathy Screening Using Two-Field Photography

    PubMed Central

    Goatman, Keith; Charnley, Amanda; Webster, Laura; Nussey, Stephen

    2011-01-01

    Aim To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography. Methods Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode, macular-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and the original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone. Results Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1% depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All the episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%). Conclusion Automated grading can safely reduce the workload of manual grading using two-field mydriatic photography in a routine screening service. PMID:22174741

  4. Detecting change as it occurs

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

    Traditionally climate changes have been detected from long series of observations and long after they have happened. Our 'inverse sequential' procedure, for detecting change as soon as it occurs, describes the existing or most recent data by their frequency distribution. Its parameter(s) are estimated both from the existing set of observations and from the same set augmented by 1,2,....j new observations. Individual-value probability products ('likelihoods') are used to form ratios which yield two probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set, and vice versa. A genuine parameter change is signaled when these probabilities (or a more stable compound probability) show a progressive decrease. New parameter values can then be estimated from the new observations alone using standard statistical techniques. The inverse sequential procedure will be illustrated for global annual mean temperatures (assumed normally distributed), and for annual numbers of North Atlantic hurricanes (assumed to represent Poisson distributions). The procedure was developed, but not yet tested, for linear or exponential trends, and for chi-squared means or degrees of freedom, a special measure of autocorrelation.
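    A simplified, normally distributed reading of the procedure is sketched below: for each number of new observations j, the augmented data are scored under parameters fitted to the existing data alone and under refitted parameters, and a steadily growing log-likelihood ratio is taken as a change signal. The real procedure forms explicit error probabilities in both directions; collapsing them into a single ratio here is an assumption made for brevity.

```python
import numpy as np
from scipy import stats

def inverse_sequential_normal(existing, new_obs):
    """Log-likelihood ratio of augmented data: refitted vs. original parameters."""
    mu0, sd0 = np.mean(existing), np.std(existing, ddof=1)
    ratios = []
    for j in range(1, len(new_obs) + 1):
        augmented = np.concatenate([existing, new_obs[:j]])
        mu1, sd1 = np.mean(augmented), np.std(augmented, ddof=1)
        ll_old = stats.norm.logpdf(augmented, mu0, sd0).sum()
        ll_new = stats.norm.logpdf(augmented, mu1, sd1).sum()
        ratios.append(ll_new - ll_old)   # progressive growth suggests a change
    return np.array(ratios)
```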

  5. An automated analysis of wide area motion imagery for moving subject detection

    NASA Astrophysics Data System (ADS)

    Tahmoush, Dave

    2015-05-01

    Automated analysis of wide area motion imagery (WAMI) can significantly reduce the effort required for converting data into reliable decisions. We register consecutive WAMI frames and use false-color frame comparisons to enhance the visual detection of possible subjects in the imagery. The large number of WAMI detections produces the need for a prioritization of detections for further inspection. We create a priority queue of detections for automated revisit with smaller field-of-view assets based on the locations of the movers as well as the probability of the detection. This automated queue works within an operator's preset prioritizations but also allows the flexibility to dynamically respond to new events as well as to incorporate additional information into the surveillance tasking.
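    The revisit queue can be sketched with a standard binary heap; the way preset priority and detection probability are combined into one score is an assumption, not the paper's weighting.

```python
import heapq

class RevisitQueue:
    """Priority queue of WAMI detections for revisit by narrow-field sensors."""

    def __init__(self, preset_weight=0.6):
        self._heap = []
        self._w = preset_weight            # balance operator preset vs. probability

    def push(self, detection_id, location, preset_priority, probability):
        score = self._w * preset_priority + (1.0 - self._w) * probability
        heapq.heappush(self._heap, (-score, detection_id, location))

    def pop_next(self):
        neg_score, detection_id, location = heapq.heappop(self._heap)
        return detection_id, location, -neg_score
```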

  6. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state-of-the-art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It demonstrated high accuracy while maximizing observer independence and reducing analysis time, and is thus useful for clinical routine.

  7. A Simple Method for Automated Equilibration Detection in Molecular Simulations.

    PubMed

    Chodera, John D

    2016-04-12

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure and demonstrate its utility on typical molecular simulation data. PMID:26771390
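    A minimal numpy sketch of the idea, under the assumption of a crude autocorrelation-based statistical inefficiency estimate; it is not the author's reference implementation mentioned in the abstract.

```python
import numpy as np

def statistical_inefficiency(x):
    """Rough g = 1 + 2 * sum of positive autocorrelations (windowed, truncated)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.var(x) * n)
    g = 1.0
    for t in range(1, n):
        if acf[t] <= 0.0:
            break
        g += 2.0 * acf[t] * (1.0 - t / n)
    return max(g, 1.0)

def detect_equilibration(series):
    """Choose the discard point t0 that maximizes (T - t0) / g(t0)."""
    best_t0, best_neff = 0, 0.0
    step = max(1, len(series) // 100)
    for t0 in range(0, len(series) - 10, step):
        neff = (len(series) - t0) / statistical_inefficiency(series[t0:])
        if neff > best_neff:
            best_t0, best_neff = t0, neff
    return best_t0, best_neff
```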

  8. On Radar Resolution in Coherent Change Detection.

    SciTech Connect

    Bickel, Douglas L.

    2015-11-01

    It is commonly observed that resolution plays a role in coherent change detection. Although this is the case, the relationship between resolution and coherent change detection performance has not yet been defined. In this document, we present an analytical method for evaluating this relationship using detection theory. Specifically, we examine the effect of resolution on receiver operating characteristic curves for coherent change detection.

  9. Load-differential features for automated detection of fatigue cracks using guided waves

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Lee, Sang Jun; Michaels, Jennifer E.; Michaels, Thomas E.

    2012-05-01

    Guided wave structural health monitoring (SHM) is being considered to assess the integrity of plate-like structures for many applications. Prior research has investigated how guided wave propagation is affected by applied loads, which induce anisotropic changes in both dimensions and phase velocity. In addition, it is well-known that applied tensile loads open fatigue cracks and thus enhance their detectability using ultrasonic methods. Here we describe load-differential methods in which signals recorded from different loads at the same damage state are compared without using previously obtained damage-free data. Changes in delay-and-sum images are considered as a function of differential loads and damage state. Load-differential features are extracted from these images that capture the effects of loading as fatigue cracks are opened. Damage detection thresholds are adaptively set based upon the load-differential behavior of the various features, which enables implementation of an automated fatigue crack detection process. The efficacy of the proposed approach is examined using data from a fatigue test performed on an aluminum plate specimen that is instrumented with a sparse array of surface-mounted ultrasonic guided wave transducers.

  10. An automated approach to detecting signals in electroantennogram data

    USGS Publications Warehouse

    Slone, D.H.; Sullivan, B.T.

    2007-01-01

    Coupled gas chromatography/electroantennographic detection (GC-EAD) is a widely used method for identifying insect olfactory stimulants present in mixtures of volatiles, and it can greatly accelerate the identification of insect semiochemicals. In GC-EAD, voltage changes across an insect's antenna are measured while the antenna is exposed to compounds eluting from a gas chromatograph. The antenna thus serves as a selective GC detector whose output can be compared to that of a "general" GC detector, commonly a flame ionization detector. Appropriate interpretation of GC-EAD results requires that olfaction-related voltage changes in the antenna be distinguishable from background noise that arises inevitably from antennal preparations and the GC-EAD-associated hardware. In this paper, we describe and compare mathematical algorithms for discriminating olfaction-generated signals in an EAD trace from background noise. The algorithms amplify signals by recognizing their characteristic shape and wavelength while suppressing unstructured noise. We have found these algorithms to be both powerful and highly discriminatory even when applied to noisy traces where the signals would be difficult to discriminate by eye. This new methodology removes operator bias as a factor in signal identification, can improve realized sensitivity of the EAD system, and reduces the number of runs required to confirm the identity of an olfactory stimulant. © 2007 Springer Science+Business Media, LLC.
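    The abstract does not spell out the algorithms, so the sketch below is only a generic template-matching illustration of the stated idea (score a trace by its similarity to a smooth deflection of roughly the expected wavelength); the template shape and width are assumptions.

```python
import numpy as np

def ead_response(trace, wavelength=50):
    """Correlation of an EAD trace with a zero-mean, single-lobed template."""
    t = np.arange(-wavelength, wavelength + 1)
    template = np.exp(-0.5 * (t / (wavelength / 3.0)) ** 2)  # idealized deflection
    template -= template.mean()                              # ignore baseline offset
    template /= np.linalg.norm(template)
    score = np.convolve(trace - np.mean(trace), template[::-1], mode="same")
    return score   # peaks well above the score's noise floor flag candidate responses
```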

  11. Carbapenem Resistance in Klebsiella pneumoniae Not Detected by Automated Susceptibility Testing

    PubMed Central

    Kalsi, Rajinder K.; Williams, Portia P.; Carey, Roberta B.; Stocker, Sheila; Lonsway, David; Rasheed, J. Kamile; Biddle, James W.; McGowan, John E.; Hanna, Bruce

    2006-01-01

    Detecting β-lactamase–mediated carbapenem resistance among Klebsiella pneumoniae isolates and other Enterobacteriaceae is an emerging problem. In this study, 15 blaKPC-positive Klebsiella pneumoniae that showed discrepant results for imipenem and meropenem from 4 New York City hospitals were characterized by isoelectric focusing; broth microdilution (BMD); disk diffusion (DD); and MicroScan, Phoenix, Sensititre, VITEK, and VITEK 2 automated systems. All 15 isolates were either intermediate or resistant to imipenem and meropenem by BMD; 1 was susceptible to imipenem by DD. MicroScan and Phoenix reported 1 (6.7%) and 2 (13.3%) isolates, respectively, as imipenem susceptible. VITEK and VITEK 2 reported 10 (67%) and 5 (33%) isolates, respectively, as imipenem susceptible. By Sensititre, 13 (87%) isolates were susceptible to imipenem, and 12 (80%) were susceptible to meropenem. The VITEK 2 Advanced Expert System changed 2 imipenem MIC results from >16 μg/mL to <2 μg/mL but kept the interpretation as resistant. The recognition of carbapenem-resistant K. pneumoniae continues to challenge automated susceptibility systems. PMID:16965699

  12. A method for the automated detection of phishing websites through both site characteristics and image analysis

    NASA Astrophysics Data System (ADS)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
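    The screenshot-hashing idea can be illustrated with a simple average hash; the hash type and bit length here are stand-ins, not necessarily what the authors used.

```python
import numpy as np
from PIL import Image

def average_hash(screenshot_path, size=8):
    """64-bit average hash: downsample, grayscale, threshold at the mean."""
    img = Image.open(screenshot_path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=float)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    return int(np.count_nonzero(hash_a != hash_b))

# Pages whose screenshot hashes sit within a small Hamming distance of a known
# legitimate site, but whose URLs or hosting differ, are flagged for review.
```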

  13. An Automated Intelligent Fault Detection System for Inspection of Sewer Pipes

    NASA Astrophysics Data System (ADS)

    Ahrary, Alireza; Kawamura, Yoshinori; Ishikawa, Masumi

    Automation is an important issue in industry, particularly in the inspection of underground facilities. This paper describes an intelligent system for automatically detecting faulty areas in a sewer pipe system based on images. The proposed system can detect various types of faults and can be implemented as a real-time system. The present paper describes the system architecture and focuses on two modules: image preprocessing and detection of faulty areas. The proposed approach demonstrates high detection performance while reducing time and cost.

  14. Strategies for Working with Library Staff Members in Embracing Change Caused by Library Automation.

    ERIC Educational Resources Information Center

    Shepherd, Murray

    This paper begins with a discussion of information management as it pertains to the four operations of automated library systems (i.e., acquisitions, cataloging, circulation, and reference). Library staff reactions to library automation change are summarized, including uncertainty, cynicism, and resignation or hope. Common pitfalls that interfere…

  15. Automated detection of a prostate Ni-Ti stent in electronic portal images

    SciTech Connect

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-12-15

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins around the clinical target volume due to uncertainties arising from the daily shift of the prostate position. A recently proposed method of visualizing the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm exploits the fact that the Ni-Ti stent has a cylindrical shape with a fixed diameter. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated by a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has the potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.
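    One crude reading of the "enhanced pixels separated by the stent diameter" operation is sketched below: enhance line-like structures, then keep pixels that have a strong enhanced partner one diameter away in some direction. The ridge filter, polarity, and angular sampling are assumptions, not the paper's operators.

```python
import numpy as np
from skimage.filters import sato

def stent_response(portal_image, diameter_px, n_angles=8):
    """Response map favouring paired bright ridges separated by ~diameter_px."""
    enh = sato(portal_image.astype(float), black_ridges=False)  # polarity assumed
    response = np.zeros_like(enh)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        dy = int(round(diameter_px * np.sin(theta)))
        dx = int(round(diameter_px * np.cos(theta)))
        shifted = np.roll(np.roll(enh, dy, axis=0), dx, axis=1)
        response = np.maximum(response, np.minimum(enh, shifted))
    return response
```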

  16. On the Automated and Objective Detection of Emission Lines in Faint-Object Spectroscopy

    NASA Astrophysics Data System (ADS)

    Hong, Sungryong; Dey, Arjun; Prescott, Moire K. M.

    2014-11-01

    Modern spectroscopic surveys produce large spectroscopic databases, generally with sizes well beyond the scope of manual investigation. The need arises, therefore, for an automated line detection method with objective indicators of detection significance. In this paper, we present an automated and objective method for emission line detection in spectroscopic surveys and apply this technique to observed spectra from a Lyα emitter survey at z ~ 2.7, obtained with the Hectospec spectrograph on the MMT Observatory (MMTO). The basic idea is to generate on-source (signal plus noise) and off-source (noise only) mock observations using Monte Carlo simulations, and calculate completeness and reliability values, (C,R), for each simulated signal. By comparing the detections from real data with the Monte Carlo results, we assign completeness and reliability values to each real detection. From 1574 spectra, we obtain 881 raw detections and, by removing low-reliability detections, we finalize 652 detections from an automated pipeline. Most of the high-completeness, high-reliability detections, (C,R) ≈ (1.0,1.0), are robust when visually inspected; the low C and R detections are also marginal on visual inspection. This method of detecting faint sources is dependent on the accuracy of the sky subtraction.

  17. Cell-Detection Technique for Automated Patch Clamping

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2008-01-01

    A unique and customizable machine-vision and image-data-processing technique has been developed for use in automated identification of cells that are optimal for patch clamping. [Patch clamping (in which patch electrodes are pressed against cell membranes) is an electrophysiological technique widely applied for the study of ion channels, and of membrane proteins that regulate the flow of ions across the membranes. Patch clamping is used in many biological research fields such as neurobiology, pharmacology, and molecular biology.] While there exist several hardware techniques for automated patch clamping of cells, very few of those techniques incorporate machine vision for locating cells that are ideal subjects for patch clamping. In contrast, the present technique is embodied in a machine-vision algorithm that, in practical application, enables the user to identify good and bad cells for patch clamping in an image captured by a charge-coupled-device (CCD) camera attached to a microscope, within a processing time of one second. Hence, the present technique can save time, thereby increasing efficiency and reducing cost. The present technique involves the utilization of cell-feature metrics to accurately make decisions on the degree to which individual cells are "good" or "bad" candidates for patch clamping. These metrics include position coordinates (x,y) in the image plane, major-axis length, minor-axis length, area, elongation, roundness, smoothness, angle of orientation, and degree of inclusion in the field of view. The present technique does not require any special hardware beyond commercially available, off-the-shelf patch-clamping hardware: A standard patch-clamping microscope system with an attached CCD camera, a personal computer with an image-data-processing board, and some experience in utilizing image-data-processing software are all that are needed. A cell image is first captured by the microscope CCD camera and image-data-processing board, then the image
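    Most of the listed metrics can be computed from a segmented cell mask with standard region properties, as in the sketch below; the segmentation itself and any scoring rule for "good" versus "bad" cells are outside this illustration.

```python
import numpy as np
from skimage.measure import label, regionprops

def cell_feature_metrics(binary_mask, image_shape):
    """Per-cell metrics of the kind listed above, from a binary segmentation."""
    feats = []
    for region in regionprops(label(binary_mask)):
        y, x = region.centroid
        minr, minc, maxr, maxc = region.bbox
        feats.append({
            "x": x, "y": y,
            "major_axis": region.major_axis_length,
            "minor_axis": region.minor_axis_length,
            "area": region.area,
            "elongation": region.major_axis_length / max(region.minor_axis_length, 1e-6),
            "roundness": 4.0 * np.pi * region.area / max(region.perimeter ** 2, 1e-6),
            "orientation": region.orientation,
            "fully_in_view": minr > 0 and minc > 0
                             and maxr < image_shape[0] and maxc < image_shape[1],
        })
    return feats
```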

  18. Managing laboratory automation in a changing pharmaceutical industry.

    PubMed

    Rutherford, M L

    1995-01-01

    The health care reform movement in the USA and increased requirements by regulatory agencies continue to have a major impact on the pharmaceutical industry and the laboratory. Laboratory management is expected to improve efficiency by providing more analytical results at a lower cost, increasing customer service, and reducing cycle time, while ensuring accurate results and more effective use of their staff. To achieve these expectations, many laboratories are using robotics and automated work stations. Establishing automated systems presents many challenges for laboratory management, including project and hardware selection, budget justification, implementation, validation, training, and support. To address these management challenges, the rationale for project selection and implementation, the obstacles encountered, project outcomes, and learning points for several automated systems recently implemented in the Quality Control Laboratories at Eli Lilly are presented. PMID:18925014

  19. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.

  20. An Optimal Cell Detection Technique for Automated Patch Clamping

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    While there are several hardware techniques for the automated patch clamping of cells that describe the equipment apparatus used for patch clamping, very few explain the science behind the actual technique of locating the ideal cell for a patch clamping procedure. We present a machine vision approach to patch clamping cell selection by developing an intelligent algorithm technique that gives the user the ability to determine a good cell to patch clamp in an image within one second. This technique will aid the user in determining the best candidates for patch clamping and will ultimately save time, increase efficiency and reduce cost. The ultimate goal is to combine intelligent processing with instrumentation and controls in order to produce a complete turnkey automated patch clamping system capable of accurately and reliably patch clamping cells with a minimum amount of human intervention. We present a unique technique that identifies good patch clamping cell candidates based on feature metrics of a cell's (x, y) position, major axis length, minor axis length, area, elongation, roundness, smoothness, angle of orientation, thinness and whether or not the cell is only partially in the field of view. A patent is pending for this research.

  1. Accurate, Automated Detection of Atrial Fibrillation in Ambulatory Recordings.

    PubMed

    Linker, David T

    2016-06-01

    A highly accurate, automated algorithm would facilitate cost-effective screening for asymptomatic atrial fibrillation. This study analyzed a new algorithm and compared it to existing techniques. The incremental benefit of each step in refinement of the algorithm was measured, and the algorithm was compared to other methods using the Physionet atrial fibrillation and normal sinus rhythm databases. When analyzing segments of 21 RR intervals or less, the algorithm had a significantly higher area under the receiver operating characteristic curve (AUC) than the other algorithms tested. At analysis segment sizes of up to 101 RR intervals, the algorithm continued to have a higher AUC than any of the other methods tested, although the difference from the second best other algorithm was no longer significant, with an AUC of 0.9992 with a 95% confidence interval (CI) of 0.9986-0.9998, vs. 0.9986 (CI 0.9978-0.9994). With identical per-subject sensitivity, per-subject specificity of the current algorithm was superior to the other tested algorithms even at 101 RR intervals, with no false positives (CI 0.0-0.8%) vs. 5.3% false positives for the second best algorithm (CI 3.4-7.9%). The described algorithm shows great promise for automated screening for atrial fibrillation by reducing false positives requiring manual review, while maintaining high sensitivity. PMID:26850411

  2. An Automated Classification Technique for Detecting Defects in Battery Cells

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2006-01-01

    Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.

  3. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control Jets (RCS), ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using hi-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of
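
    As a rough illustration of the two detection strategies compared above, the sketch below runs an acceleration-magnitude spike test and an accumulated-velocity-change test on simulated IMU data; the sample rate, thresholds, window length, and impact model are assumptions, not values from the Orion analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.005                                         # 200 Hz IMU sampling (assumed)
t = np.arange(0.0, 10.0, dt)
accel = rng.normal(0.0, 0.3, t.size)               # descent-under-chutes noise, in g (assumed)
accel[t > 9.0] += 8.0 * np.exp(-(t[t > 9.0] - 9.0) / 0.05)   # water-impact spike at t = 9 s

def accel_spike_detect(a, threshold=4.0):
    """Trigger on the first sample whose acceleration magnitude exceeds the threshold (g)."""
    idx = int(np.argmax(np.abs(a) > threshold))
    return idx if np.abs(a[idx]) > threshold else None

def delta_v_detect(a, window=0.1, threshold=0.2):
    """Trigger on accumulated velocity change (g*s) over a sliding time window."""
    n = int(window / dt)
    dv = np.abs(np.convolve(a, np.ones(n) * dt, mode="valid"))
    idx = int(np.argmax(dv > threshold))
    return idx if dv[idx] > threshold else None     # index of the window start (approximate)

for name, idx in [("acceleration-spike", accel_spike_detect(accel)),
                  ("accumulated delta-v", delta_v_detect(accel))]:
    print(name, "trigger at t ~", None if idx is None else round(t[idx], 3), "s")
```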

  4. Power Doppler imaging as a basis for automated endocardial border detection during left ventricular contrast enhancement.

    PubMed

    Mor-Avi, V; Bednarz, J; Weinert, L; Sugeng, L; Lang, R M

    2000-08-01

    Echocardiographic evaluation of left ventricular (LV) systolic function relies on endocardial visualization, which can be improved when necessary using contrast enhancement. However, there is no method to automatically detect the endocardial boundary from contrast-enhanced images. We hypothesized that this could be achieved using harmonic power Doppler imaging. Twenty-two patients were studied in two protocols: (1) 11 patients with poorly visualized endocardium (> 3 contiguous segments not visualized) and (2) 11 consecutive patients referred for dobutamine stress echocardiography who were studied at rest and at peak dobutamine infusion. Patients were imaged in the apical four-chamber view using harmonic power Doppler mode (HP SONOS 5500) during LV contrast enhancement (Optison or Definity DMP115). Digital images were analyzed using custom software designed to automatically extract the endocardial boundary from power Doppler color overlays. LV cavity area was automatically measured frame-by-frame throughout the cardiac cycle, and fractional area change was calculated and compared with values obtained by manually tracing the endocardial boundary in end-systolic and end-diastolic gray scale images. Successful border detection and tracking throughout the cardiac cycle was possible in 9 of 11 patients with poor endocardial definition and in 10 of 11 unselected patients undergoing dobutamine stress testing. Fractional area change obtained from power Doppler images correlated well with manually traced area changes (r = 0.82 and r = 0.97, in protocols 1 and 2, respectively). Harmonic power Doppler imaging with contrast may provide a simple method for semi-automated border detection and thus facilitate the objective evaluation of LV function both at rest and under conditions of stress testing. This methodology may prove to be particularly useful in patients with poorly visualized endocardium. PMID:11000587
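
    The fractional-area-change figure used above to compare automated and manual borders can be computed as in the brief sketch below; the frame-by-frame cavity areas shown are purely illustrative.

```python
import numpy as np

def fractional_area_change(cavity_areas_cm2):
    """FAC = (end-diastolic area - end-systolic area) / end-diastolic area."""
    areas = np.asarray(cavity_areas_cm2, dtype=float)
    eda, esa = areas.max(), areas.min()   # largest / smallest cavity area in the cycle
    return (eda - esa) / eda

# Frame-by-frame LV cavity areas over one cardiac cycle (illustrative values, cm^2).
areas = [34.1, 33.0, 29.8, 24.5, 20.2, 18.9, 21.7, 27.3, 32.5, 34.0]
print(f"fractional area change = {fractional_area_change(areas):.2f}")
```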

  5. Automated Network Anomaly Detection with Learning, Control and Mitigation

    ERIC Educational Resources Information Center

    Ippoliti, Dennis

    2014-01-01

    Anomaly detection is a challenging problem that has been researched within a variety of application domains. In network intrusion detection, anomaly based techniques are particularly attractive because of their ability to identify previously unknown attacks without the need to be programmed with the specific signatures of every possible attack.…

  6. DETECTION OF ENDOGENOUS TISSUE FACTOR LEVELS IN PLASMA USING THE CALIBRATED AUTOMATED THROMBOGRAM ASSAY

    PubMed Central

    Ollivier, Veronique; Wang, Jianguo; Manly, David; Machlus, Kellie R.; Wolberg, Alisa S.; Jandrot-Perrus, Martine; Mackman, Nigel

    2009-01-01

    Background: The calibrated automated thrombogram (CAT) assay measures thrombin generation in plasma. Objective: To use the CAT assay to detect endogenous tissue factor (TF) in recalcified platelet-rich plasma (PRP) and platelet-free plasma (PFP). Methods: Blood from healthy volunteers was collected into citrate and incubated at 37°C with or without lipopolysaccharide (LPS) for 5 hours. PRP and PFP were prepared and clotting was initiated by recalcification. Thrombin generation was measured using the CAT assay. Results: The lag time (LT) was significantly shortened in PRP prepared from LPS-treated blood compared with untreated blood (10 ± 3 min versus 20 ± 6 min), and this change was reversed by the addition of inactivated human factor VIIa. LPS stimulation did not change the peak thrombin. Similar results were observed in PFP (21 ± 4 min versus 35 ± 5 min). LPS stimulation also significantly reduced the LT of PRP and PFP derived from blood containing citrate and a factor XIIa inhibitor. Finally, a low concentration of exogenous TF shortened the LT of PFP prepared from unstimulated, citrated blood without affecting the peak thrombin. Conclusion: Changes in LT in the CAT assay can be used to monitor levels of endogenous TF in citrated plasma. PMID:19345399

  7. Object level HSI-LIDAR data fusion for automated detection of difficult targets.

    PubMed

    Kanaev, A V; Daniel, B J; Neumann, J G; Kim, A M; Lee, K R

    2011-10-10

    Data fusion from disparate sensors significantly improves automated man-made target detection performance compared to that of just an individual sensor. In particular, it can solve hyperspectral imagery (HSI) detection problems pertaining to low-radiance man-made objects and objects in shadows. We present an algorithm that fuses HSI and LIDAR data for automated detection of man-made objects. LIDAR is used to define a set of potential targets based on physical dimensions, and HSI is then used to discriminate between man-made and natural objects. The discrimination technique is a novel HSI detection concept that uses an HSI detection score localization metric capable of distinguishing between wide-area score distributions inherent to natural objects and highly localized score distributions indicative of man-made targets. A typical localization score for man-made objects was found to be around 0.5, compared with typical natural-background scores of less than 0.1. PMID:21997101

  8. Automation - Changes in cognitive demands and mental workload

    NASA Technical Reports Server (NTRS)

    Tsang, Pamela S.; Johnson, Walter W.

    1987-01-01

    The effect of partial automation on mental workloads in man/machine tasks is investigated experimentally. Subjective workload measures are obtained from six subjects after performance of a task battery comprising two manual (flight-path control, FC, and target acquisition, TA) tasks and one decisionmaking (engine failure, EF) task; the FC task was performed in both a fully manual (altitude and lateral control) mode and in a semiautomated mode (automatic latitude control). The performance results and subjective evaluations are presented in graphs and characterized in detail. The automation is shown to improve objective performance and lower subjective workload significantly in the combined FC/TA task, but not in the FC task alone or in the FC/EF task.

  9. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

    Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting mass and micro-calcification. However, automated detection of architectural distortion remains challenging, particularly with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of the analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure, which selects filter parameters depending on the thickness of the gland structure. As for post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives in the initial candidate are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database of Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
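
    A minimal sketch of a Gabor filter bank for enhancing oriented gland-like structure is given below; the adaptive selection of filter parameters from gland thickness is only hinted at through an assumed thickness-to-scale mapping and is not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma, wavelength, theta, gamma=0.5):
    """Real part of a 2-D Gabor filter at orientation theta (radians)."""
    size = int(6 * sigma) | 1                       # odd kernel size
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_response(image, thickness_px, n_orientations=16):
    """Max filter response over orientations, with scale tied to gland thickness."""
    sigma = 0.5 * thickness_px                      # assumed thickness-to-scale link
    wavelength = 2.0 * thickness_px
    responses = [convolve(image, gabor_kernel(sigma, wavelength, th))
                 for th in np.linspace(0.0, np.pi, n_orientations, endpoint=False)]
    return np.max(responses, axis=0), np.argmax(responses, axis=0)

# Tiny demo: a vertical "gland" 3 pixels thick in an otherwise empty image.
img = np.zeros((64, 64))
img[:, 30:33] = 1.0
magnitude, orientation_idx = gabor_response(img, thickness_px=3)
print("peak response:", round(float(magnitude.max()), 2))
```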

  10. Automated Detection of Eruptive Structures for Solar Eruption Prediction

    NASA Astrophysics Data System (ADS)

    Georgoulis, Manolis K.

    2012-07-01

    The problem of data processing and assimilation for solar eruption prediction is, for contemporary solar physics, more pressing than the problem of data acquisition. Although critical solar data, such as the coronal magnetic field, are still not routinely available, space-based observatories deliver diverse, high-quality information at such a high rate that a manual or semi-manual processing becomes meaningless. We discuss automated data analysis methods and explain, using basic physics, why some of them are unlikely to advance eruption prediction. From this finding we also understand why solar eruption prediction is likely to remain inherently probabilistic. We discuss some promising eruption prediction measures and report on efforts to adapt them for use with high-resolution, high-cadence photospheric and coronal data delivered by the Solar Dynamics Observatory. Concluding, we touch on the problem of physical understanding and synthesis of different results: combining different measures inferred by different data sets is a yet-to-be-done exercise that, however, presents our best opportunity of realizing benefits in solar eruption prediction via a meaningful, targeted assimilation of solar data.

  11. Weld line detection and process control for welding automation

    NASA Astrophysics Data System (ADS)

    Yang, Sang-Min; Cho, Man-Ho; Lee, Ho-Young; Cho, Taik-Dong

    2007-03-01

    Welding has been widely used as a process to join metallic parts. However, because of hazardous working conditions, workers tend to avoid this task. Key techniques for achieving automation are recognition of the joint line and control of the welding process. A CCD (charge coupled device) camera with a laser stripe was applied to enhance automatic weld seam tracking in GMAW (gas metal arc welding). The adaptive Hough transformation, having an on-line processing ability, was used to extract laser stripes and to obtain specific weld points. The three-dimensional information obtained from the vision system made it possible to generate the weld torch path and to obtain information such as the width and depth of the weld line. In this study, a neural network based on the generalized delta rule algorithm was adapted to control GMAW process parameters, such as welding speed, arc voltage and wire feeding speed. The width and depth of the weld joint have been selected as neurons in the input layer of the neural-network algorithm. The input variables, the width and depth of the weld joint, are determined by image information. The voltage, weld speed and wire feed rate are represented as the neurons in the output layer. The neural-network learning parameters used for the welding application are as follows: learning ratio 0.5, momentum ratio 0.7, the number of hidden layers 2 and the number of hidden units 8. These parameters have a significant influence on the weld quality.
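
    The reported settings (generalized delta rule, learning ratio 0.5, momentum ratio 0.7, 2 hidden layers, 8 hidden units) can be mapped onto a small backpropagation network as sketched below; the training pairs are invented, the data are assumed to be scaled to [0, 1], and the reading of "8 hidden units" as 8 units per layer is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Inputs: joint width and depth (scaled to [0, 1]); outputs: voltage, speed, feed (scaled).
X = np.array([[0.0, 0.0], [0.25, 0.25], [0.5, 0.5], [0.75, 0.75], [1.0, 1.0]])
Y = np.array([[0.0, 1.0, 0.0], [0.25, 0.75, 0.25], [0.5, 0.5, 0.5],
              [0.75, 0.25, 0.75], [1.0, 0.0, 1.0]])

sizes = [2, 8, 8, 3]                       # input layer, two hidden layers, output layer
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
V = [np.zeros_like(w) for w in W]          # momentum terms
eta, alpha = 0.5, 0.7                      # learning ratio and momentum ratio

for _ in range(5000):
    # Forward pass through the sigmoid layers.
    acts = [X]
    for w in W:
        acts.append(sigmoid(acts[-1] @ w))
    # Backward pass (generalized delta rule with momentum).
    delta = (acts[-1] - Y) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        grad = acts[i].T @ delta
        V[i] = alpha * V[i] - eta * grad
        if i > 0:                          # propagate delta using the pre-update weights
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        W[i] += V[i]

a = X
for w in W:
    a = sigmoid(a @ w)
print("final mean squared error:", round(float(np.mean((a - Y) ** 2)), 4))
```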

  12. Automated detection and location of indications in eddy current signals

    DOEpatents

    Brudnoy, David M.; Oppenlander, Jane E.; Levy, Arthur J.

    2000-01-01

    A computer implemented information extraction process that locates and identifies eddy current signal features in digital point-ordered signals, signals representing data from inspection of test materials, by enhancing the signal features relative to signal noise, detecting features of the signals, verifying the location of the signal features that can be known in advance, and outputting information about the identity and location of all detected signal features.

  13. Automated Detection and Location of Indications in Eddy Current Signals

    SciTech Connect

    Brudnoy, David M.; Oppenlander, Jane E.; Levy, Arthur J.

    1998-06-30

    A computer implemented information extraction process that locates and identifies eddy current signal features in digital point-ordered signals, said signals representing data from inspection of test materials, by enhancing the signal features relative to signal noise, detecting features of the signals, verifying the location of the signal features that can be known in advance, and outputting information about the identity and location of all detected signal features.

  14. ASTRiDE: Automated Streak Detection for Astronomical Images

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Won

    2016-05-01

    ASTRiDE detects streaks in astronomical images using a "border" of each object (i.e. "boundary-tracing" or "contour-tracing") and their morphological parameters. Fast moving objects such as meteors, satellites, near-Earth objects (NEOs), or even cosmic rays can leave streak-like traces in the images; ASTRiDE can detect not only long streaks but also relatively short or curved streaks.

  15. Automated detection of radiology reports that document non-routine communication of critical or significant results.

    PubMed

    Lakhani, Paras; Langlotz, Curtis P

    2010-12-01

    The purpose of this investigation is to develop an automated method to accurately detect radiology reports that indicate non-routine communication of critical or significant results. Such a classification system would be valuable for performance monitoring and accreditation. Using a database of 2.3 million free-text radiology reports, a rule-based query algorithm was developed after analyzing hundreds of radiology reports that indicated communication of critical or significant results to a healthcare provider. This algorithm consisted of words and phrases used by radiologists to indicate such communications combined with specific handcrafted rules. This algorithm was iteratively refined and retested on hundreds of reports until the precision and recall did not significantly change between iterations. The algorithm was then validated on the entire database of 2.3 million reports, excluding those reports used during the testing and refinement process. Human review was used as the reference standard. The accuracy of this algorithm was determined using precision, recall, and F measure. Confidence intervals were calculated using the adjusted Wald method. The developed algorithm for detecting critical result communication has a precision of 97.0% (95% CI, 93.5-98.8%), recall 98.2% (95% CI, 93.4-100%), and F measure of 97.6% (β=1). Our query algorithm is accurate for identifying radiology reports that contain non-routine communication of critical or significant results. This algorithm can be applied to a radiology reports database for quality control purposes and help satisfy accreditation requirements. PMID:19826871
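
    A minimal sketch of a rule-based query of this kind, together with the precision/recall/F-measure evaluation, is shown below; the phrase patterns and the exclusion rule are hypothetical stand-ins for the handcrafted rules developed in the study.

```python
import re

COMMUNICATION_PHRASES = [
    r"results (were|was) (discussed with|communicated to|called to)",
    r"findings (were|was) (discussed with|relayed to|phoned to)",
    r"critical result.* (communicated|reported) to",
]
EXCLUSIONS = [r"no communication was (made|necessary)"]   # handcrafted negative rule (assumed)

def flags_communication(report: str) -> bool:
    """Return True if the report text contains communication-of-results language."""
    text = report.lower()
    if any(re.search(p, text) for p in EXCLUSIONS):
        return False
    return any(re.search(p, text) for p in COMMUNICATION_PHRASES)

def precision_recall_f(predicted, truth, beta=1.0):
    tp = sum(p and t for p, t in zip(predicted, truth))
    fp = sum(p and not t for p, t in zip(predicted, truth))
    fn = sum(t and not p for p, t in zip(predicted, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = ((1 + beta**2) * precision * recall / (beta**2 * precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

reports = ["Findings were relayed to Dr. Smith at 14:05.",
           "Unremarkable chest radiograph."]
truth = [True, False]
pred = [flags_communication(r) for r in reports]
print(pred, precision_recall_f(pred, truth))
```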

  16. Automated Detection of Heuristics and Biases among Pathologists in a Computer-Based System

    ERIC Educational Resources Information Center

    Crowley, Rebecca S.; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia

    2013-01-01

    The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to…

  17. Detection of anti-salmonella flgk antibodies in chickens by automated capillary immunoassay

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Western blot is a very useful tool to identify specific protein, but is tedious, labor-intensive and time-consuming. An automated "Simple Western" assay has recently been developed that enables the protein separation, blotting and detection in an automatic manner. However, this technology has not ...

  18. An automated screening technique for the detection of sickle-cell haemoglobin

    PubMed Central

    Canning, D. M.; Crane, R. S.; Huntsman, R. G.; Yawson, G. I.

    1972-01-01

    An automated technique is described which is capable of detecting sickle-cell haemoglobin and differentiating the sickle-cell trait from sickle-cell anaemia. The method is based upon the Itano solubility test and utilizes Technicon equipment. PMID:5028640

  19. Automated Detection of Lupus White Matter Lesions in MRI.

    PubMed

    Roura, Eloy; Sarbu, Nicolae; Oliver, Arnau; Valverde, Sergi; González-Villà, Sandra; Cervera, Ricard; Bargalló, Núria; Lladó, Xavier

    2016-01-01

    Brain magnetic resonance imaging provides detailed information which can be used to detect and segment white matter lesions (WML). In this work we propose an approach to automatically segment WML in Lupus patients by using T1w and fluid-attenuated inversion recovery (FLAIR) images. Lupus WML appear as small focal abnormal tissue observed as hyperintensities in the FLAIR images. The quantification of these WML is a key factor for the stratification of lupus patients and therefore both lesion detection and segmentation play an important role. In our approach, the T1w image is first used to classify the three main tissues of the brain, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), while the FLAIR image is then used to detect focal WML as outliers of its GM intensity distribution. A set of post-processing steps based on lesion size, tissue neighborhood, and location are used to refine the lesion candidates. The proposal is evaluated on 20 patients, presenting qualitative and quantitative results in terms of precision and sensitivity of lesion detection [True Positive Rate (62%) and Positive Prediction Value (80%), respectively] as well as segmentation accuracy [Dice Similarity Coefficient (72%)]. Obtained results illustrate the validity of the approach to automatically detect and segment lupus lesions. In addition, our approach is publicly available as an SPM8/12 toolbox extension with a simple parameter configuration. PMID:27570507

  20. Automated detection of periventricular veins on 7 T brain MRI

    NASA Astrophysics Data System (ADS)

    Kuijf, Hugo J.; Bouvy, Willem H.; Zwanenburg, Jaco J. M.; Viergever, Max A.; Biessels, Geert Jan; Vincken, Koen L.

    2015-03-01

    Cerebral small vessel disease is common in elderly persons and a leading cause of cognitive decline, dementia, and acute stroke. With the introduction of ultra-high field strength 7.0T MRI, it is possible to visualize small vessels in the brain. In this work, a proof-of-principle study is conducted to assess the feasibility of automatically detecting periventricular veins. Periventricular veins are organized in a fan-pattern and drain venous blood from the brain towards the caudate vein of Schlesinger, which is situated along the lateral ventricles. Just outside this vein, a region-of-interest (ROI) through which all periventricular veins must cross is defined. Within this ROI, a combination of the vesselness filter, tubular tracking, and hysteresis thresholding is applied to locate periventricular veins. All detected locations were evaluated by an expert human observer. The results showed a positive predictive value of 88% and a sensitivity of 95% for detecting periventricular veins. The proposed method shows good results in detecting periventricular veins in the brain on 7.0T MR images. Compared to previous works, that only use a 1D or 2D ROI and limited image processing, our work presents a more comprehensive definition of the ROI, advanced image processing techniques to detect periventricular veins, and a quantitative analysis of the performance. The results of this proof-of-principle study are promising and will be used to assess periventricular veins on 7.0T brain MRI.
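
    A simplified 2-D sketch of the detection chain (vesselness filtering followed by hysteresis thresholding inside a ROI) is given below; the ROI mask, thresholds, and synthetic test image are assumptions, and the tubular-tracking step is omitted.

```python
import numpy as np
from skimage.filters import frangi, apply_hysteresis_threshold

def detect_vein_candidates(slice_2d, roi_mask, low=0.02, high=0.10):
    """Return a binary map of vein-like structures inside the ROI."""
    vesselness = frangi(slice_2d, black_ridges=False)   # respond to bright tubular structures
    vesselness[~roi_mask] = 0.0                         # restrict to the periventricular ROI
    return apply_hysteresis_threshold(vesselness, low, high)

# Synthetic example: a faint bright line crossing a band-shaped ROI.
img = np.random.normal(0.0, 0.01, (128, 128))
img[:, 64] += 0.2                                       # a "vein" running vertically
roi = np.zeros_like(img, dtype=bool)
roi[:, 50:80] = True
candidates = detect_vein_candidates(img, roi)
print("candidate vein pixels:", int(candidates.sum()))
```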

  1. Automated Detection of Lupus White Matter Lesions in MRI

    PubMed Central

    Roura, Eloy; Sarbu, Nicolae; Oliver, Arnau; Valverde, Sergi; González-Villà, Sandra; Cervera, Ricard; Bargalló, Núria; Lladó, Xavier

    2016-01-01

    Brain magnetic resonance imaging provides detailed information which can be used to detect and segment white matter lesions (WML). In this work we propose an approach to automatically segment WML in Lupus patients by using T1w and fluid-attenuated inversion recovery (FLAIR) images. Lupus WML appear as small focal abnormal tissue observed as hyperintensities in the FLAIR images. The quantification of these WML is a key factor for the stratification of lupus patients and therefore both lesion detection and segmentation play an important role. In our approach, the T1w image is first used to classify the three main tissues of the brain, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), while the FLAIR image is then used to detect focal WML as outliers of its GM intensity distribution. A set of post-processing steps based on lesion size, tissue neighborhood, and location are used to refine the lesion candidates. The proposal is evaluated on 20 patients, presenting qualitative and quantitative results in terms of precision and sensitivity of lesion detection [True Positive Rate (62%) and Positive Prediction Value (80%), respectively] as well as segmentation accuracy [Dice Similarity Coefficient (72%)]. Obtained results illustrate the validity of the approach to automatically detect and segment lupus lesions. In addition, our approach is publicly available as an SPM8/12 toolbox extension with a simple parameter configuration. PMID:27570507

  2. Characterizing interplanetary shocks for development and optimization of an automated solar wind shock detection algorithm

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.

    2014-06-01

    Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
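
    The kind of threshold-based jump criteria being optimized above can be expressed as in the sketch below; the specific density, field, temperature, and speed thresholds shown are hypothetical, not the optimized values from the study.

```python
def is_shock_candidate(up, down, criteria):
    """Compare downstream/upstream solar-wind parameters against jump thresholds.

    up, down: dicts with keys 'n' (density), 'v' (speed), 'T' (temperature), 'B' (|B|).
    criteria: minimum downstream/upstream ratios, plus a minimum speed jump in km/s.
    """
    return (down["n"] / up["n"] >= criteria["n_ratio"] and
            down["B"] / up["B"] >= criteria["b_ratio"] and
            down["T"] / up["T"] >= criteria["t_ratio"] and
            down["v"] - up["v"] >= criteria["dv_kms"])

criteria = {"n_ratio": 1.2, "b_ratio": 1.2, "t_ratio": 1.3, "dv_kms": 20.0}  # assumed values
upstream = {"n": 5.0, "v": 380.0, "T": 5e4, "B": 5.0}
downstream = {"n": 9.0, "v": 430.0, "T": 1.4e5, "B": 8.5}
print("shock candidate:", is_shock_candidate(upstream, downstream, criteria))
```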

  3. A microcomputer program for automated neuronal spike detection and analysis.

    PubMed

    Soto, E; Manjarrez, E; Vega, R

    1997-05-01

    A system for on-line spike detection and analysis, based on an IBM PC/AT compatible computer, written in TURBO PASCAL 6.0, and using commercially available analog-to-digital hardware, is described here. Spikes are detected by an adaptive threshold which varies as a function of the signal mean and its variability. Since the threshold value is determined automatically by signal-to-noise ratio analysis, the user is not actively involved in controlling its level. This program has been reliably used for the detection and analysis of the spike discharge of vestibular system afferent neurons. It generates the interval-joint distribution graph, the interval histogram, the autocorrelation function, the autocorrelation histogram, and phase-space graphs, thus providing a complete set of graphical and statistical data for the characterization of the dynamics of neuronal spike activity. Data can be exported to other software such as Excel, SigmaPlot, and MATLAB. PMID:9291011
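
    In the spirit of the adaptive threshold described above (set from the signal mean and its variability), a minimal spike-detection sketch follows; the 4-sigma factor and the refractory window are assumptions.

```python
import numpy as np

def detect_spikes(signal, k=4.0, refractory=30):
    """Return sample indices where the signal exceeds mean + k * std."""
    threshold = signal.mean() + k * signal.std()
    crossings = np.flatnonzero(signal > threshold)
    spikes, last = [], -refractory
    for idx in crossings:                 # enforce a refractory gap between detected spikes
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes), threshold

rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 5000)
trace[[500, 1500, 3200]] += 12.0          # three injected "action potentials"
spikes, thr = detect_spikes(trace)
print("threshold:", round(float(thr), 2), "spike indices:", spikes.tolist())
```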

  4. PCA method for automated detection of mispronounced words

    NASA Astrophysics Data System (ADS)

    Ge, Zhenhao; Sharma, Sudhendu R.; Smith, Mark J. T.

    2011-06-01

    This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principal Component Analysis (PCA). It is hierarchical, with each successive step refining the estimate to classify the test word as being either mispronounced or correct. Preprocessing before detection, such as normalization and time-scale modification, is implemented to guarantee uniformity of the feature vectors input to the detection system. The performance using various features, including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), is compared and evaluated. Best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with Hidden Markov Models (HMMs), which are used pervasively in recognition applications, this approach is computationally efficient and effective when training data is limited.
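
    A minimal sketch of a PCA-based check is given below: feature vectors far from the subspace spanned by correctly pronounced examples are flagged. The feature extraction (e.g. MFCC computation and time-scale normalization) is not shown, and the synthetic data and decision rule are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
loadings = rng.normal(0.0, 1.0, (3, 60))                 # hidden structure of the "correct" class
correct = rng.normal(0.0, 1.0, (40, 3)) @ loadings + rng.normal(0.0, 0.05, (40, 60))
pca = PCA(n_components=3).fit(correct)

def subspace_error(x):
    """Distance of a feature vector from the 'correct pronunciation' PCA subspace."""
    x_hat = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(np.linalg.norm(x - x_hat))

train_errors = [subspace_error(v) for v in correct]
threshold = 1.5 * np.percentile(train_errors, 95)        # assumed decision rule

test_ok = rng.normal(0.0, 1.0, 3) @ loadings + rng.normal(0.0, 0.05, 60)
test_bad = rng.normal(0.0, 1.0, 60)                      # unrelated to the learned structure
for name, vec in [("in-class", test_ok), ("mispronounced", test_bad)]:
    err = subspace_error(vec)
    print(f"{name}: error={err:.2f} flagged={err > threshold}")
```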

  5. An automated computer misuse detection system for UNICOS

    SciTech Connect

    Jackson, K.A.; Neuman, M.C.; Simmonds, D.D.; Stallings, C.A.; Thompson, J.L.; Christoph, G.G.

    1994-09-27

    An effective method for detecting computer misuse is the automatic monitoring and analysis of on-line user activity. This activity is reflected in the system audit record, in the system vulnerability posture, and in other evidence found through active testing of the system. During the last several years we have implemented an automatic misuse detection system at Los Alamos. This is the Network Anomaly Detection and Intrusion Reporter (NADIR). We are currently expanding NADIR to include processing of the Cray UNICOS operating system. This new component is called the UNICOS Realtime NADIR, or UNICORN. UNICORN summarizes user activity and system configuration in statistical profiles. It compares these profiles to expert rules that define security policy and improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. The first phase of UNICORN development is nearing completion, and will be operational in late 1994.

  6. Automated Detection of Anomalous Shipping Manifests to Identify Illicit Trade

    SciTech Connect

    Sanfilippo, Antonio P.; Chikkagoudar, Satish

    2013-11-12

    We describe an approach to analyzing trade data which uses clustering to detect similarities across shipping manifest records, classification to evaluate clustering results and categorize new unseen shipping data records, and visual analytics to support situation awareness in dynamic decision making, helping to monitor and warn against the movement of radiological threat materials through search, analysis, and forecasting capabilities. The evaluation of clustering results through classification and systematic inspection of the clusters shows that the clusters have strong semantic cohesion and offer novel ways to detect transactions related to nuclear smuggling.

  7. Automated Detection of Ocular Alignment with Binocular Retinal Birefringence Scanning

    NASA Astrophysics Data System (ADS)

    Hunter, David G.; Shah, Ankoor S.; Sau, Soma; Nassif, Deborah; Guyton, David L.

    2003-06-01

    We previously developed a retinal birefringence scanning (RBS) device to detect eye fixation. The purpose of this study was to determine whether a new binocular RBS (BRBS) instrument can detect simultaneous fixation of both eyes. Control (nonmyopic and myopic) and strabismic subjects were studied by use of BRBS at a fixation distance of 45 cm. Binocularity (the percentage of measurements with bilateral fixation) was determined from the BRBS output. All nonstrabismic subjects with good quality signals had binocularity >75%. Binocularity averaged 5% in four subjects with strabismus (range of 0-20%). BRBS may potentially be used to screen individuals for abnormal eye alignment.

  8. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  9. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research. PMID:19053496

  10. Assessing bat detectability and occupancy with multiple automated echolocation detectors

    USGS Publications Warehouse

    Gorresen, P.M.; Miles, A.C.; Todd, C.M.; Bonaccorso, F.J.; Weller, T.J.

    2008-01-01

    Occupancy analysis and its ability to account for differential detection probabilities is important for studies in which detecting echolocation calls is used as a measure of bat occurrence and activity. We examined the feasibility of remotely acquiring bat encounter histories to estimate detection probability and occupancy. We used echolocation detectors coupled to digital recorders operating at a series of proximate sites on consecutive nights in 2 trial surveys for the Hawaiian hoary bat (Lasiurus cinereus semotus). Our results confirmed that the technique is readily amenable for use in occupancy analysis. We also conducted a simulation exercise to assess the effects of sampling effort on parameter estimation. The results indicated that the precision and bias of parameter estimation were often more influenced by the number of sites sampled than number of visits. Acceptable accuracy often was not attained until at least 15 sites or 15 visits were used to estimate detection probability and occupancy. The method has significant potential for use in monitoring trends in bat activity and in comparative studies of habitat use. © 2008 American Society of Mammalogists.

  11. Automated detection of gait initiation and termination using wearable sensors.

    PubMed

    Novak, Domen; Reberšek, Peter; De Rossi, Stefano Marco Maria; Donati, Marco; Podobnik, Janez; Beravs, Tadej; Lenzi, Tommaso; Vitiello, Nicola; Carrozza, Maria Chiara; Munih, Marko

    2013-12-01

    This paper presents algorithms for detection of gait initiation and termination using wearable inertial measurement units and pressure-sensitive insoles. Body joint angles, joint angular velocities, ground reaction force and center of plantar pressure of each foot are obtained from these sensors and input into supervised machine learning algorithms. The proposed initiation detection method recognizes two events: gait onset (an anticipatory movement preceding foot lifting) and toe-off. The termination detection algorithm segments gait into steps, measures the signals over a buffer at the beginning of each step, and determines whether this measurement belongs to the final step. The approach is validated with 10 subjects at two gait speeds, using within-subject and subject-independent cross-validation. Results show that gait initiation can be detected timely and accurately, with few errors in the case of within-subject cross-validation and overall good performance in subject-independent cross-validation. Gait termination can be predicted in over 80% of trials well before the subject comes to a complete stop. Results also show that the two sensor types are equivalent in predicting gait initiation while inertial measurement units are generally superior in predicting gait termination. Potential use of the algorithms is foreseen primarily with assistive devices such as prostheses and exoskeletons. PMID:23938085

  12. Challenges in automated detection of cervical intraepithelial neoplasia

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Yang, Shuyu; Nutter, Brian; Mitra, Sunanda; Phillips, Benny; Long, Rodney

    2007-03-01

    Cervical Intraepithelial Neoplasia (CIN) is a precursor to invasive cervical cancer, which annually accounts for about 3700 deaths in the United States and about 274,000 worldwide. Early detection of CIN is important to reduce the fatalities due to cervical cancer. While the Pap smear is the most common screening procedure for CIN, it has been proven to have a low sensitivity, requiring multiple tests to confirm an abnormality and making its implementation impractical in resource-poor regions. Colposcopy and cervicography are two diagnostic procedures available to trained physicians for non-invasive detection of CIN. However, many regions suffer from lack of skilled personnel who can precisely diagnose the bio-markers due to CIN. Automatic detection of CIN deals with the precise, objective and non-invasive identification and isolation of these bio-markers, such as the Acetowhite (AW) region, mosaicism and punctations, due to CIN. In this paper, we study and compare three different approaches, based on Mathematical Morphology (MM), Deterministic Annealing (DA) and Gaussian Mixture Models (GMM), respectively, to segment the AW region of the cervix. The techniques are compared with respect to their complexity and execution times. The paper also presents an adaptive approach to detect and remove Specular Reflections (SR). Finally, algorithms based on MM and matched filtering are presented for the precise segmentation of mosaicism and punctations from AW regions containing the respective abnormalities.

  13. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology.

    PubMed

    Roy, Mohendra; Seo, Dongmin; Oh, Sangwoo; Chae, Yeonghun; Nam, Myung-Hyun; Seo, Sungkyu

    2016-01-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings. PMID:27164146
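
    A minimal sketch in the spirit of the adaptive-threshold-plus-clustering step described above follows: a frame is thresholded against its local background and the surviving pixels are grouped into candidate micro-objects. The window size, threshold factor, minimum cluster size, and synthetic frame are assumptions; real diffraction-pattern features are not modeled.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, center_of_mass

def count_objects(frame, window=31, k=3.0, min_pixels=5):
    """Count candidate micro-objects via adaptive thresholding and connected-component clustering."""
    background = uniform_filter(frame, size=window)          # local mean background
    deviation = frame - background
    mask = deviation > k * deviation.std()                   # adaptive threshold
    labels, n = label(mask)                                   # cluster connected pixels
    sizes = np.bincount(labels.ravel())[1:]
    keep = np.flatnonzero(sizes >= min_pixels) + 1            # drop tiny noise clusters
    centers = center_of_mass(mask, labels, keep.tolist())
    return len(keep), centers

rng = np.random.default_rng(2)
frame = rng.normal(0.0, 1.0, (256, 256))
for cy, cx in [(60, 60), (120, 200), (200, 90)]:              # three synthetic "diffraction spots"
    frame[cy - 2:cy + 3, cx - 2:cx + 3] += 15.0
count, centers = count_objects(frame)
print("objects counted:", count)
```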

  14. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    PubMed Central

    Roy, Mohendra; Seo, Dongmin; Oh, Sangwoo; Chae, Yeonghun; Nam, Myung-Hyun; Seo, Sungkyu

    2016-01-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive threshold and clustering of signals. For this purpose images from the lens-free system were then used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cell lines. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings. PMID:27164146

  15. An Investigation of Automatic Change Detection for Topographic Map Updating

    NASA Astrophysics Data System (ADS)

    Duncan, P.; Smit, J.

    2012-08-01

    Changes to the landscape are constantly occurring and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured, so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting changes and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated for detecting changes is through image classification as well as spatial analysis and is focussed on urban landscapes. The major data input into this study is high resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even in the case of object-oriented image classification, generalization of techniques on a broad scale has provided inconsistent results. A solution may lie with a hybrid approach of pixel and object-oriented techniques.

  16. Left-ventricular cavity automated-border detection using an autocovariance technique in echocardiography

    NASA Astrophysics Data System (ADS)

    Morda, Louis S.; Konofagou, Elisa E.

    2005-04-01

    Left-ventricular (LV) segmentation is essential in the early detection of heart disease, where left-ventricular wall motion is being tracked in order to detect ischemia. In this paper, a new method for automated segmentation of the left-ventricular chamber is described. An autocorrelation-based technique isolates the LV cavity from the myocardial wall on 2-D slices of 3D short-axis echocardiograms. A morphological closing function and median filtering are used to generate a uniform border. The proposed segmentation technique is designed to be used in identifying the endocardial border and estimating the motion of the endocardial wall over a cardiac cycle. To this purpose, the proposed technique is particularly successful in border delineation by tracing around structures like papillary muscles and the mitral valve, which constitute the typical obstacle in LV segmentation techniques. The results using this new technique are compared to the manual detection results in short-axis views obtained at the papillary muscle level from 3D datasets in human and canine experiments in vivo. Qualitatively, the automatically-detected borders are highly comparable to the manually-detected borders enclosing regions in the left-ventricular cavity with a relative error within the range of 4.2% - 6%. The new technique constitutes, thus, a robust segmentation method for automated segmentation of endocardial borders and suitable for wall motion tracking for automated detection of ischemia.

  17. Automated Detection of Objects Based on Sérsic Profiles

    NASA Astrophysics Data System (ADS)

    Cabrera, Guillermo; Miller, C.; Harrison, C.; Vera, E.; Asahi, T.

    2011-01-01

    We present the results of a new astronomical object detection and deblending algorithm when applied to Sloan Digital Sky Survey data. Our algorithm fits PSF-convolved Sérsic profiles to elliptical isophotes of source candidates. The main advantage of our method is that it minimizes the amount and complexity of real-time user input relative to many commonly used source detection algorithms. Our results are compared with 1D radial profile Sérsic fits. Our long-term goal is to use these techniques in a mixture-model environment to leverage the speed and advantages of machine learning. This approach will have a great impact when re-processing large data-sets and data-streams from next generation telescopes, such as the LSST and the E-ELT.

  18. Toward automated detection and segmentation of aortic calcifications from radiographs

    NASA Astrophysics Data System (ADS)

    Lauze, François; de Bruijne, Marleen

    2007-03-01

    This paper aims at automatically measuring the extent of calcified plaques in the lumbar aorta from standard radiographs. Calcifications in the abdominal aorta are an important predictor for future cardiovascular morbidity and mortality. Accurate and reproducible measurement of the amount of calcified deposit in the aorta is therefore of great value in disease diagnosis and prognosis, treatment planning, and the study of drug effects. We propose a two-step approach in which first the calcifications are detected by an iterative statistical pixel classification scheme combined with aorta shape model optimization. Subsequently, the detected calcified pixels are used as the initialization for an inpainting-based segmentation. We present results on synthetic images from the inpainting-based segmentation as well as results on several X-ray images based on the two-step approach.

  19. Automated vehicle detection in forward-looking infrared imagery.

    PubMed

    Der, Sandor; Chan, Alex; Nasrabadi, Nasser; Kwon, Heesung

    2004-01-10

    We describe an algorithm for the detection and clutter rejection of military vehicles in forward-looking infrared (FLIR) imagery. The detection algorithm is designed to be a prescreener that selects regions for further analysis and uses a spatial anomaly approach that looks for target-sized regions of the image that differ in texture, brightness, edge strength, or other spatial characteristics. The features are linearly combined to form a confidence image that is thresholded to find likely target locations. The clutter rejection portion uses target-specific information extracted from training samples to reduce the false alarms of the detector. The outputs of the clutter rejecter and detector are combined by a higher-level evidence integrator to improve performance over simple concatenation of the detector and clutter rejecter. The algorithm has been applied to a large number of FLIR imagery sets, and some of these results are presented here. PMID:14735953
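
    A minimal sketch of the prescreener idea, combining a few simple spatial feature maps into a thresholded confidence image, is given below; the feature set, weights, and threshold are assumptions rather than the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel, label

def confidence_image(frame, target_size=9, weights=(0.5, 0.5)):
    """Linearly combine simple spatial feature maps into a detection confidence image."""
    brightness = uniform_filter(frame, target_size) - frame.mean()   # local brightness contrast
    edges = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
    edge_strength = uniform_filter(edges, target_size)               # local edge energy
    feats = [(f - f.mean()) / (f.std() + 1e-9) for f in (brightness, edge_strength)]
    return sum(w * f for w, f in zip(weights, feats))

rng = np.random.default_rng(3)
frame = rng.normal(0.0, 1.0, (200, 200))
frame[90:100, 140:150] += 4.0                      # a synthetic target-sized hot region
conf = confidence_image(frame)
_, n_candidates = label(conf > 3.0)                # threshold and group into candidate regions
print("candidate regions nominated:", n_candidates)
```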

  20. [Partially automated antigen determination and antibody detection with microtiter plates].

    PubMed

    Rapp, C; Weisshaar, C

    1993-01-01

    In addition to several conventional methods for the detection of red cell antigens, the use of microplates has various advantages, whether as a solid-phase assay (enzyme immunoassay) or as a native microplate. Microplates may also be used for the detection of red cell antibodies in second-generation 'pooled-cell solid-phase assays' and for antibody screening. Blood donors and patients are the two main groups to be examined in immunohematology. There are various advantages in using the microplate in blood group serology: (i) the use of microplates in blood group serology reduces costs if hardware such as sample processors and microplate readers is already available, and even if the equipment has to be purchased for this purpose only; (ii) low quantities of reagents are used in microplate assays; (iii) the application of bar codes on tubes and microplates provides the greatest security in sample identification; (iv) it is possible to investigate blood samples selectively, depending on the available software, if antibody detection is performed as a sixth test alongside anti-HIV, anti-HCV, HBsAg, lues antibodies, and ALT; and (v) recording of data is easy if electronic data processing is used. PMID:7693246

  1. Automated detection of diabetic retinopathy in retinal images

    PubMed Central

    Valverde, Carmen; García, María; Hornero, Roberto; López-Gálvez, María I

    2016-01-01

    Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in the last years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies in the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis. PMID:26953020

  2. Automated video quality measurement based on manmade object characterization and motion detection

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew; Harguess, Josh; Maltenfort, A. J.; Irvine, John; Algire, C.

    2016-05-01

    Automated video quality assessment methods have generally been based on measurements of engineering parameters such as ground sampling distance, level of blur, and noise. However, humans rate video quality using specific criteria that measure the interpretability of the video by determining the kinds of objects and activities that might be detected in the video. Given the improvements in tracking, automatic target detection, and activity characterization that have occurred in video science, it is worth considering whether new automated video assessment methods might be developed by imitating the logical steps taken by humans in evaluating scene content. This article will outline a new procedure for automatically evaluating video quality based on automated object and activity recognition, and demonstrate the method for several ground-based and maritime examples. The detection and measurement of in-scene targets makes it possible to assess video quality without relying on source metadata. A methodology is given for comparing automated assessment with human assessment. For the human assessment, objective video quality ratings can be obtained through a menu-driven, crowd-sourced scheme of video tagging, in which human participants tag objects such as vehicles and people on film clips. The size, clarity, and level of detail of features present on the tagged targets are compared directly with the Video National Image Interpretability Rating Scale (VNIIRS).

  3. Automated detection of meteors in observed image sequence

    NASA Astrophysics Data System (ADS)

    Šimberová, Stanislava; Suk, Tomáš

    2015-12-01

    We propose a new detection technique based on statistical characteristics of the images in a video sequence. These characteristics, tracked over time, make it possible to catch any bright track during the whole sequence. We applied our method to image datacubes created from camera pictures of the night sky. A meteor flying through the Earth's atmosphere leaves a light trail lasting a few seconds against the sky background. We developed a special technique to recognize this event automatically in the complete observed video sequence. For further analysis leading to precise recognition of the object, we suggest applying Fourier and Hough transforms.
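
    As a minimal illustration of flagging bright tracks from per-frame statistics, the sketch below marks candidate frames in a night-sky datacube; the frame-maximum statistic and the median/MAD baseline are simple stand-ins for the authors' statistical characteristics.

```python
import numpy as np

def bright_track_frames(datacube, k=5.0):
    """Flag frames of a night-sky datacube (n_frames, height, width) that are
    likely to contain a bright track such as a meteor trail.
    """
    # Per-frame statistic: here simply the maximum pixel value of each frame.
    stat = datacube.reshape(datacube.shape[0], -1).max(axis=1).astype(float)
    median = np.median(stat)
    mad = np.median(np.abs(stat - median)) + 1e-9   # robust spread estimate
    return np.where(stat > median + k * mad)[0]     # indices of candidate frames
```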

  4. High-Speed Observer: Automated Streak Detection in SSME Plumes

    NASA Technical Reports Server (NTRS)

    Rieckoff, T. J.; Covan, M.; O'Farrell, J. M.

    2001-01-01

    A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.

  5. Fast reversible single-step method for enhanced band contrast of polyacrylamide gels for automated detection.

    PubMed

    Ling, Wei-Li; Lua, Wai-Heng; Gan, Samuel Ken-En

    2015-05-01

    Staining of SDS-PAGE gels is commonly used in protein analysis for many downstream characterization processes. Although staining and destaining protocols can be adjusted, they can be laborious, and faint bands often become false negatives. Similarly, these faint bands hinder the automated software band detection that is necessary for quantitative analyses. To overcome these problems, we describe a single-step, rapid, and reversible method to increase band contrast in stained gels by up to 500%. Through the use of alcohols, we improved band detection and facilitated gel storage by drying the gels into compact white sheets. This method is suitable for all stained SDS-PAGE gels, including gradient gels, and is shown to improve automated band detection through enhanced band contrast. PMID:25782090

  6. Fast reversible single-step method for enhanced band contrast of polyacrylamide gels for automated detection

    PubMed Central

    Ling, Wei-Li; Lua, Wai-Heng; Gan, Samuel Ken-En

    2015-01-01

    Staining of SDS-PAGE gels is commonly used in protein analysis for many downstream characterization processes. Although staining and destaining protocols can be adjusted, they can be laborious, and faint bands often become false negatives. Similarly, these faint bands hinder the automated software band detection that is necessary for quantitative analyses. To overcome these problems, we describe a single-step, rapid, and reversible method to increase band contrast in stained gels by up to 500%. Through the use of alcohols, we improved band detection and facilitated gel storage by drying the gels into compact white sheets. This method is suitable for all stained SDS-PAGE gels, including gradient gels, and is shown to improve automated band detection through enhanced band contrast. PMID:25782090

  7. Airborne hyperspectral detection of small changes.

    PubMed

    Eismann, Michael T; Meola, Joseph; Stocker, Alan D; Beaven, Scott G; Schaum, Alan P

    2008-10-01

    Hyperspectral change detection offers a promising approach to detect objects and features of remotely sensed areas that are too difficult to find in single images, such as slight changes in land cover and the insertion, deletion, or movement of small objects, by exploiting subtle differences in the imagery over time. Methods for performing such change detection, however, must effectively maintain invariance to typically larger image-to-image changes in illumination and environmental conditions, as well as misregistration and viewing differences between image observations, while remaining sensitive to small differences in scene content. Previous research has established predictive algorithms to overcome such natural changes between images, and these approaches have recently been extended to deal with space-varying changes. The challenges to effective change detection, however, are often exacerbated in an airborne imaging geometry because of the limitations in control over flight conditions and geometry, and some of the recent change detection algorithms have not been demonstrated in an airborne setting. We describe the airborne implementation and relative performance of such methods. We specifically attempt to characterize the effects of spatial misregistration on change detection performance, the efficacy of class-conditional predictors in an airborne setting, and extensions to the change detection approach, including physically motivated shadow transition classifiers and matched change filtering based on in-scene atmospheric normalization. PMID:18830283
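
    The abstract does not spell out the predictive algorithms, but a widely used member of this family is a chronochrome-style global linear predictor; the sketch below (Python/NumPy, illustrative only) predicts the test image from the reference image and scores per-pixel residuals, leaving out the class-conditional and space-varying extensions discussed above.

```python
import numpy as np

def chronochrome_change_score(x, y):
    """Score per-pixel changes between two co-registered hyperspectral images.

    x, y: arrays of shape (n_pixels, n_bands) for the reference and test image.
    A global linear predictor mapping x to y absorbs space-invariant illumination
    and atmospheric differences; large prediction residuals flag candidate changes.
    """
    xc, yc = x - x.mean(axis=0), y - y.mean(axis=0)
    cov_xx = xc.T @ xc / len(x)
    cov_yx = yc.T @ xc / len(x)
    A = cov_yx @ np.linalg.pinv(cov_xx)      # least-squares predictor, y ~ A x
    resid = yc - xc @ A.T                    # per-pixel prediction residual
    cov_r = resid.T @ resid / len(x)
    # Mahalanobis-style anomaly score of the residuals (one value per pixel).
    return np.einsum('ij,jk,ik->i', resid, np.linalg.pinv(cov_r), resid)
```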

  8. Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.

    2009-01-01

    Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
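
    SIBYL's actual profile scanner derives its thresholds from the lidar signal and noise model; as a greatly simplified stand-in, the sketch below flags layer candidates in a single backscatter profile using a locally estimated median/MAD threshold and keeps only persistent runs of bins.

```python
import numpy as np

def find_layers(profile, window=50, k=3.0, min_len=3):
    """Flag contiguous runs of bins in a backscatter profile that exceed a
    locally estimated median + k * MAD threshold.

    profile: 1-D array of attenuated backscatter versus altitude bin.
    Returns a list of (first_bin, last_bin) layer candidates.
    """
    n = len(profile)
    above = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window)
        local = profile[lo:hi]
        baseline = np.median(local)
        noise = np.median(np.abs(local - baseline)) + 1e-12
        above[i] = profile[i] > baseline + k * noise

    # Group consecutive flagged bins and keep only persistent runs.
    layers, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                layers.append((start, i - 1))
            start = None
    if start is not None and n - start >= min_len:
        layers.append((start, n - 1))
    return layers
```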

  9. Anomalous change detection in imagery

    DOEpatents

    Theiler, James P.; Perkins, Simon J.

    2011-05-31

    A distribution-based anomaly detection platform is described that identifies a non-flat background that is specified in terms of the distribution of the data. A resampling approach is also disclosed employing scrambled resampling of the original data with one class specified by the data and the other by the explicit distribution, and solving using binary classification.

  10. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria; it was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008 and is studied in paleontology to learn about past climate change. In order to support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high resolution images from this field campaign are visualized by photogrammetric methods in the form of digital surface models (DSMs, 1 mm resolution) and orthophotos (0.5 mm resolution) to help paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained from the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end-users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to conventional RTI, the shaded approach needs no device consisting of a hardware system with LED lights or a body to rotate the light source around the object. The gray value for a given shaded pixel is related to the angle between the light source and the surface normal at that position; brighter values correspond to slope surfaces facing the light source. Increasing the zenith angle results in internal shading all over the reef surface. In total, the oyster reef surface is covered by 81 DSMs of 3 m x 2 m each. Their surface was illuminated by moving the virtual sun every 30 degrees (12 azimuth angles from 20-350) and every 20 degrees (4 zenith angles from 20-80). This technique provides paleontologists with an interactive approach to virtually inspect the oyster reef and to interpret the shell surface by changing the light source direction.
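
    The Lambert shading described above can be reproduced with a standard hillshade computation; the sketch below (Python/NumPy, assuming the DSM is a regular grid with y increasing northward and azimuth measured clockwise from north) returns a grey-value image proportional to the cosine of the angle between the surface normal and the virtual sun direction.

```python
import numpy as np

def lambert_shade(dsm, azimuth_deg, zenith_deg, cell_size=0.001):
    """Lambert-shaded relief of a DSM for a virtual sun position.

    dsm: 2-D elevation array (metres); cell_size: ground sample distance in
    metres (1 mm to match the reef DSM resolution). The grey value is the
    cosine of the angle between the illumination direction and the surface
    normal, so surfaces facing the light source appear brightest.
    """
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(dsm)        # unnormalised surface normal
    norm = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)

    az, zen = np.radians(azimuth_deg), np.radians(zenith_deg)
    # Direction towards the virtual sun (azimuth clockwise from north, y northwards).
    lx, ly, lz = np.sin(zen) * np.sin(az), np.sin(zen) * np.cos(az), np.cos(zen)

    return np.clip((nx * lx + ny * ly + nz * lz) / norm, 0.0, 1.0)

# The 12 azimuth x 4 zenith combinations described above:
# shaded = [lambert_shade(dsm, az, zen)
#           for az in range(20, 351, 30) for zen in range(20, 81, 20)]
```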

  11. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of the scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge amounts of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slides by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D con-focal image. The CAD scheme was applied to each con-focal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 con-focal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.
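
    As an illustration of the top-hat-based spot detection step, the sketch below (Python/SciPy) applies a white top-hat transform to a projected con-focal image and labels the bright residues; the spot size and threshold rule are assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy import ndimage

def detect_fish_spots(confocal_img, spot_size=5, k=4.0):
    """Detect FISH signal spots in a 2-D projected con-focal image using a
    white top-hat transform followed by a simple global threshold.
    """
    img = confocal_img.astype(float)
    # The top-hat keeps bright structures smaller than the structuring element.
    tophat = ndimage.white_tophat(img, size=spot_size)
    thresh = tophat.mean() + k * tophat.std()
    labels, n_spots = ndimage.label(tophat > thresh)
    centroids = ndimage.center_of_mass(tophat, labels, np.arange(1, n_spots + 1))
    return n_spots, centroids
```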

  12. An Automated Optimal Engagement and Attention Detection System Using Electrocardiogram

    PubMed Central

    Belle, Ashwin; Hargraves, Rosalyn Hobson; Najarian, Kayvan

    2012-01-01

    This research proposes to develop a monitoring system which uses the Electrocardiograph (ECG) as a fundamental physiological signal to analyze and predict the presence or lack of cognitive attention in individuals during task execution. The primary focus of this study is to identify the correlation between fluctuating levels of attention and their implications for the cardiac rhythm recorded in the ECG. Furthermore, Electroencephalograph (EEG) signals are also analyzed and classified for use as a benchmark for comparison with ECG analysis. Several advanced signal processing techniques have been implemented and investigated to derive multiple clandestine and informative features from both of these physiological signals. Decomposition and feature extraction are done using the Stockwell transform for the ECG signal, while the Discrete Wavelet Transform (DWT) is used for EEG. These features are then applied to various machine-learning algorithms to produce classification models that are capable of differentiating between the cases of a person being attentive and a person not being attentive. The presented results show that detection and classification of cognitive attention using ECG are fairly comparable to EEG. PMID:22924060

  13. An automated optimal engagement and attention detection system using electrocardiogram.

    PubMed

    Belle, Ashwin; Hargraves, Rosalyn Hobson; Najarian, Kayvan

    2012-01-01

    This research proposes to develop a monitoring system which uses the Electrocardiograph (ECG) as a fundamental physiological signal to analyze and predict the presence or lack of cognitive attention in individuals during task execution. The primary focus of this study is to identify the correlation between fluctuating levels of attention and their implications for the cardiac rhythm recorded in the ECG. Furthermore, Electroencephalograph (EEG) signals are also analyzed and classified for use as a benchmark for comparison with ECG analysis. Several advanced signal processing techniques have been implemented and investigated to derive multiple clandestine and informative features from both of these physiological signals. Decomposition and feature extraction are done using the Stockwell transform for the ECG signal, while the Discrete Wavelet Transform (DWT) is used for EEG. These features are then applied to various machine-learning algorithms to produce classification models that are capable of differentiating between the cases of a person being attentive and a person not being attentive. The presented results show that detection and classification of cognitive attention using ECG are fairly comparable to EEG. PMID:22924060

  14. An automated procedure for covariation-based detection of RNA structure

    SciTech Connect

    Winker, S.; Overbeek, R.; Woese, C.R.; Olsen, G.J.; Pfluger, N.

    1989-12-01

    This paper summarizes our investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. We have developed a new automated procedure that not only identifies potential bondings of secondary and tertiary structure, but also provides the covariation evidence that supports the proposed bondings, and any counter-evidence that can be detected in the known sequences. A small number of previously unknown bondings have been detected in individual RNA molecules (16S rRNA and 7S RNA) through the use of our automated procedure. Currently, we are systematically studying mitochondrial rRNA. Our goal is to detect tertiary structure within 16S rRNA and quaternary structure between 16S and 23S rRNA. Our ultimate hope is that automated covariation analysis will contribute significantly to a refined picture of ribosome structure. Our colleagues in biology have begun experiments to test certain hypotheses suggested by an examination of our program's output. These experiments involve sequencing key portions of the 23S ribosomal RNA for species in which the known 16S ribosomal RNA exhibits variation (from the dominant pattern) at the site of a proposed bonding. The hope is that the 23S ribosomal RNA of these species will exhibit corresponding complementary variation or generalized covariation. 24 refs.
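
    Covariation between alignment columns is commonly scored with mutual information; the sketch below (illustrative, and simpler than the evidence and counter-evidence bookkeeping described above) computes that score for every column pair of an RNA alignment and reports pairs above a threshold.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(col_i, col_j):
    """Covariation score (mutual information, in bits) between two alignment columns."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def covarying_pairs(alignment, threshold=0.5):
    """Return (i, j, score) for column pairs whose covariation suggests a pairing.

    alignment: list of equal-length, aligned RNA sequences.
    """
    columns = list(zip(*alignment))
    return [(i, j, mi)
            for (i, ci), (j, cj) in combinations(enumerate(columns), 2)
            if (mi := mutual_information(ci, cj)) >= threshold]
```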

  15. Advances in automated deception detection in text-based computer-mediated communication

    NASA Astrophysics Data System (ADS)

    Adkins, Mark; Twitchell, Douglas P.; Burgoon, Judee K.; Nunamaker, Jay F., Jr.

    2004-08-01

    The Internet has provided criminals, terrorists, spies, and other threats to national security a means of communication. At the same time it also provides for the possibility of detecting and tracking their deceptive communication. Recent advances in natural language processing, machine learning and deception research have created an environment where automated and semi-automated deception detection of text-based computer-mediated communication (CMC, e.g. email, chat, instant messaging) is a reachable goal. This paper reviews two methods for discriminating between deceptive and non-deceptive messages in CMC. First, Document Feature Mining uses document features or cues in CMC messages combined with machine learning techniques to classify messages according to their deceptive potential. The method, which is most useful in asynchronous applications, also allows for the visualization of potential deception cues in CMC messages. Second, Speech Act Profiling, a method for quantifying and visualizing synchronous CMC, has shown promise in aiding deception detection. The methods may be combined and are intended to be a part of a suite of tools for automating deception detection.

  16. Automating dicentric chromosome detection from cytogenetic biodosimetry data.

    PubMed

    Rogan, Peter K; Li, Yanxin; Wickramasinghe, Asanka; Subasinghe, Akila; Caminsky, Natasha; Khan, Wahab; Samarabandu, Jagath; Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H

    2014-06-01

    We present a prototype software system with sufficient capacity and speed to estimate radiation exposures in a mass casualty event by counting dicentric chromosomes (DCs) in metaphase cells from many individuals. Top-ranked metaphase cell images are segmented by classifying and defining chromosomes with an active contour gradient vector field (GVF) and by determining centromere locations along the centreline. The centreline is extracted by discrete curve evolution (DCE) skeleton branch pruning and curve interpolation. Centromere detection minimises the global width and DAPI-staining intensity profiles along the centreline. A second centromere is identified by reapplying this procedure after masking the first. Dicentrics can be identified from features that capture width and intensity profile characteristics as well as local shape features of the object contour at candidate pixel locations. The correct location of the centromere is also refined in chromosomes with sister chromatid separation. The overall algorithm has both high sensitivity (85 %) and specificity (94 %). Results are independent of the shape and structure of chromosomes in different cells, or the laboratory preparation protocol followed. The prototype software was recoded in C++/OpenCV; image processing was accelerated by data and task parallelisation with Message Passing Interface and Intel Threading Building Blocks and an asynchronous non-blocking I/O strategy. Relative to a serial process, metaphase ranking, GVF and DCE are, respectively, 100 and 300-fold faster on an 8-core desktop and 64-core cluster computers. The software was then ported to a 1024-core supercomputer, which processed 200 metaphase images each from 1025 specimens in 1.4 h. PMID:24757176

  17. Automating dicentric chromosome detection from cytogenetic biodosimetry data

    PubMed Central

    Rogan, Peter K.; Li, Yanxin; Wickramasinghe, Asanka; Subasinghe, Akila; Caminsky, Natasha; Khan, Wahab; Samarabandu, Jagath; Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.

    2014-01-01

    We present a prototype software system with sufficient capacity and speed to estimate radiation exposures in a mass casualty event by counting dicentric chromosomes (DCs) in metaphase cells from many individuals. Top-ranked metaphase cell images are segmented by classifying and defining chromosomes with an active contour gradient vector field (GVF) and by determining centromere locations along the centreline. The centreline is extracted by discrete curve evolution (DCE) skeleton branch pruning and curve interpolation. Centromere detection minimises the global width and DAPI-staining intensity profiles along the centreline. A second centromere is identified by reapplying this procedure after masking the first. Dicentrics can be identified from features that capture width and intensity profile characteristics as well as local shape features of the object contour at candidate pixel locations. The correct location of the centromere is also refined in chromosomes with sister chromatid separation. The overall algorithm has both high sensitivity (85 %) and specificity (94 %). Results are independent of the shape and structure of chromosomes in different cells, or the laboratory preparation protocol followed. The prototype software was recoded in C++/OpenCV; image processing was accelerated by data and task parallelisation with Message Passing Interface and Intel Threading Building Blocks and an asynchronous non-blocking I/O strategy. Relative to a serial process, metaphase ranking, GVF and DCE are, respectively, 100 and 300-fold faster on an 8-core desktop and 64-core cluster computers. The software was then ported to a 1024-core supercomputer, which processed 200 metaphase images each from 1025 specimens in 1.4 h. PMID:24757176

  18. Automated Retinal Image Analysis for Evaluation of Focal Hyperpigmentary Changes in Intermediate Age-Related Macular Degeneration

    PubMed Central

    Schmitz-Valckenberg, Steffen; Göbel, Arno P.; Saur, Stefan C.; Steinberg, Julia S.; Thiele, Sarah; Wojek, Christian; Russmann, Christoph; Holz, Frank G.; for the MODIAMD-Study Group

    2016-01-01

    Purpose To develop and evaluate a software tool for automated detection of focal hyperpigmentary changes (FHC) in eyes with intermediate age-related macular degeneration (AMD). Methods Color fundus (CFP) and autofluorescence (AF) photographs of 33 eyes with FHC of 28 AMD patients (mean age 71 years) from the prospective longitudinal natural history MODIAMD-study were included. Fully automated and semiautomated registration of baseline to corresponding follow-up images was evaluated. Following the manual circumscription of individual FHC (four different readings by two readers), a machine-learning algorithm was evaluated for automatic FHC detection. Results The overall pixel distance error for the semiautomated registration (CFP follow-up to CFP baseline: median 5.7; CFP to AF images from the same visit: median 6.5) was larger than for the automated image registration (4.5 and 5.7; P < 0.001 and P < 0.001). The total number of manually circumscribed objects and the corresponding total size varied between 637 and 1163 and between 520,848 and 924,860 pixels, respectively. Performance of the learning algorithms showed a sensitivity of 96% at a specificity level of 98% using information from both CFP and AF images and defining small areas of FHC (“speckle appearance”) as “neutral.” Conclusions FHC as a high-risk feature for progression of AMD to late stages can be automatically assessed at different time points with similar sensitivity and specificity as compared to manual outlining. Upon further development of the research prototype, this approach may be useful both in natural history and interventional large-scale studies for a more refined classification and risk assessment of eyes with intermediate AMD. Translational Relevance Automated FHC detection opens the door for a more refined and detailed classification and risk assessment of eyes with intermediate AMD in both natural history and future interventional studies. PMID:26966639

  19. Image Change Detection via Ensemble Learning

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of changes in geography can identify regions of population growth, change in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though these techniques are simple, their applicability is often very limited. Recently, the use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier which has higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers in the ensemble create a mixture of experts in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision-tree-based classifiers to identify regions of change in SAR images collected over a region in the Staten Island, New York, area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
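
    As a hedged illustration of the ensemble idea, the sketch below trains a random forest (standing in for the unspecified ensemble of weak decision trees) on simple per-pixel bitemporal features; the feature choices, tree depth, and training mask are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_change_ensemble(img_t1, img_t2, change_mask, n_trees=100):
    """Train an ensemble of shallow decision trees for bitemporal SAR change detection.

    img_t1, img_t2: co-registered intensity images (2-D arrays).
    change_mask: 2-D array with 1 = changed pixel, 0 = unchanged (training labels).
    """
    eps = 1e-6
    # Simple per-pixel features: both intensities, their difference and log-ratio.
    feats = np.stack([img_t1.ravel(),
                      img_t2.ravel(),
                      (img_t2 - img_t1).ravel(),
                      np.log((img_t2 + eps) / (img_t1 + eps)).ravel()], axis=1)
    clf = RandomForestClassifier(n_estimators=n_trees, max_depth=5)
    clf.fit(feats, change_mask.ravel())
    return clf, feats

# Usage: clf, feats = train_change_ensemble(img_t1, img_t2, change_mask)
#        change_map = clf.predict(feats).reshape(img_t1.shape)
```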

  20. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol☆

    PubMed Central

    Cardenas, Valerie A.; Price, Mathew; Infante, M. Alejandra; Moore, Eileen M.; Mattson, Sarah N.; Riley, Edward P.; Fein, George

    2014-01-01

    Objective To validate an automated cerebellar segmentation method based on active shape and appearance modeling and then segment the cerebellum on images acquired from adolescents with histories of prenatal alcohol exposure (PAE) and non-exposed controls (NC). Methods Automated segmentations of the total cerebellum, right and left cerebellar hemispheres, and three vermal lobes (anterior, lobules I–V; superior posterior, lobules VI–VII; inferior posterior, lobules VIII–X) were compared to expert manual labelings on 20 subjects, studied twice, that were not used for model training. The method was also used to segment the cerebellum on 11 PAE and 9 NC adolescents. Results The test–retest intraclass correlation coefficients (ICCs) of the automated method were greater than 0.94 for all cerebellar volume and mid-sagittal vermal area measures, comparable to or better than the test–retest ICCs for manual measurement (all ICCs > 0.92). The ICCs computed on all four cerebellar measurements (manual and automated measures on the repeat scans) to assess comparability were above 0.97 for non-vermis parcels, and above 0.89 for vermis parcels. When applied to patients, the automated method detected smaller cerebellar volumes and mid-sagittal areas in the PAE group compared to controls (p < 0.05 for all regions except the superior posterior lobe, consistent with prior studies). Discussion These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor intensive manual delineation, even with a very small sample. PMID:25061566

  1. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
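
    The Dice score used above to report segmentation agreement is straightforward to compute; a minimal version is sketched below.

```python
import numpy as np

def dice_score(auto_mask, manual_mask):
    """Dice similarity coefficient between an automated and a manual segmentation.

    Both inputs are boolean (or 0/1) arrays of the same shape; 1.0 means perfect overlap.
    """
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    intersection = np.logical_and(auto, manual).sum()
    denom = auto.sum() + manual.sum()
    return 2.0 * intersection / denom if denom else 1.0
```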

  2. Automated Region of Interest Detection of Fluorescent Neurons for Optogenetic Stimulation

    NASA Astrophysics Data System (ADS)

    Mishler, Jonathan; Plenz, Dietmar

    With the emergence of optogenetics, light has been used to simultaneously stimulate and image neural clusters in vivo for the purpose of understanding neural dynamics. Spatial light modulators (SLMs) have become the choice method for the targeted stimulation of neural clusters, offering unprecedented spatio-temporal resolution. By first imaging, and subsequently selecting the desired neurons for stimulation, SLMs can reliably stimulate those regions of interest (ROIs). However, as the cluster size grows, manually selecting the neurons becomes cumbersome and inefficient. Automated ROI detectors for this purpose have been developed, but rely on neural fluorescent spiking for detection, requiring several thousand imaging frames. To overcome this limitation, we present an automated ROI detection algorithm utilizing neural geometry and stationary information from a few hundred imaging frames that can be adjusted for sensitivity.

  3. Effect of Using Automated Auditing Tools on Detecting Compliance Failures in Unmanaged Processes

    NASA Astrophysics Data System (ADS)

    Doganata, Yurdaer; Curbera, Francisco

    The effect of using automated auditing tools to detect compliance failures in unmanaged business processes is investigated. In the absence of a process execution engine, compliance of an unmanaged business process is tracked by using an auditing tool developed based on business provenance technology or employing auditors. Since budget constraints limit employing auditors to evaluate all process instances, a methodology is devised to use both expert opinion on a limited set of process instances and the results produced by fallible automated audit machines on all process instances. An improvement factor is defined based on the average number of non-compliant process instances detected and it is shown that the improvement depends on the prevalence of non-compliance in the process as well as the sensitivity and the specificity of the audit machine.

  4. Effects of Response Bias and Judgment Framing on Operator Use of an Automated Aid in a Target Detection Task

    ERIC Educational Resources Information Center

    Rice, Stephen; McCarley, Jason S.

    2011-01-01

    Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids' misses and false alarms or is the result of inherent differences in…

  5. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose, noncontrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
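
    A simplified version of the calcification measurement step might look like the sketch below (Python/SciPy): threshold at 160 HU inside the precomputed aorta mask, discard tiny components as noise, and report a volume score and a mass-like score. The Agatston score's per-lesion density weighting and per-slice area rules are not reproduced here, and the minimum component size is an assumption.

```python
import numpy as np
from scipy import ndimage

def aortic_calcium_scores(ct_hu, aorta_mask, voxel_volume_mm3, threshold_hu=160):
    """Detect calcified voxels inside a precomputed aorta mask and report simple scores.

    ct_hu: 3-D CT volume in Hounsfield units; aorta_mask: boolean aorta segmentation.
    """
    candidates = (ct_hu >= threshold_hu) & aorta_mask
    # Drop very small connected components, which are likely noise in low-dose scans.
    labels, n = ndimage.label(candidates)
    sizes = ndimage.sum(candidates, labels, np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 3))

    volume_score = keep.sum() * voxel_volume_mm3                  # mm^3
    mass_score = (ct_hu[keep].mean() * volume_score / 1000.0      # HU-weighted proxy
                  if keep.any() else 0.0)
    return volume_score, mass_score
```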

  6. A Statistical Analysis of Automated and Manually Detected Fires Using Environmental Satellites

    NASA Astrophysics Data System (ADS)

    Ruminski, M. G.; McNamara, D.

    2003-12-01

    The National Environmental Satellite and Data Information Service (NESDIS) of the National Oceanic and Atmospheric Administration (NOAA) has been producing an analysis of fires and smoke over the US since 1998. This product underwent significant enhancement in June 2002 with the introduction of the Hazard Mapping System (HMS), an interactive workstation based system that displays environmental satellite imagery (NOAA Geostationary Operational Environmental Satellite (GOES), NOAA Polar Operational Environmental Satellite (POES) and National Aeronautics and Space Administration (NASA) MODIS data) and fire detects from the automated algorithms for each of the satellite sensors. The focus of this presentation is to present statistics compiled on the fire detects since November 2002. The Automated Biomass Burning Algorithm (ABBA) detects fires using GOES East and GOES West imagery. The Fire Identification, Mapping and Monitoring Algorithm (FIMMA) utilizes NOAA POES 15/16/17 imagery and the MODIS algorithm uses imagery from the MODIS instrument on the Terra and Aqua spacecraft. The HMS allows satellite analysts to inspect and interrogate the automated fire detects and the input satellite imagery. The analyst can then delete those detects that are felt to be false alarms and/or add fire points that the automated algorithms have not selected. Statistics are compiled for the number of automated detects from each of the algorithms, the number of automated detects that are deleted and the number of fire points added by the analyst for the contiguous US and immediately adjacent areas of Mexico and Canada. There is no attempt to distinguish between wildfires and control or agricultural fires. A detailed explanation of the automated algorithms is beyond the scope of this presentation. However, interested readers can find a more thorough description by going to www.ssd.noaa.gov/PS/FIRE/hms.html and scrolling down to Individual Fire Layers. For the period November 2002 thru August

  7. Detecting Concentration Changes with Cooperative Receptors

    NASA Astrophysics Data System (ADS)

    Bo, Stefano; Celani, Antonio

    2016-03-01

    Cells constantly need to monitor the state of the environment to detect changes and respond in a timely manner. The detection of concentration changes of a ligand by a set of receptors can be cast as a problem of hypothesis testing, and the cell viewed as a Neyman-Pearson detector. Within this framework, we investigate the role of receptor cooperativity in improving the cell's ability to detect changes. We find that cooperativity decreases the probability of missing a change that has occurred. This becomes especially beneficial when difficult detections have to be made. Concerning the influence of cooperativity on how fast a desired detection power is achieved, we find in general that there is an optimal value at finite levels of cooperation, even though easy discrimination tasks can be performed more rapidly by noncooperative receptors.
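
    Cast as Neyman-Pearson detection, the decision reduces to comparing a log-likelihood ratio with a threshold; the sketch below assumes a Hill-function occupancy model with independent receptors, which is a simplification of the cooperative model analyzed in the paper.

```python
import numpy as np

def log_likelihood_ratio(bound, n_receptors, c0, c1, K, h):
    """Log-likelihood ratio for deciding whether the ligand concentration
    changed from c0 to c1, given the number of bound receptors.

    Occupancy follows a Hill function with coefficient h (h > 1 mimics
    cooperativity); receptors are treated as independent Bernoulli trials.
    """
    def p_bound(c):
        return c ** h / (c ** h + K ** h)

    p0, p1 = p_bound(c0), p_bound(c1)
    return (bound * np.log(p1 / p0)
            + (n_receptors - bound) * np.log((1 - p1) / (1 - p0)))

# Neyman-Pearson detector: declare "concentration changed" when the ratio
# exceeds a threshold chosen for the desired false-alarm probability.
# changed = log_likelihood_ratio(bound, N, c0, c1, K, h) > eta
```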

  8. Indigenous people's detection of rapid ecological change.

    PubMed

    Aswani, Shankar; Lauer, Matthew

    2014-06-01

    When sudden catastrophic events occur, it becomes critical for coastal communities to detect and respond to environmental transformations because failure to do so may undermine overall ecosystem resilience and threaten people's livelihoods. We therefore asked how capable local, indigenous people are of detecting rapid ecological change following massive environmental disruptions. We assessed the direction and periodicity of experimental learning of people in the Western Solomon Islands after a tsunami in 2007. We compared the results of marine science surveys with local ecological knowledge of the benthos across 3 affected villages and 3 periods before and after the tsunami. We sought to determine how people recognize biophysical changes in the environment before and after catastrophic events such as earthquakes and tsunamis and whether people have the ability to detect ecological changes over short time scales or need longer time scales to recognize changes. Indigenous people were able to detect changes in the benthos over time. Detection levels differed between marine science surveys and local ecological knowledge sources over time, but overall patterns of statistically significant detection of change were evident for various habitats. Our findings have implications for marine conservation, coastal management policies, and disaster-relief efforts because when people are able to detect ecological changes, this, in turn, affects how they exploit and manage their marine resources. PMID:24528101

  9. Probabilistic Change Detection Framework for Analyzing Settlement Dynamics Using Very High-resolution Satellite Imagery

    SciTech Connect

    Vatsavai, Raju; Graesser, Jordan B

    2012-01-01

    Global human population growth and an increasingly urbanizing world have led to rapid changes in human settlement landscapes and patterns. Timely monitoring and assessment of these changes and dissemination of accurate information are important for policy makers, city planners, and humanitarian relief workers. Satellite imagery provides useful data for the aforementioned applications, and remote sensing can be used to identify and quantify change areas. We explore a probabilistic framework to identify changes in human settlements using very high-resolution satellite imagery. As compared to predominantly pixel-based change detection systems, which are highly sensitive to image registration errors, our grid (block) based approach is more robust to registration errors. The presented framework is an automated change detection system applicable to both panchromatic and multi-spectral imagery. The detection system provides comprehensible information about change areas, and minimizes the post-detection thresholding procedure often needed in traditional change detection algorithms.

  10. A sequential framework for image change detection.

    PubMed

    Lingg, Andrew J; Zelnio, Edmund; Garber, Fred; Rigling, Brian D

    2014-05-01

    We present a sequential framework for change detection. This framework allows us to use multiple images from reference and mission passes of a scene of interest in order to improve detection performance. It includes a change statistic that is easily updated when additional data becomes available. Detection performance using this statistic is predictable when the reference and image data are drawn from known distributions. We verify our performance prediction by simulation. Additionally, we show that detection performance improves with additional measurements on a set of synthetic aperture radar images and a set of visible images with unknown probability distributions. PMID:24818249
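
    The paper's change statistic is not given in the abstract; as a generic example of a statistic that is easily updated when additional mission images arrive, the sketch below accumulates per-pixel Gaussian log-likelihood ratios across passes (the Gaussian models and the variance-inflation assumption for changed pixels are illustrative choices, not the authors' framework).

```python
import numpy as np

class SequentialChangeStatistic:
    """Accumulate per-pixel evidence for change as new mission images arrive.

    A generic sketch: each mission image contributes a Gaussian log-likelihood
    ratio (changed pixels modelled with inflated variance) to a running sum.
    """

    def __init__(self, ref_mean, ref_var):
        self.ref_mean = np.asarray(ref_mean, dtype=float)   # per-pixel reference mean
        self.ref_var = np.asarray(ref_var, dtype=float)     # per-pixel reference variance
        self.statistic = np.zeros_like(self.ref_mean)
        self.n_images = 0

    def update(self, mission_img, change_var_scale=4.0):
        """Fold one new mission image into the running change statistic."""
        resid2 = (mission_img - self.ref_mean) ** 2
        llr = 0.5 * (resid2 / self.ref_var
                     - resid2 / (change_var_scale * self.ref_var)
                     - np.log(change_var_scale))
        self.statistic += llr
        self.n_images += 1
        return self.statistic
```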

  11. Automated detection of broadband clicks of freshwater fish using spectro-temporal features.

    PubMed

    Kottege, Navinda; Jurdak, Raja; Kroon, Frederieke; Jones, Dean

    2015-05-01

    Large scale networks of embedded wireless sensor nodes can passively capture sound for species detection. However, the acoustic recordings result in large amounts of data requiring in-network classification for such systems to be feasible. The current state of the art in the area of in-network bioacoustics classification targets narrowband or long-duration signals, which renders it unsuitable for detecting species that emit impulsive broadband signals. In this study, impulsive broadband signals were classified using a small set of spectral and temporal features to aid in their automatic detection and classification. A prototype system is presented along with an experimental evaluation of automated classification methods. The sound used was recorded from an invasive freshwater fish in Australia, the spotted tilapia (Tilapia mariae). Results show a high degree of accuracy after evaluating the proposed detection and classification method for T. mariae sounds and comparing its performance against the state of the art. Moreover, performance slightly improved when the original signal was down-sampled from 44.1 to 16 kHz. This indicates that the proposed method is well-suited for detection and classification on embedded devices, which can be deployed to implement a large scale wireless sensor network for automated species detection. PMID:25994683

  12. A nationwide web-based automated system for outbreak early detection and rapid response in China

    PubMed Central

    Lan, Yajia; Wang, Jinfeng; Ma, Jiaqi; Jin, Lianmei; Sun, Qiao; Lv, Wei; Lai, Shengjie; Liao, Yilan; Hu, Wenbiao

    2011-01-01

    Timely reporting, effective analyses and rapid distribution of surveillance data can assist in detecting the aberration of disease occurrence and further facilitate a timely response. In China, a new nationwide web-based automated system for outbreak detection and rapid response was developed in 2008. The China Infectious Disease Automated-alert and Response System (CIDARS) was developed by the Chinese Center for Disease Control and Prevention based on the surveillance data from the existing electronic National Notifiable Infectious Diseases Reporting Information System (NIDRIS) started in 2004. NIDRIS greatly improved the timeliness and completeness of data reporting with real-time reporting information via the Internet. CIDARS further facilitates the data analysis, aberration detection, signal dissemination, signal response and information communication needed by public health departments across the country. In CIDARS, three aberration detection methods are used to detect the unusual occurrence of 28 notifiable infectious diseases at the county level and transmit information either in real time or on a daily basis. The Internet, computers and mobile phones are used to accomplish rapid signal generation and dissemination, timely reporting and reviewing of the signal response results. CIDARS has been used nationwide since 2008; all Centers for Disease Control and Prevention (CDC) in China at the county, prefecture, provincial and national levels are involved in the system. It assists with early outbreak detection at the local level and prompts reporting of unusual disease occurrences or potential outbreaks to CDCs throughout the country. PMID:23908878
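
    The three CIDARS aberration detection methods are not detailed in this abstract; a common moving-window detector in the same spirit, flagging days whose counts exceed a baseline mean plus a multiple of its standard deviation, is sketched below for illustration only.

```python
import numpy as np

def aberration_days(daily_counts, baseline_days=7, k=2.0):
    """Return indices of days whose case count exceeds a moving-baseline threshold.

    daily_counts: sequence of daily case counts for one disease in one county.
    """
    counts = np.asarray(daily_counts, dtype=float)
    signals = []
    for day in range(baseline_days, len(counts)):
        baseline = counts[day - baseline_days:day]
        threshold = baseline.mean() + k * baseline.std(ddof=1)
        if counts[day] > threshold:
            signals.append(day)
    return signals
```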

  13. Optimal training dataset composition for SVM-based, age-independent, automated epileptic seizure detection.

    PubMed

    Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H

    2016-08-01

    Automated seizure detection is a valuable asset to health professionals, making adequate treatment possible in order to minimize brain damage. Most research focuses on two separate aspects of automated seizure detection: EEG feature computation and classification methods. Little research has been published regarding optimal training dataset composition for patient-independent seizure detection. This paper evaluates the performance of classifiers trained on different datasets in order to determine the optimal dataset for use in classifier training for automated, age-independent, seizure detection. Three datasets are used to train a support vector machine (SVM) classifier: (1) EEG from neonatal patients, (2) EEG from adult patients and (3) EEG from both neonates and adults. To correct for baseline EEG feature differences among patients, feature normalization is essential. Usually dedicated detection systems are developed for either neonatal or adult patients. Normalization might allow for the development of a single seizure detection system for patients irrespective of their age. Two classifier versions are trained on all three datasets: one with feature normalization and one without. This gives us six different classifiers to evaluate using both the neonatal and adult test sets. As a performance measure, the area under the receiver operating characteristics curve (AUC) is used. With application of FBC, performance values of 0.90 and 0.93 were obtained for neonatal and adult seizure detection, respectively. For neonatal seizure detection, the classifier trained on EEG from adult patients performed significantly worse compared to both the classifier trained on EEG data from neonatal patients and the classifier trained on both neonatal and adult EEG data. For adult seizure detection, optimal performance was achieved by either the classifier trained on adult EEG data or the classifier trained on both neonatal and adult EEG data. Our results show that age
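
    As an illustration of the normalized-versus-unnormalized comparison, the sketch below trains an SVM with and without a feature-scaling step and reports the AUC; the scaler stands in for the paper's feature normalization (which is applied per patient), and the features and kernel settings are assumptions.

```python
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_and_evaluate(X_train, y_train, X_test, y_test, normalize=True):
    """Train an SVM seizure classifier with or without feature scaling and report AUC."""
    steps = ([StandardScaler()] if normalize else []) + [SVC(probability=True)]
    clf = make_pipeline(*steps)
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]
    return roc_auc_score(y_test, scores)

# e.g. compare: train_and_evaluate(X_neo, y_neo, X_adult, y_adult, normalize=True)
#          vs.  train_and_evaluate(X_neo, y_neo, X_adult, y_adult, normalize=False)
```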

  14. Automated thematic mapping and change detection of ERTS-1 images

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator); Alpaugh, H.

    1972-01-01

    The author has identified the following significant results. An ERTS-1 image was compared to aircraft photography and maps of an area near Brownsville, Texas. In the coastal region of Cameron County, natural and cultural detail were identified in the ERTS-1 image. In Hidalgo County, ground truth was located on the ERTS-1 image. Haze and 50% cloud cover over Hidalgo County reduced the usefulness of multispectral techniques for recognizing crops.

  15. Automated Guided-Wave Scanning Developed to Characterize Materials and Detect Defects

    NASA Technical Reports Server (NTRS)

    Martin, Richard E.; Gyekenyeski, Andrew L.; Roth, Don J.

    2004-01-01

    The Nondestructive Evaluation (NDE) Group of the Optical Instrumentation Technology Branch at the NASA Glenn Research Center has developed a scanning system that uses guided waves to characterize materials and detect defects. The technique uses two ultrasonic transducers to interrogate the condition of a material. The sending transducer introduces an ultrasonic pulse at a point on the surface of the specimen, and the receiving transducer detects the signal after it has passed through the material. The aim of the method is to correlate certain parameters in both the time and frequency domains of the detected waveform to characteristics of the material between the two transducers. The scanning system is shown. The waveform parameters of interest include the attenuation due to internal damping, waveform shape parameters, and frequency shifts due to material changes. For the most part, guided waves are used to gauge the damage state and defect growth of materials subjected to various mechanical or environmental loads. The technique has been applied to polymer matrix composites, ceramic matrix composites, and metal matrix composites as well as metallic alloys. Historically, guided wave analysis has been a point-by-point, manual technique with waveforms collected at discrete locations and postprocessed. Data collection and analysis of this type limits the amount of detail that can be obtained. Also, the manual movement of the sensors is prone to user error and is time consuming. The development of an automated guided-wave scanning system has allowed the method to be applied to a wide variety of materials in a consistent, repeatable manner. Experimental studies have been conducted to determine the repeatability of the system as well as compare the results obtained using more traditional NDE methods. The following screen capture shows guided-wave scan results for a ceramic matrix composite plate, including images for each of nine calculated parameters. The system can

  16. Automated and miniaturized detection of biological threats with a centrifugal microfluidic system

    NASA Astrophysics Data System (ADS)

    Mark, D.; van Oordt, T.; Strohmeier, O.; Roth, G.; Drexler, J.; Eberhard, M.; Niedrig, M.; Patel, P.; Zgaga-Griesz, A.; Bessler, W.; Weidmann, M.; Hufert, F.; Zengerle, R.; von Stetten, F.

    2012-06-01

    The world's growing mobility, mass tourism, and the threat of terrorism increase the risk of the fast spread of infectious microorganisms and toxins. Today's procedures for pathogen detection involve complex stationary devices, and are often too time consuming for a rapid and effective response. Therefore a robust and mobile diagnostic system is required. We present a microstructured LabDisk which performs complex biochemical analyses together with a mobile centrifugal microfluidic device which processes the LabDisk. This portable system will allow fully automated and rapid detection of biological threats at the point-of-need.

  17. Automated, per pixel Cloud Detection from High-Resolution VNIR Data

    NASA Technical Reports Server (NTRS)

    Varlyguin, Dmitry L.

    2007-01-01

    CASA is a fully automated software program for the per-pixel detection of clouds and cloud shadows from medium- (e.g., Landsat, SPOT, AWiFS) and high- (e.g., IKONOS, QuickBird, OrbView) resolution imagery without the use of thermal data. CASA is an object-based feature extraction program which utilizes a complex combination of spectral, spatial, and contextual information available in the imagery and the hierarchical self-learning logic for accurate detection of clouds and their shadows.

  18. Airborne change detection system for the detection of route mines

    NASA Astrophysics Data System (ADS)

    Donzelli, Thomas P.; Jackson, Larry; Yeshnik, Mark; Petty, Thomas E.

    2003-09-01

    The US Army is interested in technologies that will enable it to maintain the free flow of traffic along routes such as Main Supply Routes (MSRs). Mines emplaced in the road by enemy forces under cover of darkness represent a major threat to maintaining a rapid Operational Tempo (OPTEMPO) along such routes. One technique that shows promise for detecting enemy mining activity is Airborne Change Detection, which allows an operator to detect suspicious day-to-day changes in and around the road that may be indicative of enemy mining. This paper presents an Airborne Change Detection system that is currently under development at the US Army Night Vision and Electronic Sensors Directorate (NVESD). The system has been tested using a longwave infrared (LWIR) sensor on a vertical take-off and landing unmanned aerial vehicle (VTOL UAV) and a midwave infrared (MWIR) sensor on a fixed-wing aircraft. The system is described and results of the various tests conducted to date are presented.

  19. Automated local bright feature image analysis of nuclear protein distribution identifies changes in tissue phenotype

    SciTech Connect

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was generated to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.

  20. THE IMPACT OF TECHNOLOGICAL CHANGE IN THE MEATPACKING INDUSTRY. AUTOMATION PROGRAM REPORT, NUMBER 1.

    ERIC Educational Resources Information Center

    DICK, WILLIAM G.

    TWENTY AUTOMATION MANPOWER SERVICES DEMONSTRATION PROJECTS WERE STARTED TO PROVIDE EXPERIENCE WITH JOB MARKET PROBLEMS CAUSED BY CHANGING TECHNOLOGY AND MASS LAYOFFS. THE FIRST OF THE SERIES, ESTABLISHED IN LOCAL PUBLIC EMPLOYMENT SERVICE OFFICES, THIS PROJECT DEALT WITH THE LAYOFF OF 675 WORKERS, PROBLEMS OF READJUSTMENT IN THE PLANT, THE…

  1. Change Detection via Morphological Comparative Filters

    NASA Astrophysics Data System (ADS)

    Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.; Vygolov, O. V.

    2016-06-01

    In this paper we propose a new change detection technique based on morphological comparative filtering. This technique generalizes the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed, and comparative filtering based on diffusion morphology is implemented as well. The change detection pipeline contains: comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
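
    The pipeline stages listed above (difference map, binarization, change-proposal extraction) can be illustrated with the minimal sketch below. It substitutes a plain absolute difference for the authors' morphological comparative filters and pyramid processing, so it shows only the generic structure of such a pipeline; the threshold and minimum-area parameters are arbitrary.

      # Minimal sketch of the pipeline stages named above (difference map, binarization,
      # change-proposal extraction); a plain absolute difference stands in for the
      # authors' morphological comparative filters, which are not reproduced here.
      import numpy as np
      from scipy import ndimage

      def change_proposals(img_a, img_b, diff_thresh=0.2, min_area=25):
          """Return labeled change regions between two co-registered grayscale images."""
          diff = np.abs(img_a.astype(float) - img_b.astype(float))      # difference map
          mask = diff > diff_thresh * diff.max()                        # binarization
          labels, n = ndimage.label(mask)                               # raw proposals
          # Drop tiny components; a real system would further score each proposal
          # (e.g., with a local correlation measure) before reporting it.
          areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          keep = np.isin(labels, np.nonzero(areas >= min_area)[0] + 1)
          return ndimage.label(keep)[0]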

  2. Real-time automated detection, tracking, classification, and geolocation of dismounts using EO and IR FMV

    NASA Astrophysics Data System (ADS)

    Muncaster, J.; Collins, G.; Waltman, J.

    2015-05-01

    The VideoPlus®-Aware (VPA) system enables autonomous video-based target detection, tracking and classification. The system stabilizes video and operates completely autonomously. A statistical background model enables robust acquisition of moving targets, while stopped targets are tracked using feature-based detectors. An ensemble classifier is trained for automated detection and classification of dismounts (i.e., humans), and a planar scene model is used to both improve system performance and reduce false positives. A formal evaluation of the VPA system was performed by the government to quantify the system's abilities to detect, track, and classify humans. The evaluation provided 811 separate data points gathered over a period of four days with an overall probability of sensing of 99.9%. The probability of detection was 86.2% and the percentage of correct action classification was 82%. The data provided a False Alarm Rate of 0 per hour and a Nuisance Alarm Rate of 0.72 per hour. Dismounts were reliably classified with pixel heights as low as 25 pixels. Real-time automated detection, tracking, and classification of targets with low false positive rates was achieved, even with few pixels on target. The planar scene model based optimizations were sufficient to dramatically reduce the runtime of sliding-window classifiers.
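
    The VPA background model itself is not described in detail, so the sketch below stands in OpenCV's Gaussian-mixture background subtractor (MOG2) to illustrate how a statistical background model yields per-frame foreground masks for moving-target acquisition; the history length and morphological clean-up are illustrative choices, not the system's parameters.

      # Hedged illustration only: OpenCV's MOG2 Gaussian-mixture subtractor stands in
      # for the proprietary VPA background model to show per-frame foreground masking.
      import cv2

      def moving_target_masks(video_path):
          subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
          cap = cv2.VideoCapture(video_path)
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              fg = subtractor.apply(frame)                  # per-pixel foreground mask
              fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,     # suppress isolated noise
                                    cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
              yield fg
          cap.release()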

  3. Automated Detection of Brain Abnormalities in Neonatal Hypoxia Ischemic Injury from MR Images

    PubMed Central

    Ghosh, Nirmalya; Sun, Yu; Bhanu, Bir; Ashwal, Stephen; Obenaus, Andre

    2014-01-01

    We compared the efficacy of three automated brain injury detection methods, namely symmetry-integrated region growing (SIRG), hierarchical region splitting (HRS) and modified watershed segmentation (MWS), in human and animal magnetic resonance imaging (MRI) datasets for the detection of hypoxic ischemic injuries (HII). Diffusion weighted imaging (DWI, 1.5T) data from neonatal arterial ischemic stroke (AIS) patients, as well as T2-weighted imaging (T2WI, 11.7T, 4.7T) at seven different time-points (1, 4, 7, 10, 17, 24 and 31 days post HII) in a rat-pup model of hypoxic ischemic injury, were used to check the temporal efficacy of our computational approaches. Sensitivity, specificity, and similarity were used as performance metrics based on manual (‘gold standard’) injury detection to quantify comparisons. When compared to the manual gold standard, automated injury localization by SIRG performed best in 62% of the data, compared with 29% for HRS and 9% for MWS. For injury severity detection, SIRG performed best in 67% of cases and HRS in 33%. Prior information is required by HRS and MWS, but not by SIRG. However, SIRG is sensitive to parameter-tuning, while HRS and MWS are not. Among these methods, SIRG performs the best in detecting lesion volumes; HRS is the most robust, while MWS lags behind in both respects. PMID:25000294

  4. Filament Chirality over an Entire Cycle Determined with an Automated Detection Module -- a Neat Surprise!

    NASA Astrophysics Data System (ADS)

    Martens, Petrus C.; Yeates, A. R.; Mackay, D.; Pillai, K. G.

    2013-07-01

    Using metadata produced by automated solar feature detection modules developed for SDO (Martens et al. 2012), we have discovered some trends in filament chirality and filament-sigmoid relations that are new and in part contradict the current consensus. Automated detection of solar features has the advantage over manual detection of having the detection criteria applied consistently, and in being able to deal with enormous amounts of data, like the 1 Terabyte per day that SDO produces. Here we use the filament detection module developed by Bernasconi, which has metadata from 2000 onward, and the sigmoid sniffer, which has been producing metadata from AIA 94 A images since October 2011. The most interesting result we find is that the hemispheric chirality preference for filaments (dextral in the north, and vice versa), studied in detail for a three-year period by Pevtsov et al. (2003), seems to disappear during parts of the decline of cycle 23 and during the extended solar minimum that followed. Moreover, the hemispheric chirality rule seems to be much less pronounced during the onset of cycle 24. For sigmoids we find the expected correlation between chirality and handedness (S or Z shape), but it is not as strong as expected.

  5. Change Detection Experiments Using Low Cost UAVs

    NASA Technical Reports Server (NTRS)

    Logan, Michael J.; Vranas, Thomas L.; Motter, Mark; Hines, Glenn D.; Rahman, Zia-ur

    2005-01-01

    This paper presents the progress in the development of a low-cost change-detection system. This system is being developed to provide users with the ability to use a low-cost unmanned aerial vehicle (UAV) and image processing system that can detect changes in specific fixed ground locations using video provided by an autonomous UAV. The results of field experiments conducted with the US Army at Ft. A.P. Hill are presented.

  6. Change Point Detection in Correlation Networks

    PubMed Central

    Barnett, Ian; Onnela, Jukka-Pekka

    2016-01-01

    Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data. PMID:26739105
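
    As a simplified illustration of change point detection in correlation networks (not the authors' minimal-assumption statistic), the sketch below estimates correlation matrices in adjacent sliding windows and scores each time index by how much the correlation structure differs across the boundary; the window length is an arbitrary parameter.

      # Simple illustration, not the authors' test statistic: estimate correlation
      # matrices in sliding windows and flag the time index where adjacent windows'
      # correlation structure differs most (Frobenius norm of the difference).
      import numpy as np

      def correlation_change_scores(X, window=50):
          """X: (time, nodes) array. Returns a score per candidate change point."""
          T = X.shape[0]
          scores = np.full(T, np.nan)
          for t in range(window, T - window):
              c_left = np.corrcoef(X[t - window:t].T)
              c_right = np.corrcoef(X[t:t + window].T)
              scores[t] = np.linalg.norm(c_left - c_right, ord="fro")
          return scores  # argmax of scores is the candidate change point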

  7. Automated Detection of Benzodiazepine Dosage in ICU Patients through a Computational Analysis of Electrocardiographic Data

    PubMed Central

    Spadafore, Maxwell T.; Syed, Zeeshan; Rubinfeld, Ilan S.

    2015-01-01

    To enable automated maintenance of patient sedation in an intensive care unit (ICU) setting, more robust, quantitative metrics of sedation depth must be developed. In this study, we demonstrated the feasibility of a fully computational system that leverages low-quality electrocardiography (ECG) from a single lead to detect the presence of benzodiazepine sedatives in a subject’s system. Starting with features commonly examined manually by cardiologists searching for evidence of poisonings, we generalized the extraction of these features to a fully automated process. We tested the predictive power of these features using nine subjects from an intensive care clinical database. Features were found to be significantly indicative of a binary relationship between dose and ECG morphology, but we were unable to find evidence of a predictable continuous relationship. Fitting this binary relationship to a classifier, we achieved a sensitivity of 89% and a specificity of 95%. PMID:26958308

  8. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, the reality is dynamic and changes occur every day; therefore, users expect that the existing database will portray the current reality. Global mapping projects that are based on community volunteers, such as OSM, update their database every day based on crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying the objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, the development of technologies in mapping, advancement in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, allow the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step for updating the NTDB at the Survey of Israel.

  9. Development of an Automated DNA Detection System Using an Electrochemical DNA Chip Technology

    NASA Astrophysics Data System (ADS)

    Hongo, Sadato; Okada, Jun; Hashimoto, Koji; Tsuji, Koichi; Nikaido, Masaru; Gemma, Nobuhiro

    A new compact automated DNA detection system, Genelyzer™, has been developed. After injecting a sample solution into a cassette with a built-in electrochemical DNA chip, all processes from hybridization reaction to detection and analysis are operated fully automatically. In order to detect a sample DNA, electrical currents from electrodes due to an oxidation reaction of electrochemically active intercalator molecules bound to hybridized DNAs are detected. The intercalator is supplied as a reagent solution by a fluid supply unit of the system. The feasibility test proved that simultaneous typing of six single nucleotide polymorphisms (SNPs) associated with rheumatoid arthritis (RA) was carried out within two hours and that all the results were consistent with those of conventional typing methods. It is expected that this system opens a new way to DNA testing applications such as tests for infectious diseases, personalized medicine, food inspection, forensics, and others.

  10. A system for automated outbreak detection of communicable diseases in Germany.

    PubMed

    Salmon, Maëlle; Schumacher, Dirk; Burmann, Hendrik; Frank, Christina; Claus, Hermann; Höhle, Michael

    2016-03-31

    We describe the design and implementation of a novel automated outbreak detection system in Germany that monitors the routinely collected surveillance data for communicable diseases. Detecting unusually high case counts as early as possible is crucial as an accumulation may indicate an ongoing outbreak. The detection in our system is based on state-of-the-art statistical procedures conducting the necessary data mining task. In addition, we have developed effective methods to improve the presentation of the results of such algorithms to epidemiologists and other system users. The objective was to effectively integrate automatic outbreak detection into the epidemiological workflow of a public health institution. Since 2013, the system has been in routine use at the German Robert Koch Institute. PMID:27063588

  11. Automated sinkhole detection using a DEM subsetting technique and fill tools at Mammoth Cave National Park

    NASA Astrophysics Data System (ADS)

    Wall, J.; Bohnenstiehl, D. R.; Levine, N. S.

    2013-12-01

    An automated workflow for sinkhole detection is developed using Light Detection and Ranging (Lidar) data from Mammoth Cave National Park (MACA). While the park is known to sit within a karst formation, the generally dense canopy cover and the size of the park (~53,000 acres) create issues for sinkhole inventorying. Lidar provides a useful remote sensing technology for peering beneath the canopy in hard-to-reach areas of the park. In order to detect sinkholes, a subsetting technique is used to interpolate a Digital Elevation Model (DEM), thereby reducing edge effects. For each subset, standard GIS fill tools are used to fill depressions within the DEM. The initial DEM is then subtracted from the filled DEM, resulting in detected depressions or sinkholes. The resulting depressions are then described in terms of size and geospatial trend.
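
    The fill-and-subtract step can be sketched as below, assuming a DEM has already been interpolated from the lidar ground returns. Grayscale morphological reconstruction stands in for the GIS fill tool, and the minimum-depth threshold is an illustrative parameter rather than a value from the study.

      # Sketch of the fill-and-subtract step (assumes a DEM already interpolated from
      # the lidar returns); grayscale morphological reconstruction plays the role of
      # the GIS "fill" tool, and the difference image marks candidate sinkholes.
      import numpy as np
      from skimage.morphology import reconstruction

      def detect_depressions(dem, min_depth=0.5):
          seed = np.copy(dem)
          seed[1:-1, 1:-1] = dem.max()                   # seed from the DEM border
          filled = reconstruction(seed, dem, method='erosion')
          depth = filled - dem                           # depression depth per cell
          return depth > min_depth                       # boolean sinkhole mask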

  12. A rich Internet application for automated detection of road blockage in post-disaster scenarios

    NASA Astrophysics Data System (ADS)

    Liu, W.; Dong, P.; Liu, S.; Liu, J.

    2014-02-01

    This paper presents the development of a rich Internet application for automated detection of road blockage in post-disaster scenarios using volunteered geographic information from OpenStreetMap street centerlines and airborne light detection and ranging (LiDAR) data. The architecture of the application on the client-side and server-side was described. The major functionality of the application includes shapefile uploading, Web editing for spatial features, road blockage detection, and blockage points downloading. An example from the 2010 Haiti earthquake was included to demonstrate the effectiveness of the application. The results suggest that the prototype application can effectively detect (1) road blockage caused by earthquakes, and (2) some human errors caused by contributors of volunteered geographic information.

  13. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

    Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, that was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
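
    The sketch below illustrates the normalized gradient fields idea in its common textbook form (not the paper's exact detector): image gradients are normalized so that the similarity measure responds to edge orientation rather than absolute intensity, which is what makes it insensitive to intensity variations and bias fields. The epsilon regularizer and the single-alignment evaluation are simplifications.

      # Sketch of the normalized-gradient-field idea (a standard formulation, not the
      # paper's exact detector): gradients are normalized so the measure depends on
      # edge orientation rather than intensity, making it robust to bias fields.
      import numpy as np

      def normalized_gradient_field(img, eps=1e-2):
          grads = np.gradient(img.astype(float))                   # one array per axis
          norm = np.sqrt(sum(g ** 2 for g in grads) + eps ** 2)
          return [g / norm for g in grads]

      def ngf_similarity(img, template):
          """Squared dot product of the two normalized gradient fields, summed."""
          n_img = normalized_gradient_field(img)
          n_tpl = normalized_gradient_field(template)
          dot = sum(a * b for a, b in zip(n_img, n_tpl))
          return float(np.sum(dot ** 2))   # evaluated at one alignment; a detector
                                           # would scan this score over candidate offsets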

  14. Frequency doubling technology and standard automated perimetry in detection of glaucoma among glaucoma suspects

    PubMed Central

    Patyal, Sagarika; Kotwal, Atul; Banarji, Ajay; Gurunadh, V.S.

    2014-01-01

    Background: Abnormalities on frequency doubling technology perimetry (FDT) have been found to precede visual field loss detected by standard automated perimetry (SAP) by as much as four years, and the initial development of glaucomatous visual field loss as measured by SAP has been found to occur in regions that had previously demonstrated abnormalities on FDT testing. Methods: A study of 55 glaucoma suspects (determined as per the American Academy Guidelines, Preferred Practice Pattern, Oct 2010) was compared to 50 healthy participants (HP). Both glaucoma suspects and HP underwent SAP and FDT in random order. Only reliable fields were compared. Results: The mean deviation of FDT Matrix was significantly lower than that of SAP SITA in both the suspect and healthy groups; the two devices showed significant correlation in both groups (suspects p = 0.002, healthy p = 0.011). A significant difference was found between the PSD of SAP SITA and FDT Matrix (p = 0.001) in the glaucoma suspect group, and the PSD of FDT Matrix was significantly higher than the PSD of SAP SITA in the healthy group (p < 0.001). The PSD of SAP SITA correlated significantly with the FDT Matrix PSD in the glaucoma suspect group (r = 0.579; p = 0.001), but no significant correlation was found in the healthy group (r = 0.153; p = 0.290). The percentage of normal test locations was significantly higher in FDT Matrix compared to SAP SITA in both glaucoma suspects and healthy participants. Conclusion: FDT correlates well with SAP and may be used for patients who are unable to perform well and reliably with SAP, but it did not show any features of earlier glaucomatous changes in this study. PMID:25382906

  15. Pitfalls in creation of left atrial pressure-area relationships with automated border detection.

    PubMed

    Keren, A; DeAnda, A; Komeda, M; Tye, T; Handen, C R; Daughters, G T; Ingels, N B; Miller, C; Popp, R L; Nikolic, S D

    1995-01-01

    Creation of pressure-area relationships (loops) with automated border detection (ABD) involves correction for the variable inherent delay in the ABD signal relative to the pressure recording. This article summarizes (1) the results of in vitro experiments performed to define the range of, and factors that might influence, the ABD delay; (2) the difficulties encountered in evaluating a thin-walled structure like the left atrium in the dog model; and (3) the solutions to some of the difficulties found. The in vitro experiments showed that the ABD delay relative to high-fidelity pressure recordings ranges from 20 to 34 msec and 35 to 57 msec at echocardiographic frame rates of 60/sec and 33/sec, respectively. The delay was not influenced significantly by the type of transducer used, distance from the target area, or size of the target area. The delay in the ABD signal, relative to the echocardiographic image, ranges from nil to less than one frame duration, whereas it is delayed one to two frame durations relative to the electrocardiogram processed by the imaging system. In the dog model, inclusion of even small areas outside the left atrium rendered curves with apparent physiologic contour but inappropriately long delays of 90 to 130 msec. To exclude areas outside the left atrial cavity, time-gain compensation and lateral gain compensation were used much more extensively than during left ventricular ABD recording. By changing the type of sonomicrometers used in our experiments, we were able to record ABD and ultrasonic crystal data simultaneously. However, both spontaneous contrast originating from a right-sided heart bypass pump and electronic noise from the electrocautery severely interfered with ABD recording. PMID:9417210

  16. Automated Detection of Selective Logging in Amazon Forests Using Airborne Lidar Data and Pattern Recognition Algorithms

    NASA Astrophysics Data System (ADS)

    Keller, M. M.; d'Oliveira, M. N.; Takemura, C. M.; Vitoria, D.; Araujo, L. S.; Morton, D. C.

    2012-12-01

    Selective logging, the removal of several valuable timber trees per hectare, is an important land use in the Brazilian Amazon and may degrade forests through long-term changes in structure, loss of forest carbon and species diversity. Similar to deforestation, the annual area affected by selective logging has declined significantly in the past decade. Nonetheless, this land use affects several thousand km2 per year in Brazil. We studied a 1000 ha area of the Antimary State Forest (FEA) in the State of Acre, Brazil (9.304°S, 68.281°W) that has a basal area of 22.5 m2 ha-1 and an above-ground biomass of 231 Mg ha-1. Logging intensity was low, approximately 10 to 15 m3 ha-1. We collected small-footprint airborne lidar data using an Optech ALTM 3100EA over the study area once each in 2010 and 2011. The study area contained both recent and older logging that used both conventional and technologically advanced logging techniques. Lidar return density averaged over 20 returns per square meter for both collection periods, with estimated horizontal and vertical precision of 0.30 and 0.15 m. A relative density model comparing returns from 0 to 1 m elevation to returns in the 1-5 m elevation range revealed the pattern of roads and skid trails. These patterns were confirmed by a ground-based GPS survey. A GIS model of the road and skid network was built using lidar and ground data. We tested and compared two pattern recognition approaches used to automate logging detection. Both commercial eCognition segmentation and a Frangi filter algorithm identified the road and skid trail network when compared against the GIS model. We report on the effectiveness of these two techniques.
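
    A relative density model of the kind described above can be sketched as follows, assuming the point cloud has been reduced to per-return coordinates and heights above ground; the grid size and the exact ratio used (low returns over all near-ground returns) are assumptions for the example, not the authors' parameters.

      # Illustrative computation of a relative density raster from a lidar point cloud:
      # returns 0-1 m above ground versus returns 1-5 m above ground, gridded into cells.
      import numpy as np

      def relative_density_raster(x, y, hag, cell=2.0):
          """x, y: return coordinates (m); hag: height above ground per return (m)."""
          xi = ((x - x.min()) / cell).astype(int)
          yi = ((y - y.min()) / cell).astype(int)
          shape = (yi.max() + 1, xi.max() + 1)
          low_mask = (hag >= 0.0) & (hag < 1.0)     # ground-level returns
          mid_mask = (hag >= 1.0) & (hag < 5.0)     # understory returns
          low = np.zeros(shape)
          mid = np.zeros(shape)
          np.add.at(low, (yi[low_mask], xi[low_mask]), 1)
          np.add.at(mid, (yi[mid_mask], xi[mid_mask]), 1)
          return low / np.maximum(low + mid, 1)     # high values flag bare ground
                                                    # (roads, skid trails)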

  17. Sensor for detecting changes in magnetic fields

    DOEpatents

    Praeg, Walter F.

    1981-01-01

    A sensor for detecting changes in the magnetic field of the equilibrium-field coil of a Tokamak plasma device comprises a pair of bifilar wires disposed circumferentially, one inside and one outside the equilibrium-field coil. Each is shorted at one end. The difference between the voltages detected at the other ends of the bifilar wires provides a measure of changing flux in the equilibrium-field coil. This difference can be used to detect faults in the coil in time to take action to protect the coil.

  18. Sensor for detecting changes in magnetic fields

    DOEpatents

    Praeg, W.F.

    1980-02-26

    A sensor is described for detecting changes in the magnetic field of the equilibrium-field coil of a Tokamak plasma device that comprises a pair of bifilar wires disposed circumferentially, one inside and one outside the equilibrium-field coil. Each is shorted at one end. The difference between the voltages detected at the other ends of the bifilar wires provides a measure of changing flux in the equilibrium-field coil. This difference can be used to detect faults in the coil in time to take action to protect the coil.

  19. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts

    SciTech Connect

    Drukker, Karen Sennett, Charlene A.; Giger, Maryellen L.

    2014-01-15

    Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, “views,” acquired with an automated U-Systems Somo•V® ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of “marks” (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists’ performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.

  20. Automated detection of presence of mucus foci in airway diseases: preliminary results

    NASA Astrophysics Data System (ADS)

    Odry, Benjamin L.; Kiraly, Atilla P.; Novak, Carol L.; Naidich, David P.; Ko, Jane; Godoy, Myrna C. B.

    2009-02-01

    Chronic Obstructive Pulmonary Disease (COPD) is often characterized by partial or complete obstruction of airflow in the lungs. This can be due to airway wall thickening and retained secretions, resulting in foci of mucoid impactions. Although radiologists have proposed scoring systems to assess extent and severity of airway diseases from CT images, these scores are seldom used clinically due to impracticality. The high level of subjectivity from visual inspection and the sheer number of airways in the lungs mean that automation is critical in order to realize accurate scoring. In this work we assess the feasibility of including an automated mucus detection method in a clinical scoring system. Twenty high-resolution datasets of patients with mild to severe bronchiectasis were randomly selected and used to test the ability of the computer to detect the presence or absence of mucus in each lobe (100 lobes in all). Two experienced radiologists independently scored the presence or absence of mucus in each lobe based on the visual assessment method recommended by Sheehan et al. [1]. These results were compared with an automated method developed for mucus plug detection [2]. Results showed agreement between the two readers on 44% of the lobes for presence of mucus, 39% of lobes for absence of mucus, and discordant opinions on 17 lobes. For the 61 lobes where one or both readers detected mucus, the computer sensitivity was 75.4%, the specificity was 69.2%, and the positive predictive value (PPV) was 79.3%. Six computer false positives were reviewed a posteriori by the experts and reassessed as true positives, yielding 77.6% sensitivity, 81.8% specificity, and 89.6% PPV.

  1. Automated Cell Detection and Morphometry on Growth Plate Images of Mouse Bone

    PubMed Central

    Ascenzi, Maria-Grazia; Du, Xia; Harding, James I; Beylerian, Emily N; de Silva, Brian M; Gross, Ben J; Kastein, Hannah K; Wang, Weiguang; Lyons, Karen M; Schaeffer, Hayden

    2014-01-01

    Microscopy imaging of mouse growth plates is extensively used in biology to understand the effect of specific molecules on various stages of normal bone development and on bone disease. Until now, such image analysis has been conducted by manual detection. In fact, when existing automated detection techniques were applied, morphological variations across the growth plate and heterogeneity of image background color, including the faint presence of cells (chondrocytes) located deeper in tissue away from the image’s plane of focus, and lack of cell-specific features, interfered with the identification of cells. We propose the first method of automated detection and morphometry applicable to images of cells in the growth plate of long bone. Through ad hoc sequential application of the Retinex method, anisotropic diffusion and thresholding, our new cell detection algorithm (CDA) addresses these challenges on bright-field microscopy images of mouse growth plates. Five parameters, chosen by the user according to image characteristics, regulate our CDA. Our results demonstrate the effectiveness of the proposed numerical method relative to manual methods. Our CDA confirms previously established results regarding chondrocytes’ number, area, orientation, height and shape of normal growth plates. Our CDA also confirms differences previously found between the genetically mutated mouse Smad1/5CKO and its control mouse on fluorescence images. The CDA aims to aid biomedical research by increasing efficiency and consistency of data collection regarding arrangement and characteristics of chondrocytes. Our results suggest that automated extraction of data from microscopy imaging of growth plates can assist in unlocking information on normal and pathological development, key to the underlying biological mechanisms of bone growth. PMID:25525552

  2. Fully Automated Centrifugal Microfluidic Device for Ultrasensitive Protein Detection from Whole Blood.

    PubMed

    Park, Yang-Seok; Sunkara, Vijaya; Kim, Yubin; Lee, Won Seok; Han, Ja-Ryoung; Cho, Yoon-Kyoung

    2016-01-01

    Enzyme-linked immunosorbent assay (ELISA) is a promising method to detect small amounts of proteins in biological samples. Devices that provide a platform for reduced sample volume and assay time, as well as full automation, are required for potential use in point-of-care diagnostics. Recently, we have demonstrated ultrasensitive detection of serum proteins, C-reactive protein (CRP) and cardiac troponin I (cTnI), utilizing a lab-on-a-disc composed of TiO2 nanofibrous (NF) mats. It showed a large dynamic range with femtomolar (fM) detection sensitivity, from a small volume of whole blood in 30 min. The device consists of several components for blood separation, metering, mixing, and washing that are automated for improved sensitivity from low sample volumes. Here, in the video demonstration, we show the experimental protocols and know-how for the fabrication of the NFs as well as the disc, their integration and the operation in the following order: processes for preparing the TiO2 NF mat; transfer-printing of the TiO2 NF mat onto the disc; surface modification for immunoreactions; disc assembly and operation; on-disc detection and representative results for immunoassay. Use of this device enables multiplexed analysis with minimal consumption of samples and reagents. Given the advantages, the device should find use in a wide variety of applications, and prove beneficial in facilitating the analysis of low-abundance proteins. PMID:27167836

  3. Lidar Change Detection Using Building Models

    NASA Astrophysics Data System (ADS)

    Kim, Angela M.; Runyon, Scott C.; Jalobeanu, Andre; Esterline, Chelsea H.; Kruse, Fred A.

    2014-06-01

    Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real-world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and RIEGL. The second model, at 1:8 scale, consisted of large cardboard boxes placed outdoors and scanned from rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of the effects of changes in scanner and scanning geometry, and of the performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
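
    The model-to-model scheme reduces to differencing co-registered DSMs and thresholding the height change, as in the minimal sketch below; the threshold is a placeholder, not a value from the experiments.

      # Minimal sketch of the model-to-model (DSM differencing) scheme described above:
      # subtract co-registered baseline and change DSMs and threshold the height change.
      import numpy as np

      def dsm_change_mask(dsm_baseline, dsm_new, height_thresh=0.5):
          """Positive differences: construction/raising; negative: collapse/subsidence."""
          dh = dsm_new.astype(float) - dsm_baseline.astype(float)
          return dh > height_thresh, dh < -height_thresh   # (gain mask, loss mask)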

  4. The impact of misregistration on change detection

    NASA Technical Reports Server (NTRS)

    Townshend, John R. G.; Justice, Christopher O.; Gurney, Charlotte; Mcmanus, James

    1992-01-01

    The impact of image misregistration on the detection of changes in land cover was studied using spatially degraded Landsat MSS images. Emphasis is placed on simulated images of the Normalized Difference Vegetation Index (NDVI) at spatial resolutions of 250 and 500 m. It is pointed out that high registration accuracy needs to be achieved. The evidence from simulations suggests that misregistrations can have a marked effect on the ability of remotely sensed data to detect changes in land cover. Even subpixel misregistrations can have a major impact, and the most marked proportional changes will tend to occur at the finest misregistrations.
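
    A toy version of the effect described above can be reproduced by shifting an otherwise unchanged NDVI image by a subpixel amount and counting how many pixels then appear to have changed; the shift and tolerance values below are arbitrary, chosen only to illustrate the mechanism.

      # Toy illustration of misregistration-induced false change: a subpixel shift is
      # applied to an unchanged NDVI image and the fraction of pixels whose apparent
      # NDVI change exceeds a tolerance is reported.
      import numpy as np
      from scipy import ndimage

      def false_change_fraction(ndvi, shift_pixels=0.5, tolerance=0.05):
          shifted = ndimage.shift(ndvi, shift=(shift_pixels, 0), mode='nearest')
          apparent_change = np.abs(shifted - ndvi)
          return float(np.mean(apparent_change > tolerance))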

  5. Line matching for automatic change detection algorithm

    NASA Astrophysics Data System (ADS)

    Dhollande, Jérôme; Monnin, David; Gond, Laetitia; Cudel, Christophe; Kohler, Sophie; Dieterlen, Alain

    2012-06-01

    During foreign operations, Improvised Explosive Devices (IEDs) are one of the major threats that soldiers may unfortunately encounter along itineraries. Based on a vehicle-mounted camera, we propose an original image-comparison approach to detect significant changes on these roads. Classic 2D image registration techniques do not take parallax phenomena into account; as a consequence, misregistration errors could be detected as changes. Following stereovision principles, our automatic method compares intensity profiles along corresponding epipolar lines by extrema matching. An adaptive space warping compensates for scale differences in the 3D scene. When the signals are matched, the signal difference highlights changes, which are marked in the current video.

  6. Consistency in multi-modal automated target detection using temporally filtered reporting

    NASA Astrophysics Data System (ADS)

    Breckon, Toby P.; Han, Ji W.; Richardson, Julia

    2012-09-01

    Autonomous target detection is an important goal in the wide-scale deployment of unattended sensor networks. Current approaches are often sample-centric, with an emphasis on achieving maximal detection on any given isolated target signature received. This can often lead to both high false alarm rates and the frequent re-reporting of detected targets, given the required trade-off between detection sensitivity and false positive target detection. Here, by assuming that the number of samples on a true target will be both high and temporally consistent, we can treat our given detection approach as an ensemble classifier distributed over time, with classification from each sample, at each time-step, contributing to an overall detection threshold. Following this approach, we develop a mechanism whereby the temporal consistency of a given target must be statistically strong, over a given temporal window, for an onward detection to be reported. If the sensor sample frequency and throughput are high relative to target motion through the field of view (e.g. a 25 fps camera), then we can validly set such a temporal window to a value above the occurrence level of spurious false positive detections. This approach is illustrated using the example of automated real-time vehicle and people detection, in multi-modal visible (EO) and thermal (IR) imagery, deployed on an unattended dual-sensor pod. A sensitive target detection approach, based on a codebook mapping of visual features, classifies target regions initially extracted from the scene using an adaptive background model. The use of temporal filtering provides a consistent, fused onward information feed of targets detected from either or both sensors, whilst minimizing the onward transmission of false positive detections and facilitating the use of otherwise sensitive detection approaches within the robust target reporting context of a deployed sensor network.
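
    The temporal-consistency mechanism can be sketched as a sliding-window vote over per-sample classifications, as below; the window length and vote threshold are illustrative parameters, not values from the paper.

      # Sketch of the temporal-consistency idea described above: per-sample detections
      # vote into a sliding window, and a target is only reported onward when the
      # vote count over that window exceeds a threshold.
      from collections import deque

      class TemporalConsistencyFilter:
          def __init__(self, window=25, min_votes=15):
              self.history = deque(maxlen=window)
              self.min_votes = min_votes

          def update(self, detected_this_frame: bool) -> bool:
              """Feed one per-frame classification; returns True when a report is due."""
              self.history.append(1 if detected_this_frame else 0)
              return sum(self.history) >= self.min_votes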

  7. Automated Detection of Toxigenic Clostridium difficile in Clinical Samples: Isothermal tcdB Amplification Coupled to Array-Based Detection

    PubMed Central

    Pasko, Chris; Groves, Benjamin; Ager, Edward; Corpuz, Maylene; Frech, Georges; Munns, Denton; Smith, Wendy; Warcup, Ashley; Denys, Gerald; Ledeboer, Nathan A.; Lindsey, Wes; Owen, Charles; Rea, Larry; Jenison, Robert

    2012-01-01

    Clostridium difficile can carry a genetically variable pathogenicity locus (PaLoc), which encodes clostridial toxins A and B. In hospitals and in the community at large, this organism is increasingly identified as a pathogen. To develop a diagnostic test that combines the strengths of immunoassays (cost) and DNA amplification assays (sensitivity/specificity), we targeted a genetically stable PaLoc region, amplifying tcdB sequences and detecting them by hybridization capture. The assay employs a hot-start isothermal method coupled to a multiplexed chip-based readout, creating a manual assay that detects toxigenic C. difficile with high sensitivity and specificity within 1 h. Assay automation on an electromechanical instrument produced an analytical sensitivity of 10 CFU (95% probability of detection) of C. difficile in fecal samples, along with discrimination against other enteric bacteria. To verify automated assay function, 130 patient samples were tested: 31/32 positive samples (97% sensitive; 95% confidence interval [CI], 82 to 99%) and 98/98 negative samples (100% specific; 95% CI, 95 to 100%) were scored correctly. Large-scale clinical studies are now planned to determine clinical sensitivity and specificity. PMID:22675134

  8. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326

  9. Priming effects under correct change detection and change blindness.

    PubMed

    Caudek, Corrado; Domini, Fulvio

    2013-03-01

    In three experiments, we investigated the priming effects induced by an image change on a successive animate/inanimate decision task. We studied both perceptual (Experiments 1 and 2) and conceptual (Experiment 3) priming effects, under correct change detection and change blindness (CB). Under correct change detection, we found larger positive priming effects on congruent trials for probes representing animate entities than for probes representing artifactual objects. Under CB, we found performance impairment relative to a "no-change" baseline condition. This inhibition effect induced by CB was modulated by the semantic congruency between the changed item and the probe in the case of probe images, but not for probe words. We discuss our results in the context of the literature on the negative priming effect. PMID:22964454

  10. Foreign object detection and removal to improve automated analysis of chest radiographs

    SciTech Connect

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime; Maduskar, Pragnya; Ginneken, Bram van; Story, Alistair; Hayward, Andrew

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel to belong to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields, and an Az value of 0.949 is achieved. Free response operator characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to that of images with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
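
    The detection stage can be sketched as supervised kNN pixel classification followed by grouping of high-probability pixels, as below; the feature vectors, k, and probability threshold are placeholders, and the texture-inpainting removal step is omitted.

      # Hedged sketch of the detection stage only (supervised kNN pixel classification
      # followed by grouping of high-probability pixels into candidate objects).
      import numpy as np
      from scipy import ndimage
      from sklearn.neighbors import KNeighborsClassifier

      def detect_foreign_objects(train_features, train_labels, test_features, shape,
                                 prob_thresh=0.5, k=15):
          """train/test_features: (n_pixels, n_features); labels: 1 = foreign object."""
          knn = KNeighborsClassifier(n_neighbors=k).fit(train_features, train_labels)
          prob = knn.predict_proba(test_features)[:, 1].reshape(shape)
          mask = prob > prob_thresh
          labeled, n_objects = ndimage.label(mask)       # group pixels into objects
          return labeled, n_objects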

  11. CCTV as an automated sensor for firearms detection: human-derived performance as a precursor to automatic recognition

    NASA Astrophysics Data System (ADS)

    Darker, Iain T.; Gale, Alastair G.; Blechko, Anastassia

    2008-10-01

    CCTV operators are able to detect firearms, via CCTV, but their capacity for surveillance is limited. Thus, it is desirable to automate the monitoring of CCTV cameras for firearms using machine vision techniques. The abilities of CCTV operators to detect concealed and unconcealed firearms in CCTV footage were quantified within a signal detection framework. Additionally, the visual search strategies adopted by the CCTV operators were elicited and their efficacies indexed with respect to signal detection performance, separately for concealed and unconcealed firearms. Future work will automate effective, human visual search strategies using image processing algorithms.

  12. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    NASA Astrophysics Data System (ADS)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90+/-9% sensitivity and 85+/-9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a “generalization” of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.

  13. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    NASA Astrophysics Data System (ADS)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because this system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we will present an automated mMAPS including an integrated microfluidic device, an automated stage and an electrical relay for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application to analyze tissue samples in a clinical setting.

  14. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    SciTech Connect

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.

  15. Sleep spindle detection: crowdsourcing and evaluating performance of experts, non-experts, and automated methods

    PubMed Central

    Warby, Simon C.; Wendt, Sabrina L.; Welinder, Peter; Munk, Emil G.S.; Carrillo, Oscar; Sorensen, Helge B.D.; Jennum, Poul; Peppard, Paul E.; Perona, Pietro; Mignot, Emmanuel

    2014-01-01

    Sleep spindles are discrete, intermittent patterns of brain activity that arise as a result of interactions of several circuits in the brain. Increasingly, these oscillations are of biological and clinical interest because of their role in development, learning, and neurological disorders. We used an internet interface to ‘crowdsource’ spindle identification from human experts and non-experts, and compared performance with 6 automated detection algorithms in middle-to-older aged subjects from the general population. We also developed a method for forming group consensus, and refined methods of evaluating the performance of event detectors in physiological data such as polysomnography. Compared to the gold standard, the highest performance was by individual experts and the non-expert group consensus, followed by automated spindle detectors. Crowdsourcing the scoring of sleep data is an efficient method to collect large datasets, even for difficult tasks such as spindle identification. Further refinements to automated sleep spindle algorithms are needed for middle-to-older aged subjects. PMID:24562424

  16. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867

  17. Microbleed Detection Using Automated Segmentation (MIDAS): A New Method Applicable to Standard Clinical MR Images

    PubMed Central

    Seghier, Mohamed L.; Kolanko, Magdalena A.; Leff, Alexander P.; Jäger, Hans R.; Gregoire, Simone M.; Werring, David J.

    2011-01-01

    Background Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Methodology/Principal Findings Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an “extra” tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions: (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. Conclusions/Significance MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds. PMID:21448456

  18. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

    A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, verification of the features is made using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator feature judgement.

  19. NOVELTY DETECTION UNDER CHANGING ENVIRONMENTAL CONDITIONS

    SciTech Connect

    H. SOHN; K. WORDER; C. R. FARRAR

    2001-04-01

    The primary objective of novelty detection is to examine a system's dynamic response to determine if the system significantly deviates from an initial baseline condition. In reality, the system is often subject to changing environmental and operational conditions that affect its dynamic characteristics. Such variations include changes in loading, boundary conditions, temperature, and moisture. Most damage diagnosis techniques, however, generally neglect the effects of these changing ambient conditions. Here, a novelty detection technique is developed that explicitly takes into account these natural variations of the system in order to minimize false positive indications of true system changes. Auto-associative neural networks are employed to discriminate system changes of interest, such as structural deterioration and damage, from the natural variations of the system.
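
    The auto-associative idea can be sketched as below: a network with a bottleneck is trained to reproduce baseline feature vectors that span the normal environmental variation, and the reconstruction error of new measurements serves as the novelty score. An off-the-shelf MLP regressor stands in for the paper's auto-associative neural network, and the layer sizes are arbitrary.

      # Sketch of auto-associative novelty detection: train a bottleneck network to
      # reproduce baseline features; large reconstruction error on new data flags
      # possible damage rather than normal environmental variation.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def fit_autoassociator(baseline_features):
          model = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=5000)
          model.fit(baseline_features, baseline_features)     # input == target
          return model

      def novelty_score(model, features):
          recon = model.predict(features)
          return np.linalg.norm(features - recon, axis=1)     # large = possible damage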

  20. Investigation of optimal feature value set in false positive reduction process for automated abdominal lymph node detection method

    NASA Astrophysics Data System (ADS)

    Nakamura, Yoshihiko; Nimura, Yukitaka; Kitasaka, Takayuki; Mizuno, Shinji; Furukawa, Kazuhiro; Goto, Hidemi; Fujiwara, Michitaka; Misawa, Kazunari; Ito, Masaaki; Nawano, Shigeru; Mori, Kensaku

    2015-03-01

    This paper presents an investigation of the optimal feature value set in the false positive reduction process for an automated method of enlarged abdominal lymph node detection. We have developed an automated abdominal lymph node detection method to aid surgical planning, because it is important to understand the location and structure of an enlarged lymph node in order to make a suitable surgical plan. However, our previous method was not able to obtain a suitable feature value set; it detected 71.6% of the lymph nodes with 12.5 FPs per case. In this paper, we investigate the optimal feature value set in the false positive reduction process to improve the method for automated abdominal lymph node detection. By applying our improved method, using the optimal feature value set, to 28 cases of abdominal 3D CT images, we detected about 74.7% of the abdominal lymph nodes with 11.8 FPs/case.

  1. Automated night/day standoff detection, tracking, and identification of personnel for installation protection

    NASA Astrophysics Data System (ADS)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert

    2013-06-01

    The capability to positively and covertly identify people at a safe distance, 24-hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at distance.

  2. Automated processing integrated with a microflow cytometer for pathogen detection in clinical matrices

    PubMed Central

    Golden, J.P.; Verbarg, J.; Howell, P.B.; Shriver-Lake, L.C.; Ligler, F.S.

    2012-01-01

A spinning magnetic trap (MagTrap) for automated sample processing was integrated with a microflow cytometer capable of simultaneously detecting multiple targets to provide an automated sample-to-answer diagnosis in 40 min. After target capture on fluorescently coded magnetic microspheres, the magnetic trap automatically concentrated the fluorescently coded microspheres, separated the captured target from the sample matrix, and exposed the bound target sequentially to biotinylated tracer molecules and streptavidin-labeled phycoerythrin. The concentrated microspheres were then hydrodynamically focused in a microflow cytometer capable of 4-color analysis (two wavelengths for microsphere identification, one for light scatter to discriminate single microspheres and one for phycoerythrin bound to the target). A three-fold decrease in sample preparation time and an improved detection limit, independent of target preconcentration, was demonstrated for detection of Escherichia coli O157:H7 using the MagTrap as compared to manual processing. Simultaneous analysis of positive and negative controls, along with the assay reagents specific for the target, was used to obtain dose–response curves, demonstrating the potential for quantification of pathogen load in buffer and serum. PMID:22960010

  3. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early stage breast cancers which are occult in corresponding mammograms. In this paper, we proposed a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark to report the position of possible abnormalities in a breast or to guide image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied on a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (Anterior-Posterior) views and in 79% of the other views.

  4. Robust automated detection, segmentation, and classification of hepatic tumors from CT data

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Richbourg, William J.; Pamulapati, Vivek; Wang, Shijun; Summers, Ronald M.

    2012-03-01

The manuscript presents the automated detection and segmentation of hepatic tumors from abdominal CT images with variable acquisition parameters. After obtaining an initial segmentation of the liver, optimized graph cuts segment the liver tumor candidates using shape and enhancement constraints. One hundred and fifty-seven features are computed for the tumor candidates and support vector machines are used to select features and separate true and false detections. Training and testing are performed using leave-one-patient-out on 14 patients with a total of 79 tumors. After selection, the feature space is reduced to eight. The resulting sensitivity for tumor detection was 100% at 2.3 false positives/case. For the true tumors, 74.1% overlap and 1.6 mm average surface distance were recorded between the ground truth and the results of the automated method. Results from test data demonstrate the method's robustness to analyze livers from difficult clinical cases to allow the diagnoses and temporal monitoring of patients with hepatic cancer.
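
    For readers unfamiliar with the evaluation scheme mentioned above, the sketch below shows the generic pattern of SVM-based true/false detection separation validated with leave-one-patient-out cross-validation in scikit-learn. The feature matrix, labels, and patient IDs are placeholder data; this is an illustration of the scheme, not the paper's pipeline.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 157))      # 157 candidate features, one row per tumor candidate
    y = rng.integers(0, 2, 200)          # 1 = true tumor, 0 = false detection (placeholder labels)
    groups = rng.integers(0, 14, 200)    # patient ID for each candidate (14 patients)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(scores.mean())                 # per-patient held-out accuracy (chance level on this placeholder data)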

  5. Automated processing integrated with a microflow cytometer for pathogen detection in clinical matrices.

    PubMed

    Golden, J P; Verbarg, J; Howell, P B; Shriver-Lake, L C; Ligler, F S

    2013-02-15

A spinning magnetic trap (MagTrap) for automated sample processing was integrated with a microflow cytometer capable of simultaneously detecting multiple targets to provide an automated sample-to-answer diagnosis in 40 min. After target capture on fluorescently coded magnetic microspheres, the magnetic trap automatically concentrated the fluorescently coded microspheres, separated the captured target from the sample matrix, and exposed the bound target sequentially to biotinylated tracer molecules and streptavidin-labeled phycoerythrin. The concentrated microspheres were then hydrodynamically focused in a microflow cytometer capable of 4-color analysis (two wavelengths for microsphere identification, one for light scatter to discriminate single microspheres and one for phycoerythrin bound to the target). A three-fold decrease in sample preparation time and an improved detection limit, independent of target preconcentration, was demonstrated for detection of Escherichia coli O157:H7 using the MagTrap as compared to manual processing. Simultaneous analysis of positive and negative controls, along with the assay reagents specific for the target, was used to obtain dose-response curves, demonstrating the potential for quantification of pathogen load in buffer and serum. PMID:22960010

  6. Automated Framework for Detecting Lumen and Media-Adventitia Borders in Intravascular Ultrasound Images.

    PubMed

    Gao, Zhifan; Hau, William Kongto; Lu, Minhua; Huang, Wenhua; Zhang, Heye; Wu, Wanqing; Liu, Xin; Zhang, Yuan-Ting

    2015-07-01

    An automated framework for detecting lumen and media-adventitia borders in intravascular ultrasound images was developed on the basis of an adaptive region-growing method and an unsupervised clustering method. To demonstrate the capability of the framework, linear regression, Bland-Altman analysis and distance analysis were used to quantitatively investigate the correlation, agreement and spatial distance, respectively, between our detected borders and manually traced borders in 337 intravascular ultrasound images in vivo acquired from six patients. The results of these investigations revealed good correlation (r = 0.99), good agreement (>96.82% of results within the 95% confidence interval) and small average distance errors (lumen border: 0.08 mm, media-adventitia border: 0.10 mm) between the borders generated by the automated framework and the manual tracing method. The proposed framework was found to be effective in detecting lumen and media-adventitia borders in intravascular ultrasound images, indicating its potential for use in routine studies of vascular disease. PMID:25922134

  7. Procedure for Automated Eddy Current Crack Detection in Thin Titanium Plates

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A.

    2012-01-01

    This procedure provides the detailed instructions for conducting Eddy Current (EC) inspections of thin (5-30 mils) titanium membranes with thickness and material properties typical of the development of Ultra-Lightweight diaphragm Tanks Technology (ULTT). The inspection focuses on the detection of part-through, surface breaking fatigue cracks with depths between approximately 0.002" and 0.007" and aspect ratios (a/c) of 0.2-1.0 using an automated eddy current scanning and image processing technique.

  8. Automated immunomagnetic separation for the detection of Escherichia coli O157:H7 from spinach.

    PubMed

    Chen, Jing; Shi, Xianming; Gehring, Andrew G; Paoli, George C

    2014-06-01

    Escherichia coli O157:H7 is a major cause of foodborne illness and methods for rapid and sensitive detection of this deadly pathogen are needed to protect consumers. The use of immunomagnetic separation (IMS) for capturing and detecting foodborne pathogens has gained popularity, partially due to the introduction of automated and high throughput IMS instrumentation. Three methods for automated IMS that test different sample volumes, Kingfisher mL, Pathatrix Auto, and Pathatrix Ultra, were compared using microbiological detection of E. coli O157:H7 from buffered peptone water (BPW), in the presence of background microbial flora derived from spinach leaves, and from culture enrichments from artificially contaminated spinach leaves. The average efficiencies of capture of E. coli O157:H7 using the three methods were 32.1%, 3.7%, and 1.3%, respectively, in BPW; 43.4%, 8.8%, 2.9%, respectively, in the presence of spinach microbial flora; and 63.0%, 7.0%, and 6.3%, respectively, from artificially contaminated spinach. Despite the large differences in IMS capture efficiencies between the KingFisher and two Pathatrix methods, all three methods allowed the detection of E. coli O157:H7 from spinach that was artificially contaminated with the pathogen at relatively high (25 cfu/30 g sample) and low (1 cfu/30 g sample) levels after 4-6h of culture enrichment. The differences in capture efficiency were compensated for by the differences in sample volume used by the KingFisher mL (1 mL), Pathatrix Auto (50 mL) and Pathatrix Ultra (250 mL) instruments. Thus, despite the reduced capture efficiencies observed for the Pathatrix methods, the large increase in sample volume results in a greater number of captured cells for downstream detection resulting in improved detection sensitivity. PMID:24718031
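
    The compensation argument in the last two sentences can be checked with back-of-the-envelope arithmetic: at a fixed cell concentration, the expected number of captured cells scales with capture efficiency times sample volume. The snippet below is an illustration using the BPW efficiencies and nominal sample volumes quoted above, with a hypothetical concentration; it is not data from the study.

    methods = {                      # (capture efficiency in BPW, sample volume in mL)
        "KingFisher mL":   (0.321, 1),
        "Pathatrix Auto":  (0.037, 50),
        "Pathatrix Ultra": (0.013, 250),
    }
    concentration = 1.0              # hypothetical cells per mL in the enrichment
    for name, (eff, vol) in methods.items():
        expected = eff * vol * concentration
        print(f"{name}: ~{expected:.1f} cells captured")
    # Prints roughly 0.3, 1.9, and 3.3 cells, respectively: the larger sample volumes
    # more than compensate for the lower capture efficiencies, as the abstract notes.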

  9. Automated high-pressure titration system with in situ infrared spectroscopic detection.

    PubMed

    Thompson, Christopher J; Martin, Paul F; Chen, Jeffrey; Benezeth, Pascale; Schaef, Herbert T; Rosso, Kevin M; Felmy, Andrew R; Loring, John S

    2014-04-01

    A fully automated titration system with infrared detection was developed for investigating interfacial chemistry at high pressures. The apparatus consists of a high-pressure fluid generation and delivery system coupled to a high-pressure cell with infrared optics. A manifold of electronically actuated valves is used to direct pressurized fluids into the cell. Precise reagent additions to the pressurized cell are made with calibrated tubing loops that are filled with reagent and placed in-line with the cell and a syringe pump. The cell's infrared optics facilitate both transmission and attenuated total reflection (ATR) measurements to monitor bulk-fluid composition and solid-surface phenomena such as adsorption, desorption, complexation, dissolution, and precipitation. Switching between the two measurement modes is accomplished with moveable mirrors that direct the light path of a Fourier transform infrared spectrometer into the cell along transmission or ATR light paths. The versatility of the high-pressure IR titration system was demonstrated with three case studies. First, we titrated water into supercritical CO2 (scCO2) to generate an infrared calibration curve and determine the solubility of water in CO2 at 50 °C and 90 bar. Next, we characterized the partitioning of water between a montmorillonite clay and scCO2 at 50 °C and 90 bar. Transmission-mode spectra were used to quantify changes in the clay's sorbed water concentration as a function of scCO2 hydration, and ATR measurements provided insights into competitive residency of water and CO2 on the clay surface and in the interlayer. Finally, we demonstrated how time-dependent studies can be conducted with the system by monitoring the carbonation reaction of forsterite (Mg2SiO4) in water-bearing scCO2 at 50 °C and 90 bar. Immediately after water dissolved in the scCO2, a thin film of adsorbed water formed on the mineral surface, and the film thickness increased with time as the forsterite began to

  10. Automated high-pressure titration system with in situ infrared spectroscopic detection

    NASA Astrophysics Data System (ADS)

    Thompson, Christopher J.; Martin, Paul F.; Chen, Jeffrey; Benezeth, Pascale; Schaef, Herbert T.; Rosso, Kevin M.; Felmy, Andrew R.; Loring, John S.

    2014-04-01

    A fully automated titration system with infrared detection was developed for investigating interfacial chemistry at high pressures. The apparatus consists of a high-pressure fluid generation and delivery system coupled to a high-pressure cell with infrared optics. A manifold of electronically actuated valves is used to direct pressurized fluids into the cell. Precise reagent additions to the pressurized cell are made with calibrated tubing loops that are filled with reagent and placed in-line with the cell and a syringe pump. The cell's infrared optics facilitate both transmission and attenuated total reflection (ATR) measurements to monitor bulk-fluid composition and solid-surface phenomena such as adsorption, desorption, complexation, dissolution, and precipitation. Switching between the two measurement modes is accomplished with moveable mirrors that direct the light path of a Fourier transform infrared spectrometer into the cell along transmission or ATR light paths. The versatility of the high-pressure IR titration system was demonstrated with three case studies. First, we titrated water into supercritical CO2 (scCO2) to generate an infrared calibration curve and determine the solubility of water in CO2 at 50 °C and 90 bar. Next, we characterized the partitioning of water between a montmorillonite clay and scCO2 at 50 °C and 90 bar. Transmission-mode spectra were used to quantify changes in the clay's sorbed water concentration as a function of scCO2 hydration, and ATR measurements provided insights into competitive residency of water and CO2 on the clay surface and in the interlayer. Finally, we demonstrated how time-dependent studies can be conducted with the system by monitoring the carbonation reaction of forsterite (Mg2SiO4) in water-bearing scCO2 at 50 °C and 90 bar. Immediately after water dissolved in the scCO2, a thin film of adsorbed water formed on the mineral surface, and the film thickness increased with time as the forsterite began to dissolve

  11. Automated High-Pressure Titration System with In Situ Infrared Spectroscopic Detection

    SciTech Connect

    Thompson, Christopher J.; Martin, Paul F.; Chen, Jeffrey; Benezeth, Pascale; Schaef, Herbert T.; Rosso, Kevin M.; Felmy, Andrew R.; Loring, John S.

    2014-04-17

    A fully automated titration system with infrared detection was developed for investigating interfacial chemistry at high pressures. The apparatus consists of a high-pressure fluid generation and delivery system coupled to a high-pressure cell with infrared optics. A manifold of electronically actuated valves is used to direct pressurized fluids into the cell. Precise reagent additions to the pressurized cell are made with calibrated tubing loops that are filled with reagent and placed in-line with the cell and a syringe pump. The cell’s infrared optics facilitate both transmission and attenuated total reflection (ATR) measurements to monitor bulk-fluid composition and solid-surface phenomena such as adsorption, desorption, complexation, dissolution, and precipitation. Switching between the two measurement modes is accomplished with moveable mirrors that direct radiation from a Fourier transform infrared spectrometer into the cell along transmission or ATR light paths. The versatility of the high-pressure IR titration system is demonstrated with three case studies. First, we titrated water into supercritical CO2 (scCO2) to generate an infrared calibration curve and determine the solubility of water in CO2 at 50 °C and 90 bar. Next, we characterized the partitioning of water between a montmorillonite clay and scCO2 at 50 °C and 90 bar. Transmission-mode spectra were used to quantify changes in the clay’s sorbed water concentration as a function of scCO2 hydration, and ATR measurements provided insights into competitive residency of water and CO2 on the clay surface and in the interlayer. Finally, we demonstrated how time-dependent studies can be conducted with the system by monitoring the carbonation reaction of forsterite (Mg2SiO4) in water-bearing scCO2 at 50 °C and 90 bar. Immediately after water dissolved in the scCO2, a thin film of adsorbed water formed on the mineral surface, and the film thickness increased with time as the forsterite began to dissolve

  12. Development of an Automated Microfluidic System for DNA Collection, Amplification, and Detection of Pathogens

    SciTech Connect

    Hagan, Bethany S.; Bruckner-Lea, Cynthia J.

    2002-12-01

    This project was focused on developing and testing automated routines for a microfluidic Pathogen Detection System. The basic pathogen detection routine has three primary components; cell concentration, DNA amplification, and detection. In cell concentration, magnetic beads are held in a flow cell by an electromagnet. Sample liquid is passed through the flow cell and bacterial cells attach to the beads. These beads are then released into a small volume of fluid and delivered to the peltier device for cell lysis and DNA amplification. The cells are lysed during initial heating in the peltier device, and the released DNA is amplified using polymerase chain reaction (PCR) or strand displacement amplification (SDA). Once amplified, the DNA is then delivered to a laser induced fluorescence detection unit in which the sample is detected. These three components create a flexible platform that can be used for pathogen detection in liquid and sediment samples. Future developments of the system will include on-line DNA detection during DNA amplification and improved capture and release methods for the magnetic beads during cell concentration.

  13. Detection and quantification of circulating immature platelets: agreement between flow cytometric and automated detection.

    PubMed

    Ibrahim, Homam; Nadipalli, Srinivas; Usmani, Saba; DeLao, Timothy; Green, LaShawna; Kleiman, Neal S

    2016-07-01

Immature platelets, also termed reticulated platelets (RP), are platelets newly released into the circulation, and have been associated with a variety of pathological thrombotic events. They can be assessed by flow cytometry after staining with thiazole orange (TO) or by using a module added to a fully automated analyzer that is currently in wide clinical use and expressed as a fraction of the total platelet count (IPF). We sought to assess the correlation and agreement between these two methods. IPF was measured using the Sysmex XE 2100, and at the same time point we used TO staining and flow cytometry to measure RP levels. Two different gates were used for the flow cytometry method, 1 and 0.5 %. Measurements from the automated analyzer were then compared separately to measurements performed using each gate. Agreement between methods was assessed using the Bland-Altman method. Pearson's correlation coefficient was also calculated. 129 subjects were enrolled and stratified into 5 groups: (1) Healthy subjects, (2) End stage renal disease, (3) Chronic stable coronary artery disease, (4) Post Coronary artery bypass surgery, (5) Peripheral thrombocytopenia. Median IPF levels were increased for patients in groups 2, 3, 4 and 5 (4.0, 4.7, 4.3, and 8.3 % respectively) compared to healthy subjects (2.5 %) p = 0.0001. Although the observed correlation between the two methods tended to be good in patients with high IPF values (i.e., group 5), the overall observed correlation was poor (Pearson's correlation coefficient r = 0.27). Furthermore, there was poor agreement between the two methods in all groups. Despite the good correlation that was observed between the two methods at higher IPF values, the lack of agreement was significant. PMID:26831482
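
    The distinction the authors draw between correlation and agreement is easy to reproduce numerically. The sketch below, using illustrative toy data rather than the study's measurements, computes Pearson's r alongside Bland-Altman bias and 95% limits of agreement; a method that reads systematically higher than the reference can correlate well yet agree poorly.

    import numpy as np

    def compare_methods(a, b):
        r = np.corrcoef(a, b)[0, 1]                  # Pearson correlation
        diff = a - b
        bias = diff.mean()                           # mean difference (systematic offset)
        half_width = 1.96 * diff.std(ddof=1)         # 95% limits of agreement half-width
        return r, bias, (bias - half_width, bias + half_width)

    # Toy data: method b tracks method a (high r) but reads systematically higher,
    # so correlation is good while agreement is poor.
    rng = np.random.default_rng(2)
    a = rng.uniform(1, 10, 100)
    b = 1.8 * a + rng.normal(0, 0.3, 100)
    print(compare_methods(a, b))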

  14. Parametric probability distributions for anomalous change detection

    SciTech Connect

    Theiler, James P; Foy, Bernard R; Wohlberg, Brendt E; Scovel, James C

    2010-01-01

The problem of anomalous change detection arises when two (or possibly more) images are taken of the same scene, but at different times. The aim is to discount the 'pervasive differences' that occur throughout the imagery, due to the inevitably different conditions under which the images were taken (caused, for instance, by differences in illumination, atmospheric conditions, sensor calibration, or misregistration), and to focus instead on the 'anomalous changes' that actually take place in the scene. In general, anomalous change detection algorithms attempt to model these normal or pervasive differences, based on data taken directly from the imagery, and then identify as anomalous those pixels for which the model does not hold. For many algorithms, these models are expressed in terms of probability distributions, and there is a class of such algorithms that assume the distributions are Gaussian. By considering a broader class of distributions, however, a new class of anomalous change detection algorithms can be developed. We consider several parametric families of such distributions, derive the associated change detection algorithms, and compare the performance with standard algorithms that are based on Gaussian distributions. We find that it is often possible to significantly outperform these standard algorithms, even using relatively simple non-Gaussian models.
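
    For orientation, one of the simplest Gaussian-model baselines in this family scores each co-registered pixel pair by its Mahalanobis distance under a joint Gaussian fitted to the stacked imagery; the paper's contribution is to replace that Gaussian with broader parametric families. The sketch below implements only that simple baseline on toy data, not the authors' non-Gaussian detectors.

    import numpy as np

    def gaussian_acd(img_x, img_y):
        """img_x, img_y: co-registered (H, W, Bx) and (H, W, By) images."""
        h, w = img_x.shape[:2]
        z = np.concatenate([img_x.reshape(h * w, -1),
                            img_y.reshape(h * w, -1)], axis=1)     # stacked pixel pairs
        zc = z - z.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(z, rowvar=False))
        score = np.einsum("ij,jk,ik->i", zc, inv_cov, zc)          # squared Mahalanobis distance
        return score.reshape(h, w)                                 # high values = anomalous pairs

    rng = np.random.default_rng(6)
    before = rng.normal(size=(32, 32, 3))
    after = before + rng.normal(scale=0.1, size=(32, 32, 3))       # pervasive, correlated differences
    after[10, 12] += 3.0                                           # one genuinely changed pixel
    scores = gaussian_acd(before, after)
    print(np.unravel_index(scores.argmax(), scores.shape))         # the altered pixel scores highest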

  15. Development of adapted GMR-probes for automated detection of hidden defects in thin steel sheets

    NASA Astrophysics Data System (ADS)

    Pelkner, Matthias; Pohl, Rainer; Kreutzbruck, Marc; Commandeur, Colin

    2016-02-01

Thin steel sheets with a thickness of 0.3 mm and less are the base materials of many everyday life products (cans, batteries, etc.). Potential inhomogeneities such as non-metallic inclusions inside the steel can lead to a rupture of the sheets when they are formed into a product such as a beverage can. Therefore, there is a need to develop automated NDT techniques to detect hidden defects and inclusions in thin sheets during production. For this purpose Tata Steel Europe and BAM, the Federal Institute for Materials Research and Testing (Germany), collaborate in order to develop an automated NDT-system. Defect detection systems have to be robust against external influences, especially when used in an industrial environment. In addition, such a facility has to achieve a high sensitivity and a high spatial resolution in terms of detecting small inclusions in the μm-regime. In a first step, we carried out a feasibility study to determine which testing method is promising for detecting hidden defects and inclusions inside ferrous thin steel sheets. Two methods were investigated in more detail: magnetic flux leakage testing (MFL) using giant magneto resistance sensor arrays (GMR) as receivers [1,2] and eddy current testing (ET). The capabilities of both methods were tested with 0.2 mm-thick steel samples containing small defects with depths ranging from 5 µm up to 60 µm. Only in the case of GMR-MFL testing were we able to reliably detect parts of the hidden defects with a depth of 10 µm, with an SNR better than 10 dB. Here, the lift-off between sensor and surface was 250 µm. On this basis, we investigated different testing scenarios including velocity tests and different lift-offs. In this contribution we present the results of the feasibility study leading to first prototypes of GMR-probes which are now installed as part of a demonstrator inside a production line.

  16. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data

    NASA Astrophysics Data System (ADS)

    Ali, Abder-Rahman A.; Deserno, Thomas M.

    2012-02-01

Malignant melanoma is the third most frequent type of skin cancer and one of the most malignant tumors, accounting for 79% of skin cancer deaths. Melanoma is highly curable if diagnosed early and treated properly, as the survival rate varies between 15% and 65% from early to terminal stages, respectively. So far, melanoma diagnosis depends subjectively on the dermatologist's expertise. Computer-aided diagnosis (CAD) systems based on epiluminescence light microscopy can provide an objective second opinion on pigmented skin lesions (PSL). This work systematically analyzes the evidence of the effectiveness of automated melanoma detection in images from a dermatoscopic device. Automated CAD applications were analyzed to estimate their diagnostic outcome. Searching online databases for publication dates between 1985 and 2011, a total of 182 studies on dermatoscopic CAD were found. With respect to the systematic selection criteria, 9 studies were included, published between 2002 and 2011. Those studies formed databases of 14,421 dermatoscopic images including both malignant "melanoma" and benign "nevus", with 8,110 images being available ranging in resolution from 150 x 150 to 1568 x 1045 pixels. The maximum and minimum of sensitivity and specificity are 100.0% and 80.0% as well as 98.14% and 61.6%, respectively. Area under the receiver operator characteristics (AUC) and pooled sensitivity, specificity and diagnostic odds ratio (DOR) are respectively 0.87, 0.90, 0.81, and 15.89. So, although automated melanoma detection showed good accuracy in terms of sensitivity, specificity, and AUC, diagnostic performance in terms of DOR was found to be poor. This might be due to the lack of dermatoscopic image resources (ground truth) that are needed for comprehensive assessment of diagnostic performance. In future work, we aim at testing this hypothesis by joining dermatoscopic images into a unified database that serves as a standard reference for dermatology related research in

  17. Automated detection of cortical dysplasia type II in MRI-negative epilepsy

    PubMed Central

    Hong, Seok-Jun; Kim, Hosung; Schrader, Dewi; Bernasconi, Neda; Bernhardt, Boris C.

    2014-01-01

    Objective: To detect automatically focal cortical dysplasia (FCD) type II in patients with extratemporal epilepsy initially diagnosed as MRI-negative on routine inspection of 1.5 and 3.0T scans. Methods: We implemented an automated classifier relying on surface-based features of FCD morphology and intensity, taking advantage of their covariance. The method was tested on 19 patients (15 with histologically confirmed FCD) scanned at 3.0T, and cross-validated using a leave-one-out strategy. We assessed specificity in 24 healthy controls and 11 disease controls with temporal lobe epilepsy. Cross-dataset classification performance was evaluated in 20 healthy controls and 14 patients with histologically verified FCD examined at 1.5T. Results: Sensitivity was 74%, with 100% specificity (i.e., no lesions detected in healthy or disease controls). In 50% of cases, a single cluster colocalized with the FCD lesion, while in the remaining cases a median of 1 extralesional cluster was found. Applying the classifier (trained on 3.0T data) to the 1.5T dataset yielded comparable performance (sensitivity 71%, specificity 95%). Conclusion: In patients initially diagnosed as MRI-negative, our fully automated multivariate approach offered a substantial gain in sensitivity over standard radiologic assessment. The proposed method showed generalizability across cohorts, scanners, and field strengths. Machine learning may assist presurgical decision-making by facilitating hypothesis formulation about the epileptogenic zone. Classification of evidence: This study provides Class II evidence that automated machine learning of MRI patterns accurately identifies FCD among patients with extratemporal epilepsy initially diagnosed as MRI-negative. PMID:24898923

  18. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

The last decade has seen a widespread publication of crater detection algorithms (CDA) with increasing detection performances. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA are 10 pixels in diameter (or about 2 km in MOC-WA images) [2] or can go down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is making it possible to unveil craters not perceived before; thus, automated approaches seem a natural way of detecting the myriad of these structures. In this study we present the efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived on the images, something that has not been addressed yet in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating a possible presence of a crater) using mathematical morphology operators (connected operators of different sizes) and on the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested in Martian HiRISE images, from different locations on the planet, in order to cover the largest surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters
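
    The texture-classification stage described above follows a common pattern: Haar-like features computed from an integral image and fed to an AdaBoost classifier. The sketch below shows that generic pattern with scikit-image and scikit-learn on placeholder patches; it is not the authors' trained detector, and the feature type, patch size, and labels are arbitrary illustrative choices.

    import numpy as np
    from skimage.transform import integral_image
    from skimage.feature import haar_like_feature
    from sklearn.ensemble import AdaBoostClassifier

    def haar_features(patch):
        ii = integral_image(patch)
        # all two-rectangle (type-2-x) Haar-like features over the full patch window
        return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                                 feature_type="type-2-x")

    rng = np.random.default_rng(3)
    patches = rng.random((200, 12, 12))          # candidate regions (crater / non-crater)
    labels = rng.integers(0, 2, 200)             # placeholder ground-truth labels
    X = np.array([haar_features(p) for p in patches])

    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, labels)
    predictions = clf.predict(X[:5])             # crater / non-crater decisions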

  19. A Validation of Automated and Quality Controlled Satellite Based Fire Detection

    NASA Astrophysics Data System (ADS)

    Ruminski, M. G.; Hanna, J.

    2010-12-01

The Satellite Analysis Branch (SAB) of NOAA/NESDIS performs a daily fire analysis for North America utilizing GOES, NOAA POES and MODIS satellite data. Automated fire detection algorithms are employed for each of the sensors. The automated detections are evaluated against the underlying satellite imagery by analysts, with detections that are believed to be false positives removed and missed fires added to the analysis. Previous validation of automated detections has typically utilized very high resolution satellite data, such as ASTER (30m), coincident in space and time with the sensor being validated. While this approach is useful for evaluating algorithm detection capability at a specific time for fires that are not obscured, there is a high likelihood that it does not provide a comprehensive evaluation based on all fire occurrences for the day. Fires that occur before or after the satellite overpass would not be included and those that are obscured by clouds would also not be accounted for. These are important considerations in assessing climatology and for emission estimates. This study utilizes ground based reports from Florida, Montana, Idaho and South Carolina which have well established reporting and permitting procedures. These ground reports are primarily agricultural and prescribed burns for which permits are required. While it is possible that permits are obtained but the burn is not performed, it is felt that this represents a small fraction of the number reported based on communication with permitting officials. Only the Probability Of Detection (POD) is computed. A positive detection occurs for satellite detections within 8 km of a reported fire. This buffer is employed to allow for known satellite navigation errors. Determining false positive detections would not be reliable since there is no way of knowing with certainty that a detected fire did not actually occur at a location. It could easily be an unreported fire. Results for Florida based on daily
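
    The matching rule is simple to state in code: a ground report counts as detected if any satellite fire pixel falls within the 8 km buffer, and POD is the detected fraction. The sketch below uses hypothetical coordinates purely to illustrate the computation; it is not the study's data or software.

    import numpy as np

    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2 +
             np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))      # great-circle distance in km

    def probability_of_detection(reports, detections, buffer_km=8.0):
        det_lat = np.array([d[0] for d in detections])
        det_lon = np.array([d[1] for d in detections])
        hits = 0
        for rlat, rlon in reports:
            hits += bool((haversine_km(rlat, rlon, det_lat, det_lon) <= buffer_km).any())
        return hits / len(reports)

    reports = [(28.10, -81.60), (27.95, -81.20)]       # permitted burns (hypothetical)
    detections = [(28.14, -81.55), (26.50, -80.90)]    # satellite fire pixels (hypothetical)
    print(probability_of_detection(reports, detections))   # 0.5 in this toy case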

  20. Automated Detection and Extraction of Coronal Dimmings from SDO/AIA Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair R.; Attrill, G. D. R.; Wills-Davey, M. J.

    2010-05-01

    The sheer volume of data anticipated from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) highlights the necessity for the development of automatic detection methods for various types of solar activity. Initially recognised in the 1970s, it is now well established that coronal dimmings are closely associated with coronal mass ejections (CMEs), and are particularly recognised as an indicator of front-side (halo) CMEs, which can be difficult to detect in white-light coronagraph data. An automated coronal dimming region detection and extraction algorithm removes visual observer bias from determination of physical quantities such as spatial location, area and volume. This allows reproducible, quantifiable results to be mined from very large datasets. The information derived may facilitate more reliable early space weather detection, as well as offering the potential for conducting large-sample studies focused on determining the geoeffectiveness of CMEs, coupled with analysis of their associated coronal dimmings. We present examples of dimming events extracted using our algorithm from existing EUV data, demonstrating the potential for the anticipated application to SDO/AIA data. Metadata returned by our algorithm include: location, area, volume, mass and dynamics of coronal dimmings. As well as running on historic datasets, this algorithm is capable of detecting and extracting coronal dimmings in near real-time. The coronal dimming detection and extraction algorithm described in this poster is part of the SDO/Computer Vision Center effort hosted at SAO (Martens et al., 2009). We acknowledge NASA grant NNH07AB97C.

  1. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-04-01

    Automated source extraction and parameterization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper we present a new algorithm, dubbed CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parameterization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, including also different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the ASKAP-EMU survey. The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
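
    The two stages named above (pre-filtering, then superpixel clustering) can be mimicked on a toy image with off-the-shelf tools, for example a median filter followed by SLIC superpixels. The sketch below is a rough stand-in for that processing pattern, assuming a recent scikit-image; it is not the CAESAR library itself, and the filter size, segment count, and threshold are illustrative choices.

    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.segmentation import slic

    rng = np.random.default_rng(4)
    image = rng.normal(0.0, 1.0, (256, 256))             # noise-like background
    image[100:140, 80:160] += 5.0                        # extended "source"

    denoised = median_filter(image, size=5)              # pre-filtering stage
    norm = (denoised - denoised.min()) / np.ptp(denoised)    # scale to [0, 1] for SLIC
    labels = slic(norm, n_segments=200, compactness=0.1, channel_axis=None)

    # Keep superpixels whose mean brightness stands out from the background level
    segment_ids = np.unique(labels)
    means = np.array([norm[labels == sid].mean() for sid in segment_ids])
    background, spread = np.median(means), means.std()
    candidate_segments = segment_ids[means > background + 3 * spread]
    print(len(candidate_segments), "candidate source segments")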

  2. Semi-Automated, Occupationally Safe Immunofluorescence Microtip Sensor for Rapid Detection of Mycobacterium Cells in Sputum

    PubMed Central

    Soelberg, Scott D.; Weigel, Kris M.; Hiraiwa, Morgan; Cairns, Andrew; Lee, Hyun-Boo; Furlong, Clement E.; Oh, Kieseok; Lee, Kyong-Hoon; Gao, Dayong; Chung, Jae-Hyun; Cangelosi, Gerard A.

    2014-01-01

An occupationally safe (biosafe) sputum liquefaction protocol was developed for use with a semi-automated antibody-based microtip immunofluorescence sensor. The protocol effectively liquefied sputum and inactivated microorganisms including Mycobacterium tuberculosis, while preserving the antibody-binding activity of Mycobacterium cell surface antigens. Sputum was treated with a synergistic chemical-thermal protocol that included moderate concentrations of NaOH and detergent at 60°C for 5 to 10 min. Samples spiked with M. tuberculosis complex cells showed approximately 10^6-fold inactivation of the pathogen after treatment. Antibody binding was retained post-treatment, as determined by analysis with a microtip immunosensor. The sensor correctly distinguished between Mycobacterium species and other cell types naturally present in biosafe-treated sputum, with a detection limit of 100 CFU/mL for M. tuberculosis, in a 30-minute sample-to-result process. The microtip device was also semi-automated and shown to be compatible with low-cost, LED-powered fluorescence microscopy. The device and biosafe sputum liquefaction method opens the door to rapid detection of tuberculosis in settings with limited laboratory infrastructure. PMID:24465845

  3. Automated detection and quantification of single RNAs at cellular resolution in zebrafish embryos.

    PubMed

    Stapel, L Carine; Lombardot, Benoit; Broaddus, Coleman; Kainmueller, Dagmar; Jug, Florian; Myers, Eugene W; Vastenhouw, Nadine L

    2016-02-01

    Analysis of differential gene expression is crucial for the study of cell fate and behavior during embryonic development. However, automated methods for the sensitive detection and quantification of RNAs at cellular resolution in embryos are lacking. With the advent of single-molecule fluorescence in situ hybridization (smFISH), gene expression can be analyzed at single-molecule resolution. However, the limited availability of protocols for smFISH in embryos and the lack of efficient image analysis pipelines have hampered quantification at the (sub)cellular level in complex samples such as tissues and embryos. Here, we present a protocol for smFISH on zebrafish embryo sections in combination with an image analysis pipeline for automated transcript detection and cell segmentation. We use this strategy to quantify gene expression differences between different cell types and identify differences in subcellular transcript localization between genes. The combination of our smFISH protocol and custom-made, freely available, analysis pipeline will enable researchers to fully exploit the benefits of quantitative transcript analysis at cellular and subcellular resolution in tissues and embryos. PMID:26700682

  4. Automated Aflatoxin Analysis Using Inline Reusable Immunoaffinity Column Cleanup and LC-Fluorescence Detection.

    PubMed

    Rhemrev, Ria; Pazdanska, Monika; Marley, Elaine; Biselli, Scarlett; Staiger, Simone

    2015-01-01

    A novel reusable immunoaffinity cartridge containing monoclonal antibodies to aflatoxins coupled to a pressure resistant polymer has been developed. The cartridge is used in conjunction with a handling system inline to LC with fluorescence detection to provide fully automated aflatoxin analysis for routine monitoring of a variety of food matrixes. The handling system selects an immunoaffinity cartridge from a tray and automatically applies the sample extract. The cartridge is washed, then aflatoxins B1, B2, G1, and G2 are eluted and transferred inline to the LC system for quantitative analysis using fluorescence detection with postcolumn derivatization using a KOBRA® cell. Each immunoaffinity cartridge can be used up to 15 times without loss in performance, offering increased sample throughput and reduced costs compared to conventional manual sample preparation and cleanup. The system was validated in two independent laboratories using samples of peanuts and maize spiked at 2, 8, and 40 μg/kg total aflatoxins, and paprika, nutmeg, and dried figs spiked at 5, 20, and 100 μg/kg total aflatoxins. Recoveries exceeded 80% for both aflatoxin B1 and total aflatoxins. The between-day repeatability ranged from 2.1 to 9.6% for aflatoxin B1 for the six levels and five matrixes. Satisfactory Z-scores were obtained with this automated system when used for participation in proficiency testing (FAPAS®) for samples of chilli powder and hazelnut paste containing aflatoxins. PMID:26651571

  5. Electrochemical pesticide detection with AutoDip--a portable platform for automation of crude sample analyses.

    PubMed

    Drechsel, Lisa; Schulz, Martin; von Stetten, Felix; Moldovan, Carmen; Zengerle, Roland; Paust, Nils

    2015-02-01

Lab-on-a-chip devices hold promise for automation of complex workflows from sample to answer with minimal consumption of reagents in portable devices. However, complex, inhomogeneous samples as they occur in environmental or food analysis may block microchannels and thus often cause malfunction of the system. Here we present the novel AutoDip platform which is based on the movement of a solid phase through the reagents and sample instead of transporting a sequence of reagents through a fixed solid phase. A ball-pen mechanism operated by an external actuator automates unit operations such as incubation and washing by consecutively dipping the solid phase into the corresponding liquids. The platform is applied to electrochemical detection of organophosphorus pesticides in real food samples using an acetylcholinesterase (AChE) biosensor. Minimal sample preparation and an integrated reagent pre-storage module hold promise for easy handling of the assay. Detection of the pesticide chlorpyrifos-oxon (CPO) spiked into apple samples at concentrations of 10^-7 M has been demonstrated. This concentration is below the maximum residue level for chlorpyrifos in apples defined by the European Commission. PMID:25415182

  6. Intact mass detection, interpretation, and visualization to automate Top-Down proteomics on a large scale

    PubMed Central

    Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.

    2011-01-01

    Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We further these advances as a key aspect of realizing Top-Down MS on a proteomic scale. PMID:20848673

  7. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  8. An automated and integrated framework for dust storm detection based on ogc web processing services

    NASA Astrophysics Data System (ADS)

    Xiao, F.; Shea, G. Y. K.; Wong, M. S.; Campbell, J.

    2014-11-01

    Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climatic modelling as it is known to have a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is coupled with the scientific computation advancement, ongoing computing platform development, and the development of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines Geo-Processing frameworks, scientific models and EO data together to enable the dust storm detection and tracking processes in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Services (WPS) initiated by Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, dust storm detecting and tracking component, and service chain orchestration engine. The EO data processing component is implemented based on OPeNDAP standard. The dust storm detecting and tracking component combines three earth scientific models, which are SBDART model (for computing aerosol optical depth (AOT) of dust particles), WRF model (for simulating meteorological parameters) and HYSPLIT model (for simulating the dust storm transport processes). The service chain orchestration engine is implemented based on Business Process Execution Language for Web Service (BPEL4WS) using open-source software. The output results, including horizontal and vertical AOT distribution of dust particles as well as their transport paths, were represented using KML/XML and displayed in Google Earth. A serious dust storm, which occurred over East Asia from 26 to 28 Apr 2012, is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data

  9. Detection of abrupt changes in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1984-01-01

Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques, residual-based methods, and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
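
    For reference, a standard textbook form of the generalized likelihood ratio (GLR) test for a jump of unknown magnitude and unknown onset time θ, written in terms of the Kalman filter innovations γ(t) with covariance V(t) and a failure signature matrix G(t; θ), is sketched below in LaTeX notation (this is the generic formulation, not text quoted from the report):

    d(k,\theta) = \sum_{t=\theta}^{k} G(t;\theta)^{\mathsf{T}} V(t)^{-1} \gamma(t),
    \qquad
    C(k,\theta) = \sum_{t=\theta}^{k} G(t;\theta)^{\mathsf{T}} V(t)^{-1} G(t;\theta),

    \ell(k,\theta) = d(k,\theta)^{\mathsf{T}} C(k,\theta)^{-1} d(k,\theta),
    \qquad
    \text{declare a change at time } k \text{ if } \max_{\theta \le k} \ell(k,\theta) > \eta .

    The maximization over the unknown onset time θ is what drives the complexity issue noted above: in principle one filter or matched signature must be maintained per candidate onset time, so practical implementations restrict θ to a sliding window.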

  10. Detecting Landscape Change: The View from Above

    ERIC Educational Resources Information Center

    Porter, Jess

    2008-01-01

    This article will demonstrate an approach for discovering and assessing local landscape change through the use of remotely sensed images. A brief introduction to remotely sensed imagery is followed by a discussion of relevant ways to introduce this technology into the college science classroom. The Map Detective activity demonstrates the…

  11. Automated lesion detection in dynamic contrast enhanced magnetic resonance imaging of breast

    NASA Astrophysics Data System (ADS)

    Liang, Xi; Kotagiri, Romamohanarao; Frazer, Helen; Yang, Qing

    2015-03-01

We propose an automated lesion detection method to assist radiologists in interpreting dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of the breast. The aim is to highlight suspicious regions of interest to reduce the time spent searching for lesions and the possibility of radiologists overlooking small regions. In our method, we locate the suspicious regions by applying a threshold on essential features. The features are normalized to reduce the variation between patients. A support vector machine classifier is then applied to exclude normal tissues from these regions, using both kinetic and morphological features extracted from the lesions. In the evaluation of the system on 21 patients with 50 lesions, all lesions were successfully detected with 5.02 false positive regions per breast.

  12. Fully automated and adaptive detection of amyloid plaques in stained brain sections of Alzheimer transgenic mice.

    PubMed

    Feki, Abdelmonem; Teboul, Olivier; Dubois, Albertine; Bozon, Bruno; Faure, Alexis; Hantraye, Philippe; Dhenain, Marc; Delatour, Benoit; Delzescaux, Thierry

    2007-01-01

Automated detection of amyloid plaques (AP) in post mortem brain sections of patients with Alzheimer disease (AD) or in mouse models of the disease is a major issue for improving the quantitative, standardized and accurate assessment of neuropathological lesions as well as of their modulation by treatment. We propose a new segmentation method to automatically detect amyloid plaques in Congo Red stained sections based on adaptive thresholds and a dedicated amyloid plaque/tissue modelling. A set of histological sections focusing on anatomical structures was used to validate the method in comparison to expert segmentation. Original information concerning global amyloid load has been derived from 6 mouse brains, which opens new perspectives for the extensive analysis of such data in 3-D and the possibility to integrate in vivo-post mortem information for diagnosis purposes. PMID:18044661

  13. Monitoring gypsy moth defoliation by applying change detection techniques to Landsat imagery

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Stauffer, M. L.

    1978-01-01

    The overall objective of a research effort at NASA's Goddard Space Flight Center is to develop and evaluate digital image processing techniques that will facilitate the assessment of the intensity and spatial distribution of forest insect damage in Northeastern U.S. forests using remotely sensed data from Landsats 1, 2 and C. Automated change detection techniques are presently being investigated as a method of isolating the areas of change in the forest canopy resulting from pest outbreaks. In order to follow the change detection approach, Landsat scene correction and overlay capabilities are utilized to provide multispectral/multitemporal image files of 'defoliation' and 'nondefoliation' forest stand conditions.
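
    A minimal sketch of the change-detection idea (difference co-registered multitemporal bands, then threshold the residual) is given below. It is a generic illustration of image differencing, not the Goddard processing chain; the band, threshold factor, and data are placeholders.

    import numpy as np

    def change_mask(before, after, k=3.0):
        """before, after: co-registered single-band images (e.g. a red or NIR band)."""
        diff = after.astype(float) - before.astype(float)
        mu, sigma = diff.mean(), diff.std()
        return np.abs(diff - mu) > k * sigma          # True where the change is significant

    rng = np.random.default_rng(5)
    before = rng.normal(100, 5, (64, 64))             # first-date band values
    after = before + rng.normal(0, 2, (64, 64))       # second date, mostly unchanged
    after[20:30, 20:40] -= 40                         # simulated defoliation patch
    print(change_mask(before, after).sum(), "changed pixels")   # roughly the 200 simulated pixels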

  14. Noninvasive Real-Time Automated Skin Lesion Analysis System for Melanoma Early Detection and Prevention

    PubMed Central

    Abuzaghleh, Omar; Barkana, Buket D.

    2015-01-01

Melanoma spreads through metastasis, and therefore, it has proved to be very fatal. Statistical evidence has revealed that the majority of deaths resulting from skin cancer are as a result of melanoma. Further investigations have shown that the survival rates in patients depend on the stage of the cancer; early detection and intervention of melanoma implicate higher chances of cure. Clinical diagnosis and prognosis of melanoma are challenging, since the processes are prone to misdiagnosis and inaccuracies due to doctors’ subjectivity. Malignant melanomas are asymmetrical, have irregular borders, notched edges, and color variations, so analyzing the shape, color, and texture of the skin lesion is important for the early detection and prevention of melanoma. This paper proposes the two major components of a noninvasive real-time automated skin lesion analysis system for the early detection and prevention of melanoma. The first component is a real-time alert to help users prevent skin burn caused by sunlight; a novel equation to compute the time for skin to burn is thereby introduced. The second component is an automated image analysis module, which contains image acquisition, hair detection and exclusion, lesion segmentation, feature extraction, and classification. The proposed system uses the PH2 Dermoscopy image database from Pedro Hispano Hospital for development and testing purposes. The image database contains a total of 200 dermoscopy images of lesions, including benign, atypical, and melanoma cases. The experimental results show that the proposed system is efficient, achieving classification of the benign, atypical, and melanoma images with accuracy of 96.3%, 95.7%, and 97.5%, respectively. PMID:27170906

  15. Automatic change detection using mobile laser scanning

    NASA Astrophysics Data System (ADS)

    Hebel, M.; Hammer, M.; Gordon, M.; Arens, M.

    2014-10-01

    Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement of IEDs, etc. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, which incorporates the sensor positions in the processing of the 3D point clouds. This allows extracting the information that is included in the data acquisition geometry. For each single range measurement, it becomes apparent that an object reflects laser pulses in the measured range distance, i.e., space is occupied at that 3D position. In addition, it is obvious that space is empty along the line of sight between sensor and the reflecting object. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts of empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals.
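
    The occupancy-grid reasoning can be illustrated in two dimensions: each ray marks the traversed cells as empty and the hit cell as occupied, and a conflict between epochs (empty-then-occupied, or occupied-then-empty) flags a change, while occluded cells stay unknown and raise no alarm. The sketch below is a simplified stand-in for that logic on a toy grid, not the authors' MLS pipeline.

    import numpy as np

    UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

    def build_grid(sensor, hits, shape=(100, 100)):
        grid = np.full(shape, UNKNOWN, dtype=np.uint8)
        for hx, hy in hits:
            n = int(max(abs(hx - sensor[0]), abs(hy - sensor[1])))
            for t in np.linspace(0.0, 1.0, n + 1)[:-1]:            # cells along the ray
                x = int(round(sensor[0] + t * (hx - sensor[0])))
                y = int(round(sensor[1] + t * (hy - sensor[1])))
                if grid[x, y] != OCCUPIED:
                    grid[x, y] = EMPTY                             # free space along the line of sight
            grid[hx, hy] = OCCUPIED                                # the reflecting object
        return grid

    def changes(grid_t0, grid_t1):
        appeared = (grid_t0 == EMPTY) & (grid_t1 == OCCUPIED)
        vanished = (grid_t0 == OCCUPIED) & (grid_t1 == EMPTY)
        return appeared, vanished

    g0 = build_grid((0, 0), [(60, 60)])      # first epoch: a wall at (60, 60)
    g1 = build_grid((0, 0), [(30, 30)])      # second epoch: a new object blocks the ray
    appeared, vanished = changes(g0, g1)
    print(appeared.sum(), vanished.sum())    # 1 appeared cell; the occluded wall is not falsely flagged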

  16. A Method Detection Limit for Bacillus anthracis Spores in Water Using an Automated Waterborne Pathogen Concentrator.

    PubMed

    Humrighouse, Ben; Pemberton, Adin; Gallardo, Vicente; Lindquist, H D Alan; LaBudde, Robert

    2015-01-01

    The method detection limit (MDL, 99% chance of detecting a positive result in a single replicate), as per the United States Code of Federal Regulations, was determined for a protocol using an ultrafiltration-based automated waterborne pathogen concentration device. Bacillus anthracis Sterne strain spores were seeded at low levels into 100 L reagent water samples. Suspect colonies were confirmed through morphological, chemical, and genetic tests. Samples of 100 L (n=14) of reagent water were seeded with five B. anthracis CFUs each. To confirm the estimated detection limit, a second set (n=19) of 100 L reagent water samples was seeded at a higher level (7 CFUs). The second estimate of the MDL could not be pooled with the first, due to a significant difference in variance. A third trial (n=7) seeded with 10 CFUs produced an estimate of the MDL that could be pooled with the higher previous estimate. Another trial consisting of eight 100 L samples of tap water was seeded with approximately 7 CFUs. Recovery in these samples was not significantly different from the pooled MDL. Theoretically, a concentration of 4.6 spores/100 L would be required for detection 95% of the time, based on a Poisson distribution. The calculated pooled MDL, based on experimental data, was approximately 6 B. anthracis CFU/100 L (95% confidence interval 4.8 to 8.4). Detection at this level was achieved in municipal water samples. PMID:26268983
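
    As a hedged aside on the Poisson reasoning quoted above (assuming every seeded spore is recovered and counted, which the study itself does not claim), the probability of observing at least one CFU when the mean count per sample is λ follows directly from the Poisson distribution:

    \[
    P(\text{detect}) = 1 - e^{-\lambda},
    \qquad\text{so}\qquad
    \lambda_{\min} = -\ln(1 - p),
    \]
    \[
    p = 0.95 \;\Rightarrow\; \lambda_{\min} \approx 3.0,
    \qquad\qquad
    p = 0.99 \;\Rightarrow\; \lambda_{\min} \approx 4.6 \ \text{spores}/100\,\mathrm{L}.
    \]

    The experimentally derived MDL of roughly 6 CFU/100 L sits above this idealized bound, as one would expect once concentration and recovery losses are taken into account.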

  17. Facile electrochemical method and corresponding automated instrument for the detection of furfural in insulation oil.

    PubMed

    Wang, Ruili; Huang, Xinjian; Wang, Lishi

    2016-02-01

    Determining the concentration of furfural in the insulation oil of a transformer has been established as a method to evaluate the health status of the transformer. However, the detection of furfural usually requires expensive instruments and/or time-consuming laboratory operations. In this paper, we propose a convenient electrochemical method for this detection. Furfural is quantified by extraction from the oil phase to the aqueous phase, followed by reductive detection with differential pulse voltammetry (DPV) at a mercury electrode. The method is very sensitive, with a limit of detection, expressed as furfural contained in oil, estimated to be 0.03 μg g(-1). Furthermore, excellent linearity is obtained in the range of 0-10 μg g(-1). These features make the method well suited to the determination of furfural in real samples. A fully automated instrument that performs the extraction and detection operations was developed; it completes the whole measurement within eight minutes. The methodology and the instrument were tested with real samples, and the close agreement between results obtained with this instrument and with HPLC indicates that the proposed method, together with the instrument, can be employed as a facile tool to diagnose the health status of aged transformers. PMID:26653467
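
    The quantification step implied by the reported linear range amounts to a one-line calibration: fit DPV peak current against furfural concentration for a set of standards, then invert the fit for an unknown. The sketch below uses made-up peak currents purely for illustration; the 0-10 μg g(-1) range is the only figure taken from the abstract.

    ```python
    # Hypothetical calibration sketch: linear fit of peak current vs. concentration,
    # inverted to quantify an unknown oil extract. All current values are invented.
    import numpy as np

    conc_ug_per_g = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # standards
    peak_current_uA = np.array([0.02, 0.51, 1.03, 2.05, 3.01, 4.08, 5.02])  # DPV peaks

    slope, intercept = np.polyfit(conc_ug_per_g, peak_current_uA, deg=1)

    def quantify(peak_uA):
        """Invert the linear calibration to get a concentration in ug/g."""
        return (peak_uA - intercept) / slope

    print(f"calibration: i = {slope:.3f}*c + {intercept:.3f} uA")
    print(f"unknown with 2.6 uA peak -> {quantify(2.6):.2f} ug/g furfural")
    ```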

  18. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to the baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
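
    The duty-cycling scheme described above reduces to a simple round-robin loop: wake one camera, grab a short burst of frames, run detection on the burst, power the camera back down, move on. The sketch below is a toy with mock camera objects and arbitrary burst sizes, not the CT2WS hardware interface.

    ```python
    # Toy round-robin duty cycling: only one sensor is powered at a time,
    # a fixed burst of frames is grabbed, then the next sensor is woken.
    import itertools

    class MockCamera:
        def __init__(self, name):
            self.name, self.powered = name, False
        def power_on(self):  self.powered = True
        def power_off(self): self.powered = False
        def grab_frame(self):
            assert self.powered, "camera must be powered to grab a frame"
            return f"{self.name}-frame"

    def cycle_cameras(cameras, frames_per_burst, total_bursts, detect):
        """Visit cameras round-robin; all but the active one stay powered down."""
        for cam in itertools.islice(itertools.cycle(cameras), total_bursts):
            cam.power_on()
            burst = [cam.grab_frame() for _ in range(frames_per_burst)]
            cam.power_off()
            detect(cam.name, burst)          # run target detection on this burst

    cams = [MockCamera(f"cam{i}") for i in range(4)]
    cycle_cameras(cams, frames_per_burst=3, total_bursts=8,
                  detect=lambda name, burst: print(name, len(burst), "frames"))
    ```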

  19. Radiologists' Performance for Detecting Lesions and the Interobserver Variability of Automated Whole Breast Ultrasound

    PubMed Central

    Kim, Sung Hun; Choi, Byung Gil; Choi, Jae Jung; Lee, Ji Hye; Song, Byung Joo; Choe, Byung Joo; Park, Sarah; Kim, Hyunbin

    2013-01-01

    Objective To compare the detection performance of the automated whole breast ultrasound (AWUS) with that of the hand-held breast ultrasound (HHUS) and to evaluate the interobserver variability in the interpretation of the AWUS. Materials and Methods AWUS was performed in 38 breast cancer patients. A total of 66 lesions were included: 38 breast cancers, 12 additional malignancies and 16 benign lesions. Three breast radiologists independently reviewed the AWUS data and analyzed the breast lesions according to the BI-RADS classification. Results The detection rate of malignancies was 98.0% for HHUS and 90.0%, 88.0% and 96.0% for the three readers of the AWUS. The sensitivity and the specificity were 98.0% and 62.5% in HHUS, 90.0% and 87.5% for reader 1, 88.0% and 81.3% for reader 2, and 96.0% and 93.8% for reader 3, in AWUS. There was no significant difference in the radiologists' detection performance, sensitivity and specificity (p > 0.05) between the two modalities. The interobserver agreement was fair to good for the ultrasonographic features, categorization, size, and the location of breast masses. Conclusion AWUS is thought to be useful for detecting breast lesions. In comparison with HHUS, AWUS shows no significant difference in the detection rate, sensitivity and the specificity, with high degrees of interobserver agreement. PMID:23482698

  20. Automated detection and analysis of depolarization events in human cardiomyocytes using MaDEC.

    PubMed

    Szymanska, Agnieszka F; Heylman, Christopher; Datta, Rupsa; Gratton, Enrico; Nenadic, Zoran

    2016-08-01

    Optical imaging-based methods for assessing the membrane electrophysiology of in vitro human cardiac cells allow for non-invasive temporal assessment of the effect of drugs and other stimuli. Automated methods for detecting and analyzing the depolarization events (DEs) in image-based data allow quantitative assessment of these different treatments. In this study, we use 2-photon microscopy of fluorescent voltage-sensitive dyes (VSDs) to capture the membrane voltage of actively beating human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). We built custom, freely available Matlab software, called MaDEC, to detect, quantify, and compare DEs of hiPS-CMs treated with the β-adrenergic drugs propranolol and isoproterenol. The efficacy of our software is quantified by comparing detection results against manual DE detection by expert analysts, and comparing DE analysis results to known drug-induced electrophysiological effects. The software accurately detected DEs with true positive rates of 98-100% and false positive rates of 1-2%, at signal-to-noise ratios (SNRs) of 5 and above. The MaDEC software was also able to distinguish control DEs from drug-treated DEs both immediately and 10 min after drug administration. PMID:27281718

  1. Automated detection of asteroids in real-time with the Spacewatch telescope

    NASA Technical Reports Server (NTRS)

    Scotti, James Vernon; Gehrels, T.; Rabinowitz, David L.

    1992-01-01

    The Spacewatch telescope on Kitt Peak is being used to survey for near-earth asteroids using a Tektronix TK2048 CCD in scanning mode. We hope to identify suitable low delta v candidates amongst the near-earth asteroid population as possible exploration targets, to identify those objects which pose a danger to life on earth, and to study the physical properties of the objects in near-earth space. Between Sep. 1990 and Jun. 1991, 14 new earth-approaching asteroids including 1 Aten, 9 Apollo, and 4 Amor type asteroids were detected by automated software and discriminated by their angular rates from the rest of the detected asteroids in near-real time by the observer. The average of about 1.5 earth-approaching asteroids per month is comparable to the total number found by all other observatories combined. One other Apollo type asteroid was detected by the observer as a long trailed image. The positions of this last object were measured and the object was tracked by the observer in real time. This object was determined to be a 5-10 meter diameter object which passed within 170,000 kilometers of earth. Of the 14 automatically detected earth-approaching asteroids, 10 have been found at distances in excess of 0.5 AU from earth. An average of more than 2000 asteroids are detected each month. Positions, angular rates, and brightnesses are determined for each of these asteroids in real-time.

  2. Olfactory processing: detection of rapid changes.

    PubMed

    Croy, Ilona; Krone, Franziska; Walker, Susannah; Hummel, Thomas

    2015-06-01

    Changes in the olfactory environment have a rather poor chance of being detected. The aim of the present study was to determine whether the same (cued) or different (uncued) odors can generally be detected at short inter-stimulus intervals (ISIs) below 2.5 s. Furthermore, we investigated whether inhibition of return, an attentional phenomenon facilitating the detection of new stimuli at longer ISIs, is present in the domain of olfaction. Thirteen normosmic people (3 men, 10 women; age range 19-27 years; mean age 23 years) participated. Stimulation was performed using air-dilution olfactometry with 2 odors: phenylethylalcohol and hydrogen disulfide. Reaction time to target stimuli was assessed in cued and uncued conditions at ISIs of 1, 1.5, 2, and 2.5 s. There was a significant main effect of ISI, indicating that odors presented only 1 s apart are missed frequently. Uncued presentation facilitated detection at short ISIs, implying that changes in the olfactory environment are detected better than repeated presentation of the same odor. Effects in relation to "olfactory inhibition of return," on the other hand, are not supported by our results. This suggests that attention works differently for the olfactory system than for the visual and auditory systems. PMID:25911421

  3. Systematic comparison of automated geological feature detection methods for impact craters

    NASA Astrophysics Data System (ADS)

    Vinogradova, T.; Mjolsness, E.

    2001-12-01

    Accurate, automated crater counts will be essential in extrapolating from existing Mars crater catalogs to much larger catalogs of impact craters in high-resolution orbital imagery for use in relative dating of surfaces in such imagery. Once validated, automatic methods for performing crater counts could be integrated into tools such as the Planetary Image Atlas, which is designed to be a convenient interface through which a user can search for, display, and download images and other ancillary data for planetary Missions, and the Diamond Eye image mining system. Here we report on preliminary computational experiments in using a trainable feature detection algorithm [Burl et al. 2001] to detect craters in real and simulated Mars orbital imagery, and to derive approximate impact crater counts for geological use. In these experiments, we consider two uses of the trainable feature detector: first, directly as a crater detector, and second, as two detectors for sunlit and shadowed inner walls of craters which can then be assembled into a single crater detection based on multiple pieces of evidence. For both of these methods, we consider two data sources: one consisting of real Viking Orbiter imagery of Mars with human expert-supplied ground truth labels, and the other consisting of computer generated renderings of simplified, synthetic cratered terrain with 100% accurate ground truth labels and known, controllable crater density. Each detector reports out a numeric detection ``likelihood'' for every candidate crater. This likelihood must then be thresholded to produce a detection decision. For each combination of two data sources (one natural and one synthetic) and two crater detection methods (whole-crater and parts-model), we vary image complexity and finally measure detection accuracy. Detection accuracy is measured by a Receiver Operator Characteristic (ROC) curve in which detection efficiency (the fraction of true craters detected) and purity (the fraction of
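
    The thresholding step described above is straightforward to sketch: given a per-candidate likelihood and ground-truth labels, sweeping the threshold trades detection efficiency against purity. The scores below are synthetic stand-ins, not Viking or simulated-terrain outputs.

    ```python
    # Sweep a threshold over detection likelihoods and report efficiency
    # (fraction of true craters found) and purity (fraction of detections
    # that are real) at each operating point.
    import numpy as np

    rng = np.random.default_rng(0)
    labels = np.r_[np.ones(50), np.zeros(200)]                 # 50 craters, 200 non-craters
    scores = np.r_[rng.normal(2.0, 1.0, 50), rng.normal(0.0, 1.0, 200)]

    for thr in (0.5, 1.0, 1.5, 2.0):
        detected = scores >= thr
        true_hits = (detected & (labels == 1)).sum()
        efficiency = true_hits / (labels == 1).sum()
        purity = true_hits / max(detected.sum(), 1)
        print(f"thr={thr:.1f}  efficiency={efficiency:.2f}  purity={purity:.2f}")
    ```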

  4. Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    PubMed Central

    Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S. Kim; Menesatti, Paolo

    2011-01-01

    The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistic (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results. Higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent Coverage
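
    The filtering and background-correction stage (median plus top-hat) can be approximated in a few lines; the sketch below uses a synthetic frame and scipy in place of the authors' MatLab 7.1 implementation, and it stops at blob counting rather than the RGB/SIFT classification that follows in the paper.

    ```python
    # Median filtering + white top-hat background correction + blob labelling
    # on a synthetic frame with uneven illumination and three bright "animals".
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    frame = rng.normal(0.0, 0.05, (200, 200))            # noisy background
    frame += np.linspace(0.0, 0.5, 200)[None, :]         # uneven illumination
    for y, x in [(50, 60), (120, 150), (170, 30)]:       # three bright targets
        frame[y - 2:y + 3, x - 2:x + 3] += 1.0

    denoised = ndimage.median_filter(frame, size=3)       # speckle suppression
    corrected = ndimage.white_tophat(denoised, size=15)   # removes smooth background
    mask = corrected > 0.5                                # intensity threshold
    labels, count = ndimage.label(mask)
    print("objects counted:", count)                      # expected: 3
    ```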

  5. Automated image analysis for the detection of benthic crustaceans and bacterial mat coverage using the VENUS undersea cabled network.

    PubMed

    Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S Kim; Menesatti, Paolo

    2011-01-01

    The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistic (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results. Higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent Coverage

  6. Automated detection of submerged navigational obstructions in freshwater impoundments with hull mounted sidescan sonar

    NASA Astrophysics Data System (ADS)

    Morris, Phillip A.

    The prevalence of low-cost side scanning sonar systems mounted on small recreational vessels has created improved opportunities to identify and map submerged navigational hazards in freshwater impoundments. However, these economical sensors also present unique challenges for automated techniques. This research explores related literature in automated sonar imagery processing and mapping technology, proposes and implements a framework derived from these sources, and evaluates the approach with video collected from a recreational grade sonar system. Image analysis techniques including optical character recognition and an unsupervised computer automated detection (CAD) algorithm are employed to extract the transducer GPS coordinates and slant range distance of objects protruding from the lake bottom. The retrieved information is formatted for inclusion into a spatial mapping model. Specific attributes of the sonar sensors are modeled such that probability profiles may be projected onto a three dimensional gridded map. These profiles are computed from multiple points of view as sonar traces crisscross or come near each other. As lake levels fluctuate over time so do the elevation points of view. With each sonar record, the probability of a hazard existing at certain elevations at the respective grid points is updated with Bayesian mechanics. As reinforcing data is collected, the confidence of the map improves. Given a lake's current elevation and a vessel draft, a final generated map can identify areas of the lake that have a high probability of containing hazards that threaten navigation. The approach is implemented in C/C++ utilizing OpenCV, Tesseract OCR, and QGIS open source software and evaluated in a designated test area at Lake Lavon, Collin County, Texas.
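
    The per-cell Bayesian update described above is conveniently written in log-odds form: each pass adds a positive increment where a return is consistent with an obstruction and a negative one where the bottom looks clear. The sketch below uses assumed inverse-sensor probabilities and a tiny grid; it illustrates the update rule only, not the thesis' slant-range geometry or elevation handling.

    ```python
    # Log-odds Bayesian update of a hazard grid over repeated sonar passes.
    # P_HIT / P_MISS are assumed inverse-sensor probabilities P(hazard | return)
    # and P(hazard | no return), not calibrated values.
    import numpy as np

    P_HIT, P_MISS = 0.75, 0.35

    def update(log_odds, returns):
        """Apply one Bayesian update given a boolean grid of sonar returns."""
        p = np.where(returns, P_HIT, P_MISS)
        return log_odds + np.log(p / (1.0 - p))

    def probability(log_odds):
        return 1.0 / (1.0 + np.exp(-log_odds))

    grid = np.zeros((5, 5))                    # prior P(hazard) = 0.5 everywhere
    returns = np.zeros((5, 5), dtype=bool)
    returns[2, 3] = True                       # the same cell echoes on every pass

    for _ in range(4):                         # four crisscrossing passes
        grid = update(grid, returns)

    print(np.round(probability(grid), 2))      # cell (2, 3) climbs toward 1.0
    ```

    Given a lake elevation and vessel draft, cells whose probability exceeds a chosen confidence at the relevant depth would then be flagged as navigation hazards.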

  7. Total least squares for anomalous change detection

    SciTech Connect

    Theiler, James P; Matsekh, Anna M

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.
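
    For orientation, the simplest member of this family, the ordinary-least-squares chronochrome that the abstract uses as its point of comparison, can be sketched in a few lines: regress image Y on image X, then rank pixels by the Mahalanobis norm of their regression residuals. The data below are synthetic two-band images with one implanted change; the TLSQ and whitened-TLSQ detectors of the paper are not reproduced here.

    ```python
    # Chronochrome-style anomalous change detection: OLS regression of Y on X,
    # then a Mahalanobis score on the per-pixel residuals.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, bands = 10000, 2
    X = rng.normal(size=(n_pix, bands))
    Y = X @ np.array([[0.9, 0.1], [0.05, 1.1]]) + 0.1 * rng.normal(size=(n_pix, bands))
    Y[1234] += 3.0                                    # an implanted anomalous change

    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    L = np.linalg.solve(Xc.T @ Xc, Xc.T @ Yc)         # least-squares predictor of Y from X
    resid = Yc - Xc @ L
    C = np.cov(resid, rowvar=False)
    score = np.einsum('ij,jk,ik->i', resid, np.linalg.inv(C), resid)

    print("most anomalous pixel:", int(np.argmax(score)))   # expected: 1234
    ```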

  8. An Architecture for Automated Fire Detection Early Warning System Based on Geoprocessing Service Composition

    NASA Astrophysics Data System (ADS)

    Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.

    2013-09-01

    Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of workflows of services provides efficient, flexible and reliable implementations for encountering different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow that acquires and processes remotely sensed data, detects fire and sends notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery, utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client with open-source software was developed to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.

  9. Automated microfluidically controlled electrochemical biosensor for the rapid and highly sensitive detection of Francisella tularensis.

    PubMed

    Dulay, Samuel B; Gransee, Rainer; Julich, Sandra; Tomaso, Herbert; O'Sullivan, Ciara K

    2014-09-15

    Tularemia is a highly infectious zoonotic disease caused by a Gram-negative coccoid rod bacterium, Francisella tularensis. Tularemia is considered as a life-threatening potential biological warfare agent due to its high virulence, transmission, mortality and simplicity of cultivation. In the work reported here, different electrochemical immunosensor formats for the detection of whole F. tularensis bacteria were developed and their performance compared. An anti-Francisella antibody (FB11) was used for the detection that recognises the lipopolysaccharide found in the outer membrane of the bacteria. In the first approach, gold-supported self-assembled monolayers of a carboxyl terminated bipodal alkanethiol were used to covalently cross-link with the FB11 antibody. In an alternative second approach F(ab) fragments of the FB11 antibody were generated and directly chemisorbed onto the gold electrode surface. The second approach resulted in an increased capture efficiency and higher sensitivity. Detection limits of 4.5 ng/mL for the lipopolysaccharide antigen and 31 bacteria/mL for the F. tularensis bacteria were achieved. Having demonstrated the functionality of the immunosensor, an electrode array was functionalised with the antibody fragment and integrated with microfluidics and housed in a tester set-up that facilitated complete automation of the assay. The only end-user intervention is sample addition, requiring less than one-minute hands-on time. The use of the automated microfluidic set-up not only required much lower reagent volumes but also the required incubation time was considerably reduced and a notable increase of 3-fold in assay sensitivity was achieved with a total assay time from sample addition to read-out of less than 20 min. PMID:24747573

  10. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
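
    The scoring that follows candidate detection can be sketched with the textbook Agatston recipe: threshold voxels inside the coronary-region mask, label connected lesions, and weight each lesion's area by its peak HU using the standard 130/200/300/400 bins. The paper's candidate threshold on low-dose CT is 180 HU and its mass and volume scores are not shown here; the slice, mask and pixel size below are synthetic.

    ```python
    # Agatston-style score for one synthetic CT slice: threshold, label lesions,
    # and weight each lesion's area by its peak HU.
    import numpy as np
    from scipy import ndimage

    def hu_weight(peak_hu):
        return 4 if peak_hu >= 400 else 3 if peak_hu >= 300 else 2 if peak_hu >= 200 else 1

    def agatston_slice(slice_hu, mask, pixel_area_mm2, threshold=180):
        candidates = (slice_hu >= threshold) & mask
        labels, n = ndimage.label(candidates)
        score = 0.0
        for lesion in range(1, n + 1):
            region = labels == lesion
            score += hu_weight(slice_hu[region].max()) * region.sum() * pixel_area_mm2
        return score

    slice_hu = np.full((64, 64), 40.0)             # soft-tissue background
    slice_hu[30:33, 30:33] = 450.0                 # dense calcification
    slice_hu[10:12, 50:52] = 220.0                 # lighter calcification
    coronary_mask = np.ones_like(slice_hu, dtype=bool)
    print("Agatston-like score:", agatston_slice(slice_hu, coronary_mask, pixel_area_mm2=0.25))
    ```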

  11. Computer-aided interpretation of ICU portable chest images: automated detection of endotracheal tubes

    NASA Astrophysics Data System (ADS)

    Huo, Zhimin; Li, Simon; Chen, Minjie; Wandtke, John

    2008-03-01

    In intensive care units (ICU), endotracheal (ET) tubes are inserted to assist patients who may have difficulty breathing. A malpositioned ET tube could lead to a collapsed lung, which is life threatening. The purpose of this study is to develop a new method that automatically detects the positioning of ET tubes on portable chest X-ray images. The method determines a region of interest (ROI) in the image and processes the raw image to provide edge enhancement for further analysis. The search of ET tubes is performed within the ROI. The ROI is determined based upon the analysis of the positions of the detected lung area and the spine in the image. Two feature images are generated: a Haar-like image and an edge image. The Haar-like image is generated by applying a Haar-like template to the raw ROI or the enhanced version of the raw ROI. The edge image is generated by applying a direction-specific edge detector. Both templates are designed to represent the characteristics of the ET tubes. Thresholds are applied to the Haar-like image and the edge image to detect initial tube candidates. Region growing, combined with curve fitting of the initial detected candidates, is performed to detect the entire ET tube. The region growing or "tube growing" is guided by the fitted curve of the initial candidates. Merging of the detected tubes after tube growing is performed to combine the detected broken tubes. Tubes within a predefined space can be merged if they meet a set of criteria. Features, such as width, length of the detected tubes, tube positions relative to the lung and spine, and the statistics from the analysis of the detected tube lines, are extracted to remove the false-positive detections in the images. The method is trained and evaluated on two different databases. Preliminary results show that computer-aided detection of tubes in portable chest X-ray images is promising. It is expected that automated detection of ET tubes could lead to timely detection of

  12. Change detection based on integration of multi-scale mixed-resolution information

    NASA Astrophysics Data System (ADS)

    Wei, Li; Wang, Cheng; Wen, Chenglu

    2016-03-01

    In this paper, a new method of unsupervised change detection is proposed by modeling a multi-scale change detector based on local mixed information, and an automated thresholding method is presented. A theoretical analysis demonstrates that more comprehensive information is taken into account by the integration of multi-scale information. ROC curves show that the change detector based on multi-scale mixed information (MSM) is more effective than one based on mixed information (MIX) alone. Experiments on artificial and real-world datasets indicate that multi-scale change detection with mixed information can eliminate pseudo-change areas. Therefore, the proposed MSM algorithm is an effective method for change detection applications.

  13. Automated DNA mutation detection using universal conditions direct sequencing: application to ten muscular dystrophy genes

    PubMed Central

    2009-01-01

    Background One of the most common and efficient methods for detecting mutations in genes is PCR amplification followed by direct sequencing. Until recently, the process of designing PCR assays has been to focus on individual assay parameters rather than concentrating on matching conditions for a set of assays. Primers for each individual assay were selected based on location and sequence concerns. The two primer sequences were then iteratively adjusted to make the individual assays work properly. This generally resulted in groups of assays with different annealing temperatures that required the use of multiple thermal cyclers or multiple passes in a single thermal cycler making diagnostic testing time-consuming, laborious and expensive. These factors have severely hampered diagnostic testing services, leaving many families without an answer for the exact cause of a familial genetic disease. A search of GeneTests for sequencing analysis of the entire coding sequence for genes that are known to cause muscular dystrophies returns only a small list of laboratories that perform comprehensive gene panels. The hypothesis for the study was that a complete set of universal assays can be designed to amplify and sequence any gene or family of genes using computer aided design tools. If true, this would allow automation and optimization of the mutation detection process resulting in reduced cost and increased throughput. Results An automated process has been developed for the detection of deletions, duplications/insertions and point mutations in any gene or family of genes and has been applied to ten genes known to bear mutations that cause muscular dystrophy: DMD; CAV3; CAPN3; FKRP; TRIM32; LMNA; SGCA; SGCB; SGCG; SGCD. Using this process, mutations have been found in five DMD patients and four LGMD patients (one in the FKRP gene, one in the CAV3 gene, and two likely causative heterozygous pairs of variations in the CAPN3 gene of two other patients). Methods and assay

  14. In vitro comparison of two changeover methods for vasoactive drug infusion pumps: quick-change versus automated relay.

    PubMed

    Genay, Stéphanie; Décaudin, Bertrand; Lédé, Sébastien; Feutry, Frédéric; Barthélémy, Christine; Lebuffe, Gilles; Odou, Pascal

    2015-08-01

    This study aimed to compare in vitro two syringe changeover techniques to determine which was better at minimising variations in norepinephrine (NE) delivery: the manual quick-change or automated technique. NE concentration was measured continuously using a UV spectrophotometer, and infusion flow rate was monitored by an infusion pump tester. Syringe changeovers were made with either of the two techniques studied. Relays induced disturbances in drug delivery. The temporary increase in NE mass flow rate was significantly higher with manual relays than with automated ones. The automated relay offered a better control of the amounts of NE administered than the quick-change technique. PMID:26352352

  15. Automated Detection of Oil Depots from High Resolution Images: a New Perspective

    NASA Astrophysics Data System (ADS)

    Ok, A. O.; Başeski, E.

    2015-03-01

    This paper presents an original approach to identify oil depots from single high-resolution aerial/satellite images in an automated manner. The new approach considers the symmetric nature of circular oil depots and computes radial symmetry in a unique way. An automated thresholding method to focus on circular regions and a new measure to verify circles are proposed. Experiments are performed on six GeoEye-1 test images. In addition, we perform tests on 16 Google Earth images of an industrial test site acquired as a time series (between 1995 and 2012). The results reveal that our approach is capable of detecting circular objects in very different and difficult images. We computed an overall performance of 95.8% for the GeoEye-1 dataset. The time series investigation reveals that our approach is robust enough to locate oil depots in industrial environments under varying illumination and environmental conditions. The overall performance is 89.4% for the Google Earth dataset, and this result confirms the advantage of our approach over a state-of-the-art method.
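
    The paper's radial-symmetry measure is its own contribution and is not reproduced here; as a hedged stand-in, the sketch below finds circular depot-like outlines in a synthetic image with the classical circular Hough transform from scikit-image, which conveys the same "find circular rims" intuition.

    ```python
    # Circle detection on a synthetic image: edge detection followed by a
    # circular Hough transform over a range of candidate radii.
    import numpy as np
    from skimage.draw import circle_perimeter
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    image = np.zeros((200, 200), dtype=float)
    for r, c, rad in [(60, 60, 25), (140, 120, 18)]:      # two synthetic "oil depots"
        rr, cc = circle_perimeter(r, c, rad)
        image[rr, cc] = 1.0

    edges = canny(image, sigma=1.0)
    radii = np.arange(10, 40, 2)
    hough = hough_circle(edges, radii)
    _, cx, cy, found_radii = hough_circle_peaks(hough, radii, total_num_peaks=2)
    for x, y, rad in zip(cx, cy, found_radii):
        print(f"circle at (row={y}, col={x}) with radius {rad}")
    ```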

  16. Automated detection of retinal layers from OCT spectral domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-06-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  17. Automated detection of retinal layers from OCT spectral-domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-12-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  18. Automated segmentation of subretinal layers for the detection of macular edema.

    PubMed

    Hassan, Taimur; Akram, M Usman; Hassan, Bilal; Syed, Adeel M; Bazaz, Shafaat Ahmed

    2016-01-20

    Macular edema (ME) is considered one of the major indications of proliferative diabetic retinopathy and is commonly caused by diabetes. ME causes retinal swelling due to the accumulation of protein deposits within subretinal layers. Optical coherence tomography (OCT) imaging provides early detection of ME by showing a cross-sectional view of macular pathology. Many researchers have worked on automated identification of macular edema from fundus images, but this paper proposes a fully automated method for extracting and analyzing subretinal layers from OCT images using coherent tensors. These subretinal layers are then used to predict ME in candidate images using a support vector machine (SVM) classifier. A total of 71 OCT images from 64 patients were collected locally, of whom 15 have ME and 49 are healthy. Our proposed system has an overall accuracy of 97.78% in correctly classifying ME patients and healthy persons. We also tested our implementation on spectral domain OCT (SD-OCT) images from the Duke dataset, consisting of 109 images from 10 patients, and it correctly classified all healthy and ME images in the dataset. PMID:26835917
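
    The final classification stage is a standard SVM, which is easy to sketch. The two per-scan features below (mean and peak retinal thickness) are illustrative placeholders rather than the paper's coherent-tensor features, and the class sizes simply mirror the 49 healthy / 15 ME split reported above.

    ```python
    # SVM classification of synthetic per-scan features into healthy vs. ME.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    healthy = np.column_stack([rng.normal(280, 15, 49), rng.normal(320, 20, 49)])
    edema = np.column_stack([rng.normal(360, 30, 15), rng.normal(480, 50, 15)])
    X = np.vstack([healthy, edema])
    y = np.r_[np.zeros(49), np.ones(15)]

    clf = SVC(kernel='rbf', gamma='scale', class_weight='balanced')
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
    ```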

  19. Competitive SWIFT cluster templates enhance detection of aging changes

    PubMed Central

    Rebhahn, Jonathan A.; Roumanes, David R.; Qi, Yilin; Khan, Atif; Thakar, Juilee; Rosenberg, Alex; Lee, F. Eun‐Hyung; Quataert, Sally A.; Sharma, Gaurav

    2015-01-01

    Abstract Clustering‐based algorithms for automated analysis of flow cytometry datasets have achieved more efficient and objective analysis than manual processing. Clustering organizes flow cytometry data into subpopulations with substantially homogenous characteristics but does not directly address the important problem of identifying the salient differences in subpopulations between subjects and groups. Here, we address this problem by augmenting SWIFT—a mixture model based clustering algorithm reported previously. First, we show that SWIFT clustering using a “template” mixture model, in which all subpopulations are represented, identifies small differences in cell numbers per subpopulation between samples. Second, we demonstrate that resolution of inter‐sample differences is increased by “competition” wherein a joint model is formed by combining the mixture model templates obtained from different groups. In the joint model, clusters from individual groups compete for the assignment of cells, sharpening differences between samples, particularly differences representing subpopulation shifts that are masked under clustering with a single template model. The benefit of competition was demonstrated first with a semisynthetic dataset obtained by deliberately shifting a known subpopulation within an actual flow cytometry sample. Single templates correctly identified changes in the number of cells in the subpopulation, but only the competition method detected small changes in median fluorescence. In further validation studies, competition identified a larger number of significantly altered subpopulations between young and elderly subjects. This enrichment was specific, because competition between templates from consensus male and female samples did not improve the detection of age‐related differences. Several changes between the young and elderly identified by SWIFT template competition were consistent with known alterations in the elderly, and additional

  20. Automated detection framework of the calcified plaque with acoustic shadowing in IVUS images.

    PubMed

    Gao, Zhifan; Guo, Wei; Liu, Xin; Huang, Wenhua; Zhang, Heye; Tan, Ning; Hau, William Kongto; Zhang, Yuan-Ting; Liu, Huafeng

    2014-01-01

    Intravascular ultrasound (IVUS) is an ultrasonic imaging technology for acquiring vascular cross-sectional images that visualize the inner vessel structure. This technique has been widely used for the diagnosis and treatment of coronary artery disease. The detection of calcified plaque with acoustic shadowing in IVUS images plays a vital role in the quantitative analysis of atheromatous plaques. The conventional method of calcium detection is manual delineation by doctors, which is very time-consuming and subject to high inter-observer and intra-observer variability. Therefore, computer-aided detection of calcified plaque is highly desired. In this paper, an automated method is proposed to detect calcified plaque with acoustic shadowing in IVUS images using a Rayleigh mixture model, a Markov random field, a graph searching method and prior knowledge about the calcified plaque. The performance of our method was evaluated on 996 in-vivo IVUS images acquired from eight patients, and the detected calcified plaques were compared with calcified plaques manually delineated by one cardiologist. The experimental results are quantitatively analyzed by three evaluation methods: the test of sensitivity and specificity, linear regression and Bland-Altman analysis. The first method evaluates the ability to distinguish between IVUS images with and without calcified plaque, and the latter two methods respectively measure the correlation and the agreement between our results and the manual delineations for locating the calcified plaque in the IVUS image. High sensitivity (94.68%) and specificity (95.82%) and good correlation and agreement (>96.82% of results fall within the 95% confidence interval in the Student t-test) demonstrate the effectiveness of the proposed method in the detection of calcified plaque with acoustic shadowing in IVUS images. PMID:25372784

  1. Automated detection of cells from immunohistochemically-stained tissues: application to Ki-67 nuclei staining

    NASA Astrophysics Data System (ADS)

    Cinar Akakin, Hatice; Kong, Hui; Elkins, Camille; Hemminger, Jessica; Miller, Barrie; Ming, Jin; Plocharczyk, Elizabeth; Roth, Rachel; Weinberg, Mitchell; Ziegler, Rebecca; Lozanski, Gerard; Gurcan, Metin N.

    2012-03-01

    An automated cell nuclei detection algorithm is described for the quantification of immunohistochemically-stained tissues. Detection and segmentation of positively stained cells and their separation from the background and negatively-stained cells is crucial for fast, accurate, consistent and objective analysis of pathology images. One of the major challenges is the identification, and hence accurate counting, of individual cells when these cells form clusters. To identify individual cell nuclei within clusters, we propose a new cell nuclei detection method based on the well-known watershed segmentation, which can lead to under- or over-segmentation for this problem. Our algorithm handles over-segmentation by combining an H-minima transformed watershed algorithm with a novel region merging technique. To handle the under-segmentation problem, we develop a Laplacian-of-Gaussian (LoG) filtering-based blob detection algorithm, which estimates the range of the scales from the image adaptively. An SVM classifier was trained in order to separate non-touching single cells and touching cell clusters with five features representing connected region properties such as eccentricity, area, perimeter, convex area and perimeter-to-area ratio. Classified touching cell clusters are segmented with the H-minima-based watershed algorithm. The resulting over-segmented regions are improved with the merging algorithm. The remaining under-segmented cell clusters are convolved with LoG filters to detect the cells within them. Cell-by-cell nucleus detection performance is evaluated by comparing computer detections with cell locations manually marked by eight pathology residents. The sensitivity is 89% when the cells are marked as positive by at least one resident and it increases to 99% when the evaluated cells are marked by all eight residents. In comparison, the average reader sensitivity varies between 70% +/- 18% and 95% +/- 11%.
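
    The LoG stage that rescues under-segmented clusters can be illustrated with scikit-image's off-the-shelf scale-space blob detector; the adaptive scale-range estimation described above is replaced here by a fixed, hand-picked range, and the image is synthetic.

    ```python
    # Laplacian-of-Gaussian blob detection on a synthetic image with two
    # touching nucleus-like spots and one isolated spot.
    import numpy as np
    from skimage.feature import blob_log

    def add_spot(img, row, col, sigma=4.0, amplitude=1.0):
        yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
        img += amplitude * np.exp(-((yy - row) ** 2 + (xx - col) ** 2) / (2 * sigma ** 2))

    image = np.zeros((120, 120))
    for r, c in [(40, 40), (48, 52), (90, 80)]:   # two touching "nuclei" plus one isolated
        add_spot(image, r, c)

    blobs = blob_log(image, min_sigma=2, max_sigma=8, num_sigma=7, threshold=0.05)
    for row, col, sigma in blobs:
        print(f"nucleus near ({row:.0f}, {col:.0f}), approx radius {sigma * np.sqrt(2):.1f} px")
    ```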

  2. Knee x-ray image analysis method for automated detection of osteoarthritis.

    PubMed

    Shamir, Lior; Ling, Shari M; Scott, William W; Bos, Angelo; Orlov, Nikita; Macura, Tomasz J; Eckley, D Mark; Ferrucci, Luigi; Goldberg, Ilya G

    2009-02-01

    We describe a method for automated detection of radiographic osteoarthritis (OA) in knee X-ray images. The detection is based on the Kellgren-Lawrence (KL) classification grades, which correspond to the different stages of OA severity. The classifier was built using manually classified X-rays, representing the first four KL grades (normal, doubtful, minimal, and moderate). Image analysis is performed by first identifying a set of image content descriptors and image transforms that are informative for the detection of OA in the X-rays and assigning weights to these image features using Fisher scores. Then, a simple weighted nearest neighbor rule is used in order to predict the KL grade to which a given test X-ray sample belongs. The dataset used in the experiment contained 350 X-ray images classified manually by their KL grades. Experimental results show that moderate OA (KL grade 3) and minimal OA (KL grade 2) can be differentiated from normal cases with accuracy of 91.5% and 80.4%, respectively. Doubtful OA (KL grade 1) was detected automatically with a much lower accuracy of 57%. The source code developed and used in this study is available for free download at www.openmicroscopy.org. PMID:19342330
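
    The weighted nearest-neighbour rule described above is compact enough to sketch end to end: compute a Fisher score per feature (between-class over within-class variance), then assign a test sample the KL grade of its nearest training neighbour under the Fisher-weighted Euclidean distance. Features and data below are synthetic, not the 350-image X-ray set.

    ```python
    # Fisher-score feature weighting followed by a weighted 1-nearest-neighbour rule.
    import numpy as np

    def fisher_scores(X, y):
        overall = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            within += len(Xc) * Xc.var(axis=0)
        return between / (within + 1e-12)

    def predict(X_train, y_train, x, weights):
        d2 = ((X_train - x) ** 2 * weights).sum(axis=1)   # weighted squared distance
        return y_train[np.argmin(d2)]

    rng = np.random.default_rng(0)
    grades = np.repeat([0, 1, 2, 3], 50)                             # KL grades 0-3
    informative = grades[:, None] + rng.normal(0, 0.4, (200, 2))     # track the grade
    noise = rng.normal(0, 1.0, (200, 8))                             # uninformative features
    X = np.hstack([informative, noise])

    w = fisher_scores(X, grades)
    test = np.r_[2.1, 1.9, rng.normal(0, 1.0, 8)]                    # looks like grade 2
    print("Fisher weights, informative vs. max noise:", np.round(w[:2], 2), np.round(w[2:].max(), 2))
    print("predicted KL grade:", predict(X, grades, test, w))
    ```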

  3. Creating an automated chiller fault detection and diagnostics tool using a data fault library.

    PubMed

    Bailey, Margaret B; Kreider, Jan F

    2003-07-01

    Reliable, automated detection and diagnosis of abnormal behavior within vapor compression refrigeration cycle (VCRC) equipment is extremely desirable for equipment owners and operators. The specific type of VCRC equipment studied in this paper is a 70-ton helical rotary, air-cooled chiller. The fault detection and diagnostic (FDD) tool developed as part of this research analyzes chiller operating data and detects faults by recognizing trends or patterns existing within the data. The FDD method incorporates a neural network (NN) classifier to infer the current state given a vector of observables. It therefore relies on the availability of normal and faulty empirical data for training, so a fault library of empirical data was assembled. This paper presents procedures for conducting sophisticated fault experiments on chillers that simulate air-cooled condenser, refrigerant, and oil-related faults. The experimental processes described here are not well documented in the literature and therefore will provide the interested reader with a useful guide. In addition, the authors provide evidence, based on both thermodynamics and empirical data analysis, that chiller performance is significantly degraded during fault operation. The chiller's performance degradation is successfully detected and classified by the NN FDD classifier as discussed in the paper's final section. PMID:12858981

  4. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

  5. Time-Gated Orthogonal Scanning Automated Microscopy (OSAM) for High-speed Cell Detection and Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Yiqing; Xi, Peng; Piper, James A.; Huo, Yujing; Jin, Dayong

    2012-11-01

    We report a new development of orthogonal scanning automated microscopy (OSAM) incorporating time-gated detection to locate rare-event organisms regardless of autofluorescent background. The necessity of using long-lifetime (hundreds of microseconds) luminescent biolabels for time-gated detection implies long integration (dwell) time, resulting in slow scan speed. However, here we achieve high scan speed using a new 2-step orthogonal scanning strategy to realise on-the-fly time-gated detection and precise location of 1-μm lanthanide-doped microspheres with signal-to-background ratio of 8.9. This enables analysis of a 15 mm × 15 mm slide area in only 3.3 minutes. We demonstrate that detection of only a few hundred photoelectrons within 100 μs is sufficient to distinguish a target event in a prototype system using ultraviolet LED excitation. Cytometric analysis of lanthanide labelled Giardia cysts achieved a signal-to-background ratio of two orders of magnitude. Results suggest that time-gated OSAM represents a new opportunity for high-throughput background-free biosensing applications.

  6. Enhancing implicit change detection through action.

    PubMed

    Tseng, Philip; Tuennermann, Jan; Roker-Knight, Nancy; Winter, Dorina; Scharlau, Ingrid; Bridgeman, Bruce

    2010-01-01

    Implicit change detection demonstrates how the visual system can benefit from stored information that is not immediately available to conscious awareness. We investigated the role of motor action in this context. In the first two experiments, using a one-shot implicit change-detection paradigm, participants responded to unperceived changes either with an action (jabbing the screen at the guessed location of a change) or with words (verbal report), and sat either 60 cm or 300 cm (with a laser pointer) away from the display. Our observers guessed the locations of changes at a reachable distance better with an action than with a verbal judgment. At 300 cm, beyond reach, the motor advantage disappeared. In experiment 3, this advantage was also unavailable when participants sat at a reachable distance but responded with hand-held laser pointers near their bodies. We conclude that a motor system specialized for real-time visually guided behavior has access to additional visual information. Importantly, this system is not activated by merely executing an action (experiment 2) or presenting stimuli in one's near space (experiment 3). It is activated only when both conditions are fulfilled, which implies that it is the actual contact that matters to the visual system. PMID:21180353

  7. Detecting changes during pregnancy with Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Vargis, Elizabeth; Robertson, Kesha; Al-Hendy, Ayman; Reese, Jeff; Mahadevan-Jansen, Anita

    2010-02-01

    Preterm labor is the second leading cause of neonatal mortality and leads to a myriad of complications like delayed development and cerebral palsy. Currently, there is no way to accurately predict preterm labor, making its prevention and treatment virtually impossible. While there are some at-risk patients, over half of all preterm births do not fall into any high-risk category. This study seeks to predict and prevent preterm labor by using Raman spectroscopy to detect changes in the cervix during pregnancy. Since Raman spectroscopy has been used to detect cancers in vivo in organs like the cervix and skin, it follows that spectra will change over the course of pregnancy. Previous studies have shown that fluorescence decreased during pregnancy and increased during post-partum exams to pre-pregnancy levels. We believe significant changes will occur in the Raman spectra obtained during the course of pregnancy. In this study, Raman spectra from the cervix of pregnant mice and women will be acquired. Specific changes that occur due to cervical softening or changes in hormonal levels will be observed to understand the likelihood that a female mouse or a woman will enter labor.

  8. Detecting hydrological changes through conceptual model

    NASA Astrophysics Data System (ADS)

    Viola, Francesco; Caracciolo, Domenico; Pumo, Dario; Francipane, Antonio; Valerio Noto, Leonardo

    2015-04-01

    Natural changes and human modifications in hydrological systems coevolve and interact in a coupled and interlinked way. On the one hand, climatic changes are stochastic, non-steady, and affect hydrological systems; on the other hand, human-induced changes due to over-exploitation of soils and water resources modify the natural landscape, the water fluxes and their partitioning. Indeed, the traditional assumption of static systems in hydrological analysis, which has been adopted for a long time, fails whenever transient climatic conditions and/or land use changes occur. Time series analysis is a way to explore environmental changes together with societal changes; unfortunately, the inability to distinguish between causes restricts the scope of this method. To overcome this limitation, it is possible to couple time series analysis with a suitable hydrological model, such as a conceptual hydrological model, which offers a schematization of the complex dynamics acting within a basin. Assuming that model parameters represent morphological basin characteristics and that calibration is a way to capture the hydrological signature at a specific moment, calibrating the model over different time windows could be a method for detecting potential hydrological changes. In order to test the capabilities of a conceptual model in detecting hydrological changes, this work presents different "in silico" experiments. A synthetic basin is forced with an ensemble of possible future scenarios generated with a stochastic weather generator able to simulate steady and non-steady climatic conditions. The experiments refer to a Mediterranean climate, which is characterized by marked seasonality, and consider the outcomes of the IPCC 5th report for describing climate evolution in the next century. In particular, in order to generate future climate change scenarios, a stochastic downscaling in space and time is carried out using realizations of an ensemble of General

  9. Ischemia detection from morphological QRS angle changes.

    PubMed

    Romero, Daniel; Martínez, Juan Pablo; Laguna, Pablo; Pueyo, Esther

    2016-07-01

    In this paper, an ischemia detector is presented based on the analysis of QRS-derived angles. The detector has been developed by modeling ischemic effects on the QRS angles as a gradual change with a certain transition time and assuming a Laplacian additive modeling error contaminating the angle series. Both standard and non-standard leads were used for analysis. Non-standard leads were obtained by applying the PCA technique over specific lead subsets to represent different potential locations of the ischemic zone. The performance of the proposed detector was tested over a population of 79 patients undergoing percutaneous coronary intervention in one of the major coronary arteries (LAD (n  =  25), RCA (n  =  16) and LCX (n  =  38)). The best detection performance, obtained for standard ECG leads, was achieved in the LAD group with values of sensitivity and specificity of [Formula: see text], [Formula: see text], followed by the RCA group with [Formula: see text], Sp  =  94.4 and the LCX group with [Formula: see text], [Formula: see text], notably outperforming detection based on the ST series in all cases, with the same detector structure. The timing of the detected ischemic events ranged from 30 s up to 150 s (mean  =  66.8 s) following the start of occlusion. We conclude that changes in the QRS angles can be used to detect acute myocardial ischemia. PMID:27243441

  10. Scene change detection based on multimodal integration

    NASA Astrophysics Data System (ADS)

    Zhu, Yingying; Zhou, Dongru

    2003-09-01

    Scene change detection is an essential step to automatic and content-based video indexing, retrieval and browsing. In this paper, a robust scene change detection and classification approach is presented, which analyzes audio, visual and textual sources and accounts for their inter-relations and coincidence to semantically identify and classify video scenes. Audio analysis focuses on the segmentation of the audio stream into four types of semantic data: silence, speech, music and environmental sound. Further processing on speech segments aims at locating speaker changes. Video analysis partitions the visual stream into shots. Text analysis can provide a supplemental source of clues for scene classification and indexing information. We integrate the video and audio analysis results to identify video scenes and use the text information detected by video OCR technology or derived from available transcripts to refine scene classification. Results from single-source segmentation are in some cases suboptimal. By combining visual and aural features with the supplemental text information, the scene extraction accuracy is enhanced and more semantic segmentations are obtained. Experimental results prove to be rather promising.

  11. Seabed change detection in challenging environments

    NASA Astrophysics Data System (ADS)

    Matthews, Cameron A.; Sternlicht, Daniel D.

    2011-06-01

    Automatic Change Detection (ACD) compares new and stored terrain images for alerting to changes occurring over time. These techniques, long used in airborne radar, are just beginning to be applied to sidescan sonar. Under the right conditions ACD by image correlation-comparing multi-temporal image data at the pixel or parcel level-can be used to detect new objects on the seafloor. Synthetic aperture sonars (SAS)-coherent sensors that produce fine-scale, range-independent resolution seafloor images-are well suited for this approach; however, dynamic seabed environments can introduce "clutter" to the process. This paper explores an ACD method that uses salience mapping in a global-to-local analysis architecture. In this method, termed Temporally Invariant Saliency (TIS), variance ratios of median-filtered repeat-pass images are used to detect new objects, while deemphasizing modest environmental or radiometric-induced changes in the background. Successful tests with repeat-pass data from two SAS systems mounted on autonomous undersea vehicles (AUV) demonstrate the feasibility of the technique.
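
    A minimal sketch of this kind of pixel-level change saliency is shown below: local variances of median-filtered repeat-pass images are ratioed so that new objects stand out while modest background differences stay near unity. The filter sizes, the threshold and the synthetic speckle images are illustrative assumptions; this is not the published TIS implementation.

```python
# Sketch of variance-ratio change saliency between co-registered repeat-pass
# sonar images (illustrative parameters; not the published TIS algorithm).
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def local_variance(img, size):
    """Local variance via the E[x^2] - E[x]^2 identity with a box filter."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def change_saliency(reference, repeat, median_size=5, var_size=15, eps=1e-6):
    """Ratio of local variances of median-filtered repeat vs. reference image.

    Median filtering suppresses speckle; a large variance ratio flags pixels
    whose local texture changed (e.g. a new object), while slow radiometric
    differences in the background produce ratios near 1.
    """
    ref = median_filter(reference.astype(float), median_size)
    rep = median_filter(repeat.astype(float), median_size)
    return local_variance(rep, var_size) / (local_variance(ref, var_size) + eps)

# Example: a synthetic seafloor pair with one new bright 'object' in the repeat pass.
rng = np.random.default_rng(1)
ref = rng.gamma(4.0, 1.0, (256, 256))           # speckle-like background
rep = rng.gamma(4.0, 1.0, (256, 256))
rep[100:110, 120:130] += 15.0                   # new object on the seabed
salience = change_saliency(ref, rep)
detections = salience > 5.0                     # threshold chosen by eye here
print("changed pixels:", int(detections.sum()))
```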

  12. First Science Results from Solar Data Mining Using Automated Feature Detection

    NASA Astrophysics Data System (ADS)

    Martens, P. C.

    2014-12-01

    The SDO Feature Finding Team (FFT) has produced 16 automated feature tracking modules for data from SDO, LASCO, and ground-based H-alpha observatories. The metadata produced by those modules and others are available from the Heliophysics Events Knowledgebase (HEK) and the Virtual Solar Observatory (VSO). Having metadata available for large amounts of events and phenomena, obtained with consistent detection criteria unlike catalogs produced by human observers, allows researchers to effectively search solar data for patterns. I will show a number of science results obtained recently. Not surprisingly several of the patterns are well known (e.g. flares occur mostly in active regions), but some really surprising new trends have been discovered as well, in at least one case upending scientific consensus. These results show the power and promise that systematic feature recognition and data mining holds for solar physics.

  13. An automated system for lung nodule detection in low-dose computed tomography

    NASA Astrophysics Data System (ADS)

    Gori, I.; Fantacci, M. E.; Preite Martinez, A.; Retico, A.

    2007-03-01

    A computer-aided detection (CAD) system for the identification of pulmonary nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 Italian project. One of the main goals of this project is to build a distributed database of lung CT scans in order to enable automated image analysis through a data and CPU GRID infrastructure. The basic modules of our lung-CAD system, a dot-enhancement filter for nodule candidate selection and a neural classifier for false-positive finding reduction, are described. The system was designed and tested for both internal and sub-pleural nodules. The results obtained on the collected database of low-dose thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  14. Automation of disbond detection in aircraft fuselage through thermal image processing

    NASA Technical Reports Server (NTRS)

    Prabhu, D. R.; Winfree, W. P.

    1992-01-01

    A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and results in a disbond image in which disbonds are highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image that was easily filtered out. The thermal detection technique coupled with an automated image interpretation capability will be a very fast and effective method for inspecting bonded joints in an aircraft structure.

  15. Selecting Valid Correlation Areas for Automated Bullet Identification System Based on Striation Detection

    PubMed Central

    Chu, Wei; Song, John; Vorburger, Theodore V.; Thompson, Robert; Silver, Richard

    2011-01-01

    Some automated bullet identification systems calculate a correlation score between two land impressions to measure their similarity. When extracting a compressed profile from the land impression of a fired bullet, inclusion of areas that do not contain valid individual striation information may lead to sub-optimal extraction and therefore may degrade the correlation result. In this paper, an edge detection algorithm and a selection process are used together to locate the edge points of all tool-mark features and filter out those not corresponding to striation marks. Edge points of the resulting striation marks are retained and expanded to generate a mask image. By imposing the mask image on the topography image, the weakly striated area(s) are excluded from the compressed profile extraction. Using this method, 48 bullets fired from 12 gun barrels of six manufacturers resulted in a higher matching rate than in previous studies. PMID:26989589

  16. Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.

    SciTech Connect

    Hart, Darren

    2004-07-01

    MatSeis's infrasound analysis tool, Infra Tool, uses frequency slowness processing to deconstruct the array data into three outputs per processing step: correlation, azimuth and slowness. Until now, infrasound signal detection has been accomplished manually by an experienced analyst trained to recognize patterns in the signal-processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value. These two conditions describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file will list the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg) and trace velocity (km/s). As an example, a ground truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array location, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and
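
    The core detection criterion, a run of processing steps with near-constant azimuth and correlation above background, can be sketched as below. Ordinary least-squares slope estimation stands in for the report's Hough Transform and Inverse Slope methods, and all window lengths and thresholds are illustrative.

```python
# Sketch of the detection criterion described above: declare an infrasound
# detection when, over a run of consecutive processing steps, the azimuth
# series is nearly flat while the correlation stays above background.
import numpy as np

def detect_infrasound(azimuth_deg, correlation, win=10,
                      max_slope=0.5, corr_background=0.3):
    """Return indices of window starts that satisfy both criteria.

    azimuth_deg, correlation : per-processing-step outputs of the
        frequency-slowness processing (equal-length 1-D arrays).
    win             : number of consecutive steps examined at a time.
    max_slope       : maximum allowed |slope| of azimuth, in deg per step.
    corr_background : correlation level that must be exceeded throughout.
    """
    steps = np.arange(win)
    hits = []
    for start in range(len(azimuth_deg) - win + 1):
        az = azimuth_deg[start:start + win]
        co = correlation[start:start + win]
        slope = np.polyfit(steps, az, 1)[0]
        if abs(slope) <= max_slope and co.min() > corr_background:
            hits.append(start)
    return hits

# Toy example: noisy background with a 15-step correlated arrival at step 60.
rng = np.random.default_rng(2)
az = rng.uniform(0, 360, 120)
corr = rng.uniform(0.0, 0.25, 120)
az[60:75] = 282.0 + rng.normal(0, 0.4, 15)      # stable azimuth during arrival
corr[60:75] = 0.7 + rng.normal(0, 0.05, 15)     # elevated correlation
print("detections start at steps:", detect_infrasound(az, corr))
```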

  17. Automated detection of colorectal lesions with dual-energy CT colonography

    NASA Astrophysics Data System (ADS)

    Näppi, Janne J.; Kim, Se Hyung; Yoshida, Hiroyuki

    2012-03-01

    Conventional single-energy computed tomography colonography (CTC) tends to miss polyps 6 - 9 mm in size and flat lesions. Dual-energy CTC (DE-CTC) provides more complete information about the chemical composition of tissue than does conventional CTC. We developed an automated computer-aided detection (CAD) scheme for detecting colorectal lesions in which dual-energy features were used to identify different bowel materials and their partial-volume artifacts. Based on these features, the dual-energy CAD (DE-CAD) scheme extracted the region of colon by use of a lumen-tracking method, detected lesions by use of volumetric shape features, and reduced false positives by use of a statistical classifier. For validation, 20 patients were prepared for DE-CTC by use of reduced bowel cleansing and orally administered fecal tagging with iodine and/or barium. The DE-CTC was performed in dual positions by use of a dual-energy CT scanner (SOMATOM Definition, Siemens) at 140 kVp and 80 kVp energy levels. The lesions identified by subsequent same-day colonoscopy were correlated with the DE-CTC data. The detection accuracies of the DE-CAD and conventional CAD schemes were compared by use of leave-one-patient-out evaluation and a bootstrap analysis. There were 25 colonoscopy-confirmed lesions: 22 were 6 - 9 mm and 3 were flat lesions >=10 mm in size. The DE-CAD scheme detected the large flat lesions and 95% of the 6 - 9 mm lesions with 9.9 false positives per patient. The improvement in detection accuracy by the DE-CAD was statistically significant.

  18. Exploring representations of protein structure for automated remote homology detection and mapping of protein structure space

    PubMed Central

    2014-01-01

    Background Due to rapid sequencing of genomes, there are now millions of deposited protein sequences with no known function. Fast sequence-based comparisons allow detecting close homologs for a protein of interest to transfer functional information from the homologs to the given protein. Sequence-based comparison cannot detect remote homologs, in which evolution has adjusted the sequence while largely preserving structure. Structure-based comparisons can detect remote homologs, but most methods for doing so are too expensive to apply at a large scale over structural databases of proteins. Recently, fragment-based structural representations have been proposed that allow fast detection of remote homologs with reasonable accuracy. These representations have also been used to obtain linearly-reducible maps of protein structure space. It has been shown, and is additionally supported by the analysis in this paper, that such maps preserve functional co-localization of the protein structure space. Methods Inspired by a recent application of the Latent Dirichlet Allocation (LDA) model for conducting structural comparisons of proteins, we propose higher-order LDA-obtained topic-based representations of protein structures to provide an alternative route for remote homology detection and organization of the protein structure space in few dimensions. Various techniques based on natural language processing are proposed and employed to aid the analysis of topics in the protein structure domain. Results We show that a topic-based representation is just as effective as a fragment-based one at automated detection of remote homologs and organization of protein structure space. We conduct a detailed analysis of the information content in the topic-based representation, showing that topics have semantic meaning. The fragment-based and topic-based representations are also shown to allow prediction of superfamily membership. Conclusions This work opens exciting avenues in designing novel

  19. Automated detection and tracking of individual and clustered cell surface low density lipoprotein receptor molecules.

    PubMed

    Ghosh, R N; Webb, W W

    1994-05-01

    We have developed a technique to detect, recognize, and track each individual low density lipoprotein receptor (LDL-R) molecule and small receptor clusters on the surface of human skin fibroblasts. Molecular recognition and high precision (30 nm) simultaneous automatic tracking of all of the individual receptors in the cell surface population utilize quantitative time-lapse low light level digital video fluorescence microscopy analyzed by purpose-designed algorithms executed on an image processing work station. The LDL-Rs are labeled with the biologically active, fluorescent LDL derivative DiI-LDL. Individual LDL-Rs and unresolved small clusters are identified by measuring the fluorescence power radiated by the sub-resolution fluorescent spots in the image; identification of single particles is ascertained by four independent techniques. An automated tracking routine was developed to track simultaneously, and without user intervention, a multitude of fluorescent particles through a sequence of hundreds of time-lapse image frames. The limitations on tracking precision were found to depend on the signal-to-noise ratio of the tracked particle image and mechanical drift of the microscope system. We describe the methods involved in (i) time-lapse acquisition of the low-light level images, (ii) simultaneous automated tracking of the fluorescent diffraction limited punctate images, (iii) localizing particles with high precision and limitations, and (iv) detecting and identifying single and clustered LDL-Rs. These methods are generally applicable and provide a powerful tool to visualize and measure dynamics and interactions of individual integral membrane proteins on living cell surfaces. PMID:8061186

  20. Automated detection and tracking of individual and clustered cell surface low density lipoprotein receptor molecules.

    PubMed Central

    Ghosh, R N; Webb, W W

    1994-01-01

    We have developed a technique to detect, recognize, and track each individual low density lipoprotein receptor (LDL-R) molecule and small receptor clusters on the surface of human skin fibroblasts. Molecular recognition and high precision (30 nm) simultaneous automatic tracking of all of the individual receptors in the cell surface population utilize quantitative time-lapse low light level digital video fluorescence microscopy analyzed by purpose-designed algorithms executed on an image processing work station. The LDL-Rs are labeled with the biologically active, fluorescent LDL derivative DiI-LDL. Individual LDL-Rs and unresolved small clusters are identified by measuring the fluorescence power radiated by the sub-resolution fluorescent spots in the image; identification of single particles is ascertained by four independent techniques. An automated tracking routine was developed to track simultaneously, and without user intervention, a multitude of fluorescent particles through a sequence of hundreds of time-lapse image frames. The limitations on tracking precision were found to depend on the signal-to-noise ratio of the tracked particle image and mechanical drift of the microscope system. We describe the methods involved in (i) time-lapse acquisition of the low-light level images, (ii) simultaneous automated tracking of the fluorescent diffraction limited punctate images, (iii) localizing particles with high precision and limitations, and (iv) detecting and identifying single and clustered LDL-Rs. These methods are generally applicable and provide a powerful tool to visualize and measure dynamics and interactions of individual integral membrane proteins on living cell surfaces. PMID:8061186

  1. Fully automated detection of the counting area in blood smears for computer aided hematology.

    PubMed

    Rupp, Stephan; Schlarb, Timo; Hasslmeyer, Erik; Zerfass, Thorsten

    2011-01-01

    For medical diagnosis, blood is an indispensable indicator for a wide variety of diseases, e.g., hemic, parasitic and sexually transmitted diseases. A robust detection and exact segmentation of white blood cells (leukocytes) in stained blood smears of the peripheral blood provides the basis for a fully automated, image-based preparation of the so-called differential blood cell count in the context of medical laboratory diagnostics. Especially for the localization of the blood cells, and in particular for the segmentation of the cells, it is necessary to detect the working area of the blood smear. In this contribution we present an approach for locating the so-called counting area on stained blood smears, that is, the region where cells are predominantly separated and do not interfere with each other. For this, multiple images of a blood smear are taken and analyzed in order to select the image corresponding to this area. The analysis involves the computation of a unimodal function from image content that serves as an indicator for the corresponding image. This requires a prior segmentation of the cells that is carried out by a binarization in the HSV color space. Finally, the indicator function is derived from the number of cells and the cells' surface area. Its unimodality guarantees a maximum value that corresponds to the image index of the counting area. In this way, a fast lookup of the counting area is performed, enabling a fully automated analysis of blood smears for medical diagnosis. For evaluation, the algorithm's performance on a number of blood smears was compared with ground-truth information defined by an expert hematologist. PMID:22256137
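
    A minimal sketch of the selection logic is given below, assuming OpenCV is available. The HSV thresholds, the minimum cell area and the exact form of the indicator (here, cell count divided by mean cell area) are illustrative assumptions; the abstract only states that the indicator is derived from the number of cells and their surface area.

```python
# Sketch: choosing the counting-area image from a series of fields of view.
# Segmentation is a simple HSV binarization as in the abstract; the concrete
# HSV thresholds and the indicator formula are illustrative, not the authors' values.
import cv2
import numpy as np

def cell_mask(image_bgr, lower=(110, 40, 40), upper=(170, 255, 255)):
    """Binarize stained cells by thresholding in HSV space."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower), np.array(upper))

def counting_area_indicator(image_bgr, min_area=50):
    """Indicator combining the number of cells and their surface area."""
    mask = cell_mask(image_bgr)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]          # skip background label 0
    areas = areas[areas >= min_area]
    if len(areas) == 0:
        return 0.0
    # Many small, well-separated cells score high; merged clumps or sparse
    # fields score low, so the function peaks at the counting area.
    return len(areas) / float(areas.mean())

def select_counting_area(images_bgr):
    """Return the index of the image whose indicator is maximal, plus all scores."""
    scores = [counting_area_indicator(img) for img in images_bgr]
    return int(np.argmax(scores)), scores
```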

  2. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
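
    A compact sketch of level-independent universal-threshold wavelet denoising is shown below, using PyWavelets. For brevity it uses the ordinary discrete wavelet transform rather than the maximal overlap DWT discussed in the abstract, and the wavelet choice and test signal are illustrative.

```python
# Sketch of level-independent universal-threshold wavelet denoising of a 1-D
# photoacoustic-like trace. Ordinary DWT is used here; the paper uses a MODWT
# to handle non-radix-2 record lengths.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym8", level=None):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Level-independent universal threshold: sigma * sqrt(2 ln N).
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Example: recover a short bipolar pulse from noise (non-radix-2 length on purpose).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1500)
clean = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 80 * (t - 0.5))
noisy = clean + rng.normal(0, 0.3, t.size)
recovered = wavelet_denoise(noisy)
print("residual std before/after denoising:",
      round(np.std(noisy - clean), 3), round(np.std(recovered - clean), 3))
```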

  3. Time series change detection: Algorithms for land cover change

    NASA Astrophysics Data System (ADS)

    Boriah, Shyam

    can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm-shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high-dimensionality and massive data size. This dissertation, a step in that direction, translates earth science challenges to computer science problems, and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) Algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) Change detection algorithms that can handle large numbers of missing and noisy values present in satellite data sets; and (3) Spatio-temporal analysis techniques to identify the scale and scope of disturbance events.

  4. Automated high-pressure titration system with in situ infrared spectroscopic detection

    SciTech Connect

    Thompson, Christopher J.; Martin, Paul F.; Chen, Jeffrey; Schaef, Herbert T.; Rosso, Kevin M.; Felmy, Andrew R.; Loring, John S.; Benezeth, Pascale

    2014-04-15

    A fully automated titration system with infrared detection was developed for investigating interfacial chemistry at high pressures. The apparatus consists of a high-pressure fluid generation and delivery system coupled to a high-pressure cell with infrared optics. A manifold of electronically actuated valves is used to direct pressurized fluids into the cell. Precise reagent additions to the pressurized cell are made with calibrated tubing loops that are filled with reagent and placed in-line with the cell and a syringe pump. The cell's infrared optics facilitate both transmission and attenuated total reflection (ATR) measurements to monitor bulk-fluid composition and solid-surface phenomena such as adsorption, desorption, complexation, dissolution, and precipitation. Switching between the two measurement modes is accomplished with moveable mirrors that direct the light path of a Fourier transform infrared spectrometer into the cell along transmission or ATR light paths. The versatility of the high-pressure IR titration system was demonstrated with three case studies. First, we titrated water into supercritical CO₂ (scCO₂) to generate an infrared calibration curve and determine the solubility of water in CO₂ at 50 °C and 90 bar. Next, we characterized the partitioning of water between a montmorillonite clay and scCO₂ at 50 °C and 90 bar. Transmission-mode spectra were used to quantify changes in the clay's sorbed water concentration as a function of scCO₂ hydration, and ATR measurements provided insights into competitive residency of water and CO₂ on the clay surface and in the interlayer. Finally, we demonstrated how time-dependent studies can be conducted with the system by monitoring the carbonation reaction of forsterite (Mg₂SiO₄) in water-bearing scCO₂ at 50 °C and 90 bar. Immediately after water dissolved in the scCO₂, a thin film of adsorbed water formed on the mineral surface, and the film

  5. A Novel Method for the Separation of Overlapping Pollen Species for Automated Detection and Classification

    PubMed Central

    Flores, Francisco

    2016-01-01

    The identification of pollen in an automated way will accelerate different tasks and applications of palynology to aid in, among others, climate change studies, medical allergy calendars, and forensic science. The aim of this paper is to develop a system that automatically captures a hundred microscopic images of pollen and classifies them into the 12 different species from the Lagunera Region, Mexico. The pollen grains often overlap in the microscopic images, which increases the difficulty of automated identification and classification. This paper focuses on a method to segment the overlapping pollen: first, the proposed method segments the overlapping pollen grains; second, it separates them based on the mean shift process (100% segmentation) and erosion by H-minima based on the Fibonacci series. The pollen is then characterized by its shape, color, and texture for training and evaluating the performance of three classification techniques: random tree forest, multilayer perceptron, and Bayes net. Using the newly developed system, we obtained segmentation results of 100% and classification rates above 96.2% recall and 96.1% precision using the multilayer perceptron in twofold cross-validation. PMID:27034710

  6. A Novel Method for the Separation of Overlapping Pollen Species for Automated Detection and Classification.

    PubMed

    Tello-Mijares, Santiago; Flores, Francisco

    2016-01-01

    The identification of pollen in an automated way will accelerate different tasks and applications of palynology to aid in, among others, climate change studies, medical allergy calendars, and forensic science. The aim of this paper is to develop a system that automatically captures a hundred microscopic images of pollen and classifies them into the 12 different species from the Lagunera Region, Mexico. The pollen grains often overlap in the microscopic images, which increases the difficulty of automated identification and classification. This paper focuses on a method to segment the overlapping pollen: first, the proposed method segments the overlapping pollen grains; second, it separates them based on the mean shift process (100% segmentation) and erosion by H-minima based on the Fibonacci series. The pollen is then characterized by its shape, color, and texture for training and evaluating the performance of three classification techniques: random tree forest, multilayer perceptron, and Bayes net. Using the newly developed system, we obtained segmentation results of 100% and classification rates above 96.2% recall and 96.1% precision using the multilayer perceptron in twofold cross-validation. PMID:27034710
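
    The H-minima/watershed separation step can be sketched as follows with scikit-image. The distance-transform formulation, the fixed H value and the toy mask are illustrative; the paper's mean-shift stage and Fibonacci-based selection of H are not reproduced.

```python
# Sketch: separating touching/overlapping pollen grains with an H-minima
# transform followed by a watershed. Parameter values are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def separate_grains(binary_mask, h=2.0):
    """Split a binary mask of touching grains into labeled objects."""
    # Distance transform: bright ridges at grain centers, valleys at contacts.
    distance = ndi.distance_transform_edt(binary_mask)
    inverted = -distance
    # Keep only minima deeper than h so each grain yields a single marker.
    markers, _ = ndi.label(h_minima(inverted, h))
    return watershed(inverted, markers, mask=binary_mask)

# Toy example: two overlapping discs are split into two labels.
yy, xx = np.mgrid[0:120, 0:120]
mask = (((yy - 60) ** 2 + (xx - 45) ** 2 < 28 ** 2)
        | ((yy - 60) ** 2 + (xx - 78) ** 2 < 28 ** 2))
labels = separate_grains(mask)
print("grains found:", labels.max())
```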

  7. Automated detection and characterization of microstructural features: application to eutectic particles in single crystal Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Tschopp, M. A.; Groeber, M. A.; Fahringer, R.; Simmons, J. P.; Rosenberger, A. H.; Woodward, C.

    2010-03-01

    Serial sectioning methods continue to produce an abundant amount of image data for quantifying the three-dimensional nature of material microstructures. Here, we discuss a methodology to automate the detection and characterization of eutectic particles in serial images of a production turbine blade made of a heat-treated single crystal Ni-based superalloy (PWA 1484). This method includes two important steps for unassisted eutectic particle characterization: automatically identifying a seed point within each particle and segmenting the particle using a region growing algorithm with an automated stop point. Once detected, the segmented eutectic particles are used to calculate microstructural statistics for characterizing and reconstructing statistically representative synthetic microstructures for single crystal Ni-based superalloys. The significance of this work is its ability to automate characterization for analysing the 3D nature of eutectic particles.
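
    A sketch of the seed-and-grow step is given below, using scikit-image's flood fill as the region-growing primitive. The local-maximum seeding and the fixed intensity tolerance used as the stopping rule are illustrative stand-ins for the paper's automated seed-point and stop-point criteria.

```python
# Sketch: seed each bright particle at a local intensity maximum, then grow the
# connected region whose intensity stays within `tolerance` of the seed value;
# growth stops on its own when no neighbouring pixel qualifies.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import flood

def segment_particles(image, tolerance=0.15, min_distance=10):
    """Grow one region per local intensity maximum; return a label image."""
    labels = np.zeros(image.shape, dtype=int)
    seeds = peak_local_max(image, min_distance=min_distance, threshold_abs=0.5)
    for k, (r, c) in enumerate(seeds, start=1):
        region = flood(image, (int(r), int(c)), tolerance=tolerance)
        labels[region & (labels == 0)] = k
    return labels

# Toy image: two bright blobs on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
img = (np.exp(-(((yy - 30) ** 2 + (xx - 30) ** 2) / 150.0))
       + np.exp(-(((yy - 70) ** 2 + (xx - 65) ** 2) / 150.0)))
print("particles segmented:", segment_particles(img).max())
```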

  8. Automated real-time detection of defects during machining of ceramics

    DOEpatents

    Ellingson, W.A.; Sun, J.

    1997-11-18

    Apparatus for the automated real-time detection and classification of defects during the machining of ceramic components employs an elastic optical scattering technique using polarized laser light. A ceramic specimen is continuously moved while being machined. Polarized laser light is directed onto the ceramic specimen surface at a fixed position just aft of the machining tool for examination of the newly machined surface. Any foreign material near the location of the laser light on the ceramic specimen is cleared by an air blast. As the specimen is moved, its surface is continuously scanned by the polarized laser light beam to provide a two-dimensional image presented in real-time on a video display unit, with the motion of the ceramic specimen synchronized with the data acquisition speed. By storing known "feature masks" representing various surface and sub-surface defects and comparing measured defects with the stored feature masks, detected defects may be automatically characterized. Using multiple detectors, various types of defects may be detected and classified. 14 figs.

  9. Automated real-time detection of defects during machining of ceramics

    DOEpatents

    Ellingson, William A.; Sun, Jiangang

    1997-01-01

    Apparatus for the automated real-time detection and classification of defects during the machining of ceramic components employs an elastic optical scattering technique using polarized laser light. A ceramic specimen is continuously moved while being machined. Polarized laser light is directed onto the ceramic specimen surface at a fixed position just aft of the machining tool for examination of the newly machined surface. Any foreign material near the location of the laser light on the ceramic specimen is cleared by an air blast. As the specimen is moved, its surface is continuously scanned by the polarized laser light beam to provide a two-dimensional image presented in real-time on a video display unit, with the motion of the ceramic specimen synchronized with the data acquisition speed. By storing known "feature masks" representing various surface and sub-surface defects and comparing measured defects with the stored feature masks, detected defects may be automatically characterized. Using multiple detectors, various types of defects may be detected and classified.

  10. Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks

    PubMed Central

    Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.

    2014-01-01

    We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550

  11. Rapid, multiplex-tandem PCR assay for automated detection and differentiation of toxigenic cyanobacterial blooms.

    PubMed

    Baker, Louise; Sendall, Barbara C; Gasser, Robin B; Menjivar, Toni; Neilan, Brett A; Jex, Aaron R

    2013-01-01

    Cyanobacterial blooms are a major water quality issue and potential public health risk in freshwater, marine and estuarine ecosystems globally, because of their potential to produce cyanotoxins. To date, a significant challenge in the effective management of cyanobacterial blooms has been the inability of classical microscopy-based approaches to consistently and reliably detect and differentiate toxic from non-toxic blooms. The potential of cyanobacteria to produce toxins has been linked to the presence of specific biosynthetic gene clusters. Here, we describe the application of a robotic PCR-based assay for the semi-automated and simultaneous detection of toxin biosynthesis genes of each of the toxin classes characterized to date for cyanobacteria [i.e., microcystins (MCYs), nodularins (NODs), cylindrospermopsins (CYNs) and paralytic shellfish toxins (PSTs)/saxitoxins (SXTs)]. We demonstrated high sensitivity and specificity for each assay using well-characterized, cultured isolates, and establish its utility as a quantitative PCR using DNA, clone and cell-based dilution series. In addition, we used 206 field-collected samples and 100 known negative controls to compare the performance of each assay with conventional PCR and direct toxin detection. We report a diagnostic specificity of 100% and a sensitivity of ≥97.7% for each assay. PMID:23850895

  12. Automated x-ray detection of contaminants in continuous food streams

    NASA Astrophysics Data System (ADS)

    Penman, David W.

    1996-10-01

    As an inspection technology, x-rays have been used in food product inspection for a number of years. Despite this, in contrast with the use of image processing techniques in medical applications of x-rays, food inspection systems have remained relatively rudimentary. Some of our earlier work in this area has been stimulated by specific x-ray inspection tasks, where we have been required to locate contaminants in batches of particular packaged products. We have developed techniques for contaminant detection in both canned and bagged products. This paper gives an overview of work undertaken by Industrial Research Limited on the development of automated machine vision techniques for the inspection of food products for contaminants. Our recent work has concentrated on the development of more generic techniques for detecting contaminants in a wide range of continuously produced products, with no requirement for product singulation. A particular emphasis in our work has been the real-time detection of contaminants appearing indistinctly in x-ray images in the presence of noise and major product variability.

  13. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    SciTech Connect

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib; Chaum, Edward; Karnowski, Thomas Paul; Meriaudeau, Fabrice

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs from an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images.
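
    Only the blob-candidate stage lends itself to a compact sketch; one is given below using scikit-image's Laplacian-of-Gaussian detector on the inverted green channel. The preprocessing, scale range and threshold are illustrative assumptions, and the paper's scale-adapted descriptors and semi-supervised classifier are not reproduced.

```python
# Sketch: Laplacian-of-Gaussian blob candidates for microaneurysm detection on
# the inverted green channel. Parameters are illustrative; the descriptor
# extraction and semi-supervised classifier are not shown.
import numpy as np
from skimage.feature import blob_log

def microaneurysm_candidates(fundus_rgb, min_sigma=1.0, max_sigma=4.0, threshold=0.05):
    """Return (row, col, sigma) rows for small, dark, roughly circular candidates."""
    green = fundus_rgb[..., 1].astype(float) / 255.0   # MAs contrast best in the green channel
    inverted = 1.0 - green                             # dark MAs become bright blobs
    return blob_log(inverted, min_sigma=min_sigma, max_sigma=max_sigma,
                    num_sigma=8, threshold=threshold)

# Toy fundus-like image: mid-grey background with two small dark spots.
rng = np.random.default_rng(5)
img = np.full((128, 128, 3), 140, dtype=np.uint8)
img[:, :, 1] = 140 + rng.integers(-5, 6, (128, 128))
for r, c in [(40, 50), (90, 80)]:
    img[r - 2:r + 3, c - 2:c + 3, 1] = 60              # dark 'microaneurysms'
for row, col, sigma in microaneurysm_candidates(img):
    print(f"candidate at ({row:.0f}, {col:.0f}), scale sigma = {sigma:.2f}")
```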

  14. Evaluation of change detection techniques for monitoring coastal zone environments

    NASA Technical Reports Server (NTRS)

    Weismiller, R. A. (Principal Investigator); Kristof, S. J.; Scholz, D. K.; Anuta, P. E.; Momin, S. M.

    1977-01-01

    The author has identified the following significant results. Four change detection techniques were designed and implemented for evaluation: (1) post classification comparison change detection, (2) delta data change detection, (3) spectral/temporal change classification, and (4) layered spectral/temporal change classification. The post classification comparison technique reliably identified areas of change and was used as the standard for qualitatively evaluating the other three techniques. The layered spectral/temporal change classification and the delta data change detection results generally agreed with the post classification comparison technique results; however, many small areas of change were not identified. Major discrepancies existed between the post classification comparison and spectral/temporal change detection results.
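
    A minimal sketch of the first of these techniques, post-classification comparison, is shown below: two independently classified land-cover maps of the same scene are compared pixel by pixel and a from-to change matrix is tabulated. The class labels and toy maps are of course illustrative.

```python
# Sketch of post-classification comparison change detection.
import numpy as np

def post_classification_comparison(class_t1, class_t2, n_classes):
    """Return a change mask and a from-to (n_classes x n_classes) matrix."""
    changed = class_t1 != class_t2
    from_to = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(from_to, (class_t1.ravel(), class_t2.ravel()), 1)
    return changed, from_to

# Toy example with 3 classes (0 = water, 1 = marsh, 2 = developed).
t1 = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
t2 = np.array([[0, 1, 1], [1, 2, 2], [2, 2, 2]])
mask, matrix = post_classification_comparison(t1, t2, n_classes=3)
print("changed pixels:", int(mask.sum()))
print("from-to matrix:\n", matrix)
```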

  15. Enhanced pulsar and single pulse detection via automated radio frequency interference detection in multipixel feeds

    NASA Astrophysics Data System (ADS)

    Kocz, J.; Bailes, M.; Barnes, D.; Burke-Spolaor, S.; Levin, L.

    2012-02-01

    Single pixel feeds on large aperture radio telescopes have the ability to detect weak (˜10 mJy) impulsive bursts of radio emission and sub-mJy radio pulsars. Unfortunately, in large-scale blind surveys, radio frequency interference (RFI) mimics both radio bursts and radio pulsars, greatly reducing the sensitivity to new discoveries as real signals of astronomical origin get lost among the millions of false candidates. In this paper a technique that takes advantage of multipixel feeds, using eigenvector decomposition of common signals, is applied to greatly facilitate radio burst and pulsar discovery. Since the majority of RFI occurs with zero dispersion, the method was tested on the total power present in the 13 beams of the Parkes multibeam receiver using data from archival intermediate-latitude surveys. The implementation of this method greatly reduced the number of false candidates and led to the discovery of one new rotating radio transient or RRAT, six new pulsars and five new pulses with swept-frequency characteristics similar in nature to the 'Lorimer burst'. These five new signals occurred within minutes of 11 previous detections of a similar type. When viewed together, they display temporal characteristics related to integer seconds, with non-random distributions and characteristic 'gaps' between them, suggesting they are not from a naturally occurring source. Despite the success in removing RFI, false candidates that are only visible after integrating in time or at non-zero dispersion remained in the data. It is demonstrated that, with some computational penalty, the method can be applied iteratively at all trial dispersions and time resolutions to remove the vast majority of spurious candidates.
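
    The zero-dispersion RFI excision step can be sketched as a projection that removes the component common to all beams, as below. The beam count, data shapes and amplitudes are illustrative, and the full iterative, per-dispersion application described in the abstract is not reproduced.

```python
# Sketch: zero-dispersion RFI excision across the beams of a multibeam
# receiver. Signals common to all beams dominate the leading eigenvector of
# the beam-beam covariance matrix and are projected out, while a genuine
# astronomical burst (present in only one or a few beams) largely survives.
import numpy as np

def remove_common_signal(beam_data, n_remove=1):
    """beam_data: (n_beams, n_samples) zero-DM total-power time series."""
    x = beam_data - beam_data.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    common = eigvecs[:, -n_remove:]                 # strongest common mode(s)
    projector = np.eye(x.shape[0]) - common @ common.T
    return projector @ x

# Toy example: broadband RFI hits all 13 beams; a 'burst' hits only beam 4.
rng = np.random.default_rng(4)
n_beams, n_samp = 13, 5000
data = rng.normal(0, 1, (n_beams, n_samp))
rfi = np.zeros(n_samp)
rfi[1000:1050] = 20.0
data += rfi                                         # common to every beam
data[4, 3000:3010] += 12.0                          # beam-localized burst
cleaned = remove_common_signal(data)
print("RFI peak before/after:", data[0, 1000:1050].max().round(1),
      cleaned[0, 1000:1050].max().round(1))
print("burst peak before/after:", data[4, 3000:3010].max().round(1),
      cleaned[4, 3000:3010].max().round(1))
```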

  16. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    PubMed Central

    2015-01-01

    Objective Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blindness or visual impairment is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the ability of the Iowa Detection Program (IDP) to detect diabetic eye diseases (DED) with human grading carried out at Moorfields Reading Centre on the population of the Nakuru Study from Kenya. Participants Retinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants [NW6 Topcon Digital Retinal Camera]). Methods First, human grading was performed for the presence or absence of DR, and for those with DR this was subdivided into referable or non-referable DR. The automated IDP software was deployed to identify those with DR and also to categorize the severity of DR. Main Outcome Measures The primary outcomes were sensitivity, specificity, and positive and negative predictive value of the IDP versus the human grader as reference standard. Results Altogether 3,460 participants were included. 113 had DED, giving a prevalence of 3.3% (95% CI, 2.7–3.9%). Sensitivity of the IDP to detect DED as determined by the human grading was 91.0% (95% CI, 88.0–93.4%). The IDP's ability to detect DED gave an AUC of 0.878 (95% CI 0.850–0.905). It showed a negative predictive value of 98%. The IDP missed no vision-threatening retinopathy in any patient, and none of the false negative cases met criteria for treatment. Conclusions In this epidemiological sample, the IDP's grading was comparable to that of human graders. It therefore might be feasible to consider its inclusion into usual epidemiological grading. PMID:26425849

  17. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    PubMed Central

    Kaplan, Richard F; Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Bootzin, Richard R

    2014-01-01

    Background A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, high-frequency, and time-domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the negative predictive value for detecting sleep are 98.0% and 84.2%, respectively. Overall κ agreement is 0.85 (approaching the level of agreement observed among sleep technologists). These results persist when the sleep disorder subgroups are analyzed separately. Conclusion This study demonstrates that the Z-ALG automated sleep

  18. Zone-based analysis for automated detection of abnormalities in chest radiographs

    SciTech Connect

    Kao, E-Fong; Kuo, Yu-Ting; Hsu, Jui-Sheng; Chou, Ming-Chung; Liu, Gin-Chung

    2011-07-15

    Purpose: The aim of this study was to develop an automated method for detection of local texture-based and density-based abnormalities in chest radiographs. Methods: The method was based on profile analysis to detect abnormalities in chest radiographs. In the method, one density-based feature, the Density Symmetry Index, and two texture-based features, the Roughness Maximum Index and the Roughness Symmetry Index, were used to detect abnormalities in the lung fields. In each chest radiograph, the lung fields were divided into four zones initially and then the method was applied to each zone separately. For each zone, the Density Symmetry Index was obtained from the projection profile of the zone, and the Roughness Maximum Index and Roughness Symmetry Index were obtained by measuring the roughness of the horizontal profiles via a moving-average technique. Linear discriminant analysis was used to classify normal and abnormal cases based on the three indices. The discriminant performance of the method was evaluated using ROC analysis. Results: The method was evaluated on a database of 250 normal and 250 abnormal chest images. In the optimized conditions, the zone-based performance Az of the method for zones 1, 2, 3, and 4 were 0.917, 0.897, 0.892, and 0.814, respectively, and the case-based performance Az of the method was 0.842. Our previous method for detection of gross abnormalities was also evaluated on the same database. The case-based performance of our previous method was 0.689. Conclusions: Compared with the previous method, the new method proposed in this study greatly improved the detection of local texture-based and density-based abnormalities. The new method combined with the previous one has potential for screening abnormalities in chest radiographs.

  19. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators

    PubMed Central

    Figuera, Carlos; Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for an accurate detection (6 vs. 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4 s. PMID:27441719

  20. Improved Detection of Hepatitis B Virus Surface Antigen by a New Rapid Automated Assay

    PubMed Central

    Weber, Bernard; Bayer, Anja; Kirch, Peter; Schlüter, Volker; Schlieper, Dietmar; Melchior, Walter

    1999-01-01

    The performance of hepatitis B virus (HBV) surface antigen (HBsAg) screening assays is continuously improved in order to reduce the residual risk of transfusion-associated hepatitis B. In a multicenter study, a new automated rapid screening assay, Elecsys HBsAg (Roche Diagnostics), was compared to well-established tests (Auszyme Monoclonal [overnight incubation] version B and IMx HBsAg [Abbott]). Included in the evaluation were 23 seroconversion panels; sera from the acute and chronic phases of infection; dilution series of various HBsAg standards, HBV subtypes, and S gene mutants; and isolated anti-HBV core antigen-positive samples. To challenge the specificity of the new assay, sera from HBsAg-negative blood donors, pregnant women, and dialysis and hospitalized patients and potentially cross-reactive samples were investigated. Elecsys HBsAg showed a higher sensitivity for HBsAg subtypes ad, ay, adw2, adw4, ayw1, ayw2, ayw4, and adr detection in dilution series of different standards or sera than Auszyme Monoclonal version B and/or IMx HBsAg. Acute hepatitis B was detected in 11 to 16 of 23 seroconversion panels between 2 and 16 days earlier with Elecsys HBsAg than with the alternative assays. Elecsys HBsAg and Auszyme Monoclonal version B detected HBsAg surface mutants with equal sensitivity. The sensitivity and specificity of Elecsys HBsAg were 100%. Auszyme Monoclonal version B had a 99.9% specificity, and its sensitivity was 96.6%. IMx HBsAg showed a poorer sensitivity and specificity than the other assays. In conclusion, Elecsys HBsAg permits earlier detection of acute hepatitis B and different HBV subtypes than the alternative assays. By using highly sensitive HBsAg screening assays, low-level HBsAg carriers among isolated anti-HBV core antigen-positive individuals can be detected. PMID:10405414

  1. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators.

    PubMed

    Figuera, Carlos; Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for an accurate detection (6 vs. 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4 s. PMID:27441719
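
    The patient-wise bootstrap evaluation lends itself to a short sketch, shown below with toy labels: patients, not individual ECG segments, are resampled with replacement so that all segments from one patient stay together. The feature extraction and the ML classifiers themselves are omitted.

```python
# Sketch: patient-wise bootstrap estimation of Se, Sp and BER for a classifier
# whose per-segment predictions are already available.
import numpy as np

def patient_bootstrap(y_true, y_pred, patient_ids, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    patients = np.unique(patient_ids)
    stats = []
    for _ in range(n_boot):
        chosen = rng.choice(patients, size=len(patients), replace=True)
        idx = np.concatenate([np.flatnonzero(patient_ids == p) for p in chosen])
        yt, yp = y_true[idx], y_pred[idx]
        tp = np.sum((yt == 1) & (yp == 1)); fn = np.sum((yt == 1) & (yp == 0))
        tn = np.sum((yt == 0) & (yp == 0)); fp = np.sum((yt == 0) & (yp == 1))
        se = tp / max(tp + fn, 1)
        sp = tn / max(tn + fp, 1)
        stats.append((se, sp, 1.0 - 0.5 * (se + sp)))   # BER = 1 - (Se + Sp) / 2
    return np.array(stats).mean(axis=0)                 # mean Se, Sp, BER

# Toy usage with hypothetical labels/predictions for 5 patients:
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 1])
pids   = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
print("mean Se, Sp, BER:", patient_bootstrap(y_true, y_pred, pids, n_boot=200))
```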

  2. Lake Chapala change detection using time series

    NASA Astrophysics Data System (ADS)

    López-Caloca, Alejandra; Tapia-Silva, Felipe-Omar; Escalante-Ramírez, Boris

    2008-10-01

    Lake Chapala is the largest natural lake in Mexico. It presents a hydrological imbalance caused by diminishing inflows from the Lerma River, pollution of those inflows, native vegetation, and solid waste. This article presents a study that determines with high precision the extent of the reduction in both surface area and volume of Lake Chapala over the period from 1990 to 2007. This period was monitored through satellite images. Image segmentation was achieved through a Markov Random Field model, extending the application towards edge detection. This allows the lake's limits to be adequately defined as well as new zones within the lake to be determined, both kinds of change pertaining to Lake Chapala. Detected changes are related to a hydrological balance study based on measuring variables such as storage volumes, evapotranspiration and water balance. Results show that the changes in Lake Chapala reveal fragile conditions that pose a future risk. Rehabilitation of the lake requires a hydrologic balance in its banks and aquifers.

  3. Nationwide Hybrid Change Detection of Buildings

    NASA Astrophysics Data System (ADS)

    Hron, V.; Halounova, L.

    2016-06-01

    The Fundamental Base of Geographic Data of the Czech Republic (hereinafter FBGD) is a national 2D geodatabase at a 1:10,000 scale with more than 100 geographic objects. This paper describes the design of the permanent updating mechanism of buildings in FBGD. The proposed procedure belongs to the category of hybrid change detection (HCD) techniques which combine pixel-based and object-based evaluation. The main sources of information for HCD are cadastral information and bi-temporal vertical digital aerial photographs. These photographs have great information potential because they contain multispectral, position and also elevation information. Elevation information represents a digital surface model (DSM) which can be obtained using the image matching technique. Pixel-based evaluation of bi-temporal DSMs enables fast localization of places with potential building changes. These coarse results are subsequently classified through the object-based image analysis (OBIA) using spectral, textural and contextual features and GIS tools. The advantage of the two-stage evaluation is the pre-selection of locations where image segmentation (a computationally demanding part of OBIA) is performed. It is not necessary to apply image segmentation to the entire scene, but only to the surroundings of detected changes, which contributes to significantly faster processing and lower hardware requirements. The created technology is based on open-source software solutions that allow easy portability on multiple computers and parallelization of processing. This leads to significant savings of financial resources which can be expended on the further development of FBGD.
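
    The pixel-based stage, differencing bi-temporal DSMs to localize candidate building changes before the object-based analysis, can be sketched as follows; the height and size thresholds are illustrative assumptions.

```python
# Sketch: difference two co-registered digital surface models from different
# epochs and keep blobs whose height change and footprint are large enough to
# be candidate building changes. The subsequent OBIA classification is omitted.
import numpy as np
from scipy import ndimage as ndi

def building_change_candidates(dsm_old, dsm_new, dz_min=2.5, min_pixels=40):
    """Return a label image of candidate new/demolished-building regions."""
    dz = dsm_new - dsm_old
    candidate = np.abs(dz) > dz_min                            # metres of height change
    candidate = ndi.binary_opening(candidate, iterations=2)    # drop speckle
    labels, n = ndi.label(candidate)
    sizes = ndi.sum(candidate, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1
    return np.where(np.isin(labels, keep), labels, 0)

# Toy example: a 10 m high 'building' appears in the new epoch.
dsm_old = np.zeros((200, 200))
dsm_new = dsm_old.copy()
dsm_new[80:110, 60:100] = 10.0
labels = building_change_candidates(dsm_old, dsm_new)
print("candidate regions:", len(np.unique(labels)) - 1)
```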

  4. Immunohistochemical Detection of Changes in Tumor Hypoxia

    SciTech Connect

    Russell, James; Carlin, Sean; Burke, Sean A.; Wen, Bixiu; Yang, Kwang Mo; Ling, C. Clifton

    2009-03-15

    Purpose: Although hypoxia is a known prognostic factor, its effect will be modified by the rate of reoxygenation and the extent to which the cells are acutely hypoxic. We tested the ability of exogenous and endogenous markers to detect reoxygenation in a xenograft model. Our technique might be applicable to stored patient samples. Methods and Materials: The human colorectal carcinoma line, HT29, was grown in nude mice. Changes in tumor hypoxia were examined by injection of pimonidazole, followed 24 hours later by EF5. Cryosections were stained for these markers and for carbonic anhydrase IX (CAIX) and hypoxia-inducible factor 1α (HIF1α). Tumor hypoxia was artificially manipulated by carbogen exposure. Results: In unstressed tumors, all four markers showed very similar spatial distributions. After carbogen treatment, pimonidazole and EF5 could detect decreased hypoxia. HIF1α staining was also decreased relative to CAIX, although the effect was less pronounced than for EF5. Control tumors displayed small regions that had undergone spontaneous changes in tumor hypoxia, as judged by pimonidazole relative to EF5; most of these changes were reflected by CAIX and HIF1α. Conclusion: HIF1α can be compared with either CAIX or a previously administered nitroimidazole to provide an estimate of reoxygenation.

  5. Immunohistochemical Detection of Changes in Tumor Hypoxia

    PubMed Central

    Russell, James; Carlin, Sean; Burke, Sean A.; Wen, Bixiu; Yang, Kwang Mo; Ling, C Clifton

    2009-01-01

    Purpose Although hypoxia is a known prognostic factor, its impact will be modified by the rate of reoxygenation and the extent to which cells are acutely hypoxic. We tested the ability of exogenous and endogenous markers to detect reoxygenation in a xenograft model. Our technique may be applicable to stored patient samples. Methods and Materials The human colorectal carcinoma line, HT29, was grown in nude mice. Changes in tumor hypoxia were examined by injection of pimonidazole followed 24 hours later by EF5. Cryosections were stained for these markers and for CAIX and HIF1α. Tumor hypoxia was artificially manipulated by carbogen exposure. Results In unstressed tumors, all four markers showed very similar spatial distributions. After carbogen treatment, pimonidazole and EF5 could detect decreased hypoxia. HIF1α staining was also decreased relative to CAIX, though the effect was less pronounced than for EF5. Control tumors displayed small regions that had undergone spontaneous changes in tumor hypoxia, as judged by pimonidazole relative to EF5; most of these changes were reflected by CAIX and HIF1α. Conclusions HIF1α can be compared to either CAIX or a previously administered nitroimidazole to provide an estimate of reoxygenation. PMID:19251089

  6. Automated Immunomagnetic Separation and Microarray Detection of E. coli O157:H7 from Poultry Carcass Rinse

    SciTech Connect

    Chandler, Darrell P.; Brown, Jeremy D.; Call, Douglas R.; Wunschel, Sharon C.; Grate, Jay W.; Holman, David A.; Olson, Lydia G.; Stottlemyer, Mark S.; Bruckner-Lea, Cindy J.

    2001-09-01

    We describe the development and application of a novel electromagnetic flow cell and fluidics system for automated immunomagnetic separation of E. coli directly from unprocessed poultry carcass rinse, and the biochemical coupling of automated sample preparation with nucleic acid microarrays without cell growth. Highly porous nickel foam was used as a magnetic flux conductor. Up to 32% recovery efficiency of 'total' E. coli was achieved within the automated system with 6-sec contact times and a 15-minute protocol (from sample injection through elution), statistically similar to cell recovery efficiencies in > 1 hour 'batch' captures. The electromagnet flow cell allowed complete recovery of 2.8-µm particles directly from unprocessed poultry carcass rinse, whereas the batch system did not. O157:H7 cells were reproducibly isolated directly from unprocessed poultry rinse with 39% recovery efficiency at a 10(3) cells ml(-1) inoculum. Direct plating of washed beads showed positive recovery of O157:H7 directly from carcass rinse at an inoculum of 10 cells ml(-1). Recovered beads were used for direct PCR amplification and microarray detection, with a process-level detection limit (automated cell concentration through microarray detection) of < 10(3) cells ml(-1) carcass rinse. The fluidic system and analytical approach described here are generally applicable to most microbial detection problems and applications.

  7. Imaging, object detection, and change detection with a polarized multistatic GPR array

    SciTech Connect

    Beer, N. Reginald; Paglieroni, David W.

    2015-07-21

    A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate on one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.

  8. A Neurocomputational Method for Fully Automated 3D Dendritic Spine Detection and Segmentation of Medium-sized Spiny Neurons

    PubMed Central

    Zhang, Yong; Chen, Kun; Baron, Matthew; Teylan, Merilee A.; Kim, Yong; Song, Zhihuan; Greengard, Paul

    2010-01-01

    Acquisition and quantitative analysis of high resolution images of dendritic spines are challenging tasks but are necessary for the study of animal models of neurological and psychiatric diseases. Currently available methods for automated dendritic spine detection are for the most part customized for 2D image slices, not volumetric 3D images. In this work, a fully automated method is proposed to detect and segment dendritic spines from 3D confocal microscopy images of medium-sized spiny neurons (MSNs). MSNs constitute a major neuronal population in striatum, and abnormalities in their function are associated with several neurological and psychiatric diseases. Such automated detection is critical for the development of new 3D neuronal assays which can be used for the screening of drugs and the studies of their therapeutic effects. The proposed method utilizes a generalized gradient vector flow (GGVF) with a new smoothing constraint and then detects feature points near the central regions of dendrites and spines. Then, the central regions are refined and separated based on eigen-analysis and multiple shape measurements. Finally, the spines are segmented in 3D space using the fast marching algorithm, taking the detected central regions of spines as initial points. The proposed method is compared with three popular existing methods for centerline extraction and also with manual results for dendritic spine detection in 3D space. The experimental results and comparisons show that the proposed method is able to automatically and accurately detect, segment, and quantitate dendritic spines in 3D images of MSNs. PMID:20100579
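
    The authors' pipeline (GGVF feature points, eigen-analysis of central regions and fast-marching segmentation) is not reproduced here, but the generic Python sketch below illustrates the kind of per-voxel eigen-analysis used to tell tubular dendrite-like regions from blob-like spine candidates; the Hessian-of-Gaussian formulation, function name and sigma value are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hessian_eigenvalues_3d(volume, sigma=1.0):
            """Per-voxel eigenvalues of the Hessian of a smoothed 3D stack,
            sorted by absolute value. For bright structures on a dark background,
            voxels with one near-zero and two strongly negative eigenvalues
            behave like tubes (dendrites), while three strongly negative
            eigenvalues indicate blobs (e.g. spine heads)."""
            v = gaussian_filter(np.asarray(volume, dtype=float), sigma)
            first = np.gradient(v)                          # [d/dz, d/dy, d/dx]
            H = np.empty(v.shape + (3, 3))
            for i in range(3):
                second = np.gradient(first[i])
                for j in range(3):
                    H[..., i, j] = second[j]
            eig = np.linalg.eigvalsh(H)                     # ascending eigenvalues
            order = np.argsort(np.abs(eig), axis=-1)
            return np.take_along_axis(eig, order, axis=-1)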

  9. Reproducibility of In Vivo Corneal Confocal Microscopy Using an Automated Analysis Program for Detection of Diabetic Sensorimotor Polyneuropathy

    PubMed Central

    Ostrovski, Ilia; Lovblom, Leif E.; Farooqi, Mohammed A.; Scarr, Daniel; Boulet, Genevieve; Hertz, Paul; Wu, Tong; Halpern, Elise M.; Ngo, Mylan; Ng, Eduardo; Orszag, Andrej; Bril, Vera; Perkins, Bruce A.

    2015-01-01

    Objective In vivo Corneal Confocal Microscopy (IVCCM) is a validated, non-invasive test for diabetic sensorimotor polyneuropathy (DSP) detection, but its utility is limited by the image analysis time and expertise required. We aimed to determine the inter- and intra-observer reproducibility of a novel automated analysis program compared to manual analysis. Methods In a cross-sectional diagnostic study, 20 non-diabetes controls (mean age 41.4±17.3y, HbA1c 5.5±0.4%) and 26 participants with type 1 diabetes (42.8±16.9y, 8.0±1.9%) underwent two separate IVCCM examinations by one observer and a third by an independent observer. Along with nerve density and branch density, corneal nerve fibre length (CNFL) was obtained by manual analysis (CNFLManual), a protocol in which images were manually selected for automated analysis (CNFLSemi-Automated), and one in which selection and analysis were performed electronically (CNFLFully-Automated). Reproducibility of each protocol was determined using intraclass correlation coefficients (ICC) and, as a secondary objective, the method of Bland and Altman was used to explore agreement between protocols. Results Mean CNFLManual was 16.7±4.0, 13.9±4.2 mm/mm2 for non-diabetes controls and diabetes participants, while CNFLSemi-Automated was 10.2±3.3, 8.6±3.0 mm/mm2 and CNFLFully-Automated was 12.5±2.8, 10.9±2.9 mm/mm2. Inter-observer ICC and 95% confidence intervals (95%CI) were 0.73(0.56, 0.84), 0.75(0.59, 0.85), and 0.78(0.63, 0.87), respectively (p = NS for all comparisons). Intra-observer ICC and 95%CI were 0.72(0.55, 0.83), 0.74(0.57, 0.85), and 0.84(0.73, 0.91), respectively (p<0.05 for CNFLFully-Automated compared to others). The other IVCCM parameters had substantially lower ICC compared to those for CNFL. CNFLSemi-Automated and CNFLFully-Automated underestimated CNFLManual by mean and 95%CI of 35.1(-4.5, 67.5)% and 21.0(-21.6, 46.1)%, respectively. Conclusions Despite an apparent measurement (underestimation) bias in
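
    For readers unfamiliar with the agreement analysis mentioned above, a generic Bland-Altman sketch is shown below; note that the study itself reports percentage underestimation rather than this simple absolute-difference form, and the function and argument names are assumptions.

        import numpy as np

        def bland_altman(measurements_a, measurements_b):
            """Bland-Altman agreement between two paired measurement protocols
            (e.g. manual versus automated CNFL): returns the mean difference
            (bias) and the 95% limits of agreement."""
            a = np.asarray(measurements_a, dtype=float)
            b = np.asarray(measurements_b, dtype=float)
            diff = a - b
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)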

  10. A gradient-based approach for automated crest-line detection and analysis of sand dune patterns on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.

    2015-12-01

    Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on gradient orientations which agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest-lines, based on pixel intensity values. Once the crest-lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared dune detection algorithms with manually-digitized dune crest lines, achieving true positive values of 0.57-0.99 and false positive values of 0.30-0.67, indicating that our approach is generally robust.
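
    A minimal sketch, under assumed parameters, of the gradient-orientation pre-selection step described above: build a magnitude-weighted orientation histogram, pick the dominant bin, and keep pixels whose gradient direction agrees with it (n_bins, tol_deg and the magnitude percentile are assumptions, not the authors' values).

        import numpy as np

        def dominant_gradient_mask(image, n_bins=36, tol_deg=20.0):
            """Binary mask of pixels whose gradient direction lies within tol_deg
            of the most populated orientation bin (weighted by gradient magnitude),
            a rough stand-in for crest-line pre-selection."""
            img = np.asarray(image, dtype=float)
            gy, gx = np.gradient(img)
            mag = np.hypot(gx, gy)
            ang = np.degrees(np.arctan2(gy, gx)) % 360.0
            hist, edges = np.histogram(ang, bins=n_bins, range=(0, 360), weights=mag)
            dominant = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
            diff = np.abs((ang - dominant + 180.0) % 360.0 - 180.0)   # wrap-around distance
            return (diff < tol_deg) & (mag > np.percentile(mag, 75))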

  11. SAR change detection based on intensity and texture changes

    NASA Astrophysics Data System (ADS)

    Gong, Maoguo; Li, Yu; Jiao, Licheng; Jia, Meng; Su, Linzhi

    2014-07-01

    In this paper, a novel change detection approach is proposed for multitemporal synthetic aperture radar (SAR) images. The approach is based on two difference images, which are constructed from intensity and texture information, respectively. In the extraction of the texture differences, the robust principal component analysis technique is used to separate irrelevant and noisy elements from Gabor responses. Graph cuts are then improved by a novel energy function based on a multivariate generalized Gaussian model for more accurate fitting. The effectiveness of the proposed method is demonstrated by experimental results obtained on several real SAR image data sets.
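
    The paper does not spell out its intensity difference operator here; as an illustrative assumption, the sketch below uses the common log-ratio operator for bi-temporal SAR intensity images, which turns multiplicative speckle into an additive term so changed areas stand out.

        import numpy as np

        def log_ratio_difference(sar_t1, sar_t2, eps=1e-6):
            """Log-ratio intensity difference image for two co-registered SAR
            scenes; large |log ratio| values indicate candidate changes."""
            i1 = np.asarray(sar_t1, dtype=float) + eps
            i2 = np.asarray(sar_t2, dtype=float) + eps
            return np.abs(np.log(i2) - np.log(i1))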

  12. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    PubMed Central

    Ozcan, Burcin; Negi, Pooran; Laezza, Fernanda; Papadakis, Manos; Labate, Demetrio

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of level set method and directional multiscale filters. We also present an approach to extract the soma’s surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications. PMID:25853656

  13. Automated detection of optic disk in retinal fundus images using intuitionistic fuzzy histon segmentation.

    PubMed

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chua, Chua Kuang; Min, Lim Choo; Ng, E Y K; Mushrif, Milind M; Laude, Augustinus

    2013-01-01

    The human eye is one of the most sophisticated organs, with perfectly interrelated retina, pupil, iris, cornea, lens, and optic nerve. Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Uncontrolled diabetic retinopathy (DR) and glaucoma may lead to blindness. The identification of retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. The manual examination of the optic disk (OD) is a standard procedure used for detecting different stages of DR and glaucoma. In this article, a novel automated, reliable, and efficient OD localization and segmentation method using digital fundus images is proposed. General-purpose edge detection algorithms often fail to segment the OD due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This article proposes a novel, and probably the first, method using Atanassov intuitionistic fuzzy histon (A-IFSH)-based segmentation to detect the OD in retinal fundus images. OD pixel intensity and column-wise neighborhood operation are employed to locate and isolate the OD. The method has been evaluated on 100 images comprising 30 normal, 39 glaucomatous, and 31 DR images. Our proposed method has yielded a precision of 0.93, recall of 0.91, F-score of 0.92, and mean segmentation accuracy of 93.4%. We have also compared the performance of our proposed method with the Otsu and gradient vector flow (GVF) snake methods. Overall, our results show the superiority of the proposed fuzzy segmentation technique over the other two segmentation methods. PMID:23516954

  14. Automated Mapping of Rapid Arctic Ocean Coastal Change Over Large Spans of Time and Geography

    NASA Astrophysics Data System (ADS)

    Hulslander, D.

    2012-12-01

    While climate change is global in scope, its impacts vary greatly from region to region. The dynamic Arctic Ocean coastline often shows greater sensitivity to climate change and more obvious impacts. Current longer ice-free conditions, rising sea level, thawing permafrost, and melting of larger ice bodies combine to produce extremely rapid coastal change and erosion. Anderson et al. (2009; Geology News) have measured erosion rates at sites along the Alaskan Arctic Ocean coast of 15 m per year and greater. Completely understanding coastal change in the Arctic requires mapping both current erosional regimes as well as changes in erosional rates over several decades. Studying coastal change and trends in the Arctic, however, presents several significant difficulties. The study area is enormous, with over 45,000 km of coastline; it is also one of the most remote, inaccessible, and hostile environments on Earth. Moreover, the region has little to no historical data from which to start. Thus, any study of the area must be able to construct its own baseline. Remote sensing offers the best solution given these difficulties. Spaceborne platforms allow for regular global coverage at temporal and spatial scales sufficient for mapping coastal erosion and deposition. The Landsat family of instruments (MSS, TM, and ETM) has data available as frequently as every 16 days and starting as early as 1972. The data are freely available from the USGS through earthexplorer.usgs.gov and are well calibrated both geometrically and spectrally, eliminating expensive pre-processing steps and making them analysis-ready. Finally, because manual coastline delineation of the quantity of data involved would be prohibitive in both budget and labor, an automated processing chain must be used. ENVI Feature Extraction can provide results in line with those generated by expert analysts (Hulslander, et al., 2008; GEOBIA 2008 Proceedings). Previous studies near Drew Point, Alaska have shown that feature

  15. Automated detection of anesthetic depth levels using chaotic features with artificial neural networks.

    PubMed

    Lalitha, V; Eswaran, C

    2007-12-01

    Monitoring the depth of anesthesia (DOA) during surgery is very important in order to avoid patients' intraoperative awareness. Since the traditional methods of assessing DOA, which involve monitoring the heart rate, pupil size, sweating, etc., may vary from patient to patient depending on the type of surgery and the type of drug administered, modern methods based on the electroencephalogram (EEG) are preferred. EEG being a nonlinear signal, it is appropriate to use nonlinear chaotic parameters to identify the anesthetic depth levels. This paper discusses an automated detection method for anesthetic depth levels based on EEG recordings using nonlinear chaotic features and neural network classifiers. Three nonlinear parameters, namely, correlation dimension (CD), Lyapunov exponent (LE) and Hurst exponent (HE), are used as features, and two neural network models, namely, the multi-layer perceptron network (feed-forward model) and the Elman network (feedback model), are used for classification. The neural network models are trained and tested with single and multiple features derived from the chaotic parameters and the performances are evaluated in terms of sensitivity, specificity and overall accuracy. It is found from the experimental results that the Lyapunov exponent feature with the Elman network yields an overall accuracy of 99% in detecting the anesthetic depth levels. PMID:18041276
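
    For illustration only, the sketch below estimates one of the three features named above, the Hurst exponent, by classical rescaled-range (R/S) analysis; the window sizes and function name are assumptions and the paper's exact estimator may differ.

        import numpy as np

        def hurst_rs(x, min_window=8):
            """Estimate the Hurst exponent of a 1-D signal as the slope of
            log(R/S) versus log(window length), using rescaled-range analysis."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            sizes = np.unique(np.floor(np.logspace(np.log10(min_window),
                                                   np.log10(n // 2), 12)).astype(int))
            rs = []
            for w in sizes:
                chunks = x[: (n // w) * w].reshape(-1, w)
                dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
                r = dev.max(axis=1) - dev.min(axis=1)       # range of cumulative deviations
                s = chunks.std(axis=1)                      # standard deviation per chunk
                valid = s > 0
                rs.append(np.mean(r[valid] / s[valid]))
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope                                    # Hurst exponent estimate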

  16. Comparison of probability statistics for automated ship detection in SAR imagery

    NASA Astrophysics Data System (ADS)

    Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.

    1998-12-01

    This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a Constant False Alarm Rate (CFAR) filter applied to Synthetic Aperture Radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a chi-squared distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
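
    A very simplified cell-averaging CFAR sketch is given below to show the general idea; it is not the OMW implementation, and a real detector would derive its threshold from an assumed clutter distribution (e.g. the K-distribution) and a target false-alarm rate rather than the fixed pfa_factor used here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def ca_cfar(intensity, guard=2, train=8, pfa_factor=5.0):
            """Cell-averaging CFAR on a SAR intensity image: estimate the local
            clutter mean from a training ring (training window minus guard
            window) around each pixel and flag pixels exceeding pfa_factor
            times that mean."""
            img = np.asarray(intensity, dtype=float)
            big = 2 * (guard + train) + 1
            small = 2 * guard + 1
            sum_big = uniform_filter(img, size=big) * big ** 2
            sum_small = uniform_filter(img, size=small) * small ** 2
            clutter_mean = (sum_big - sum_small) / (big ** 2 - small ** 2)
            return img > pfa_factor * clutter_mean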

  17. Detection of Pharmacovigilance-Related adverse Events Using Electronic Health Records and automated Methods

    PubMed Central

    Haerian, K; Varn, D; Vaidya, S; Ena, L; Chase, HS; Friedman, C

    2013-01-01

    Electronic health records (EHRs) are an important source of data for detection of adverse drug reactions (ADRs). However, adverse events are frequently due not to medications but to the patients’ underlying conditions. Mining to detect ADRs from EHR data must account for confounders. We developed an automated method using natural-language processing (NLP) and a knowledge source to differentiate cases in which the patient’s disease is responsible for the event rather than a drug. Our method was applied to 199,920 hospitalization records, concentrating on two serious ADRs: rhabdomyolysis (n = 687) and agranulocytosis (n = 772). Our method automatically identified 75% of the cases, those with disease etiology. The sensitivity and specificity were 93.8% (confidence interval: 88.9-96.7%) and 91.8% (confidence interval: 84.0-96.2%), respectively. The method resulted in considerable saving of time: for every 1 h spent in development, there was a saving of at least 20 h in manual review. The review of the remaining 25% of the cases therefore became more feasible, allowing us to identify the medications that had caused the ADRs. PMID:22713699

  18. Automated Detection and Segmentation of Synaptic Contacts in Nearly Isotropic Serial Electron Microscopy Images

    PubMed Central

    Kreshuk, Anna; Straehle, Christoph N.; Sommer, Christoph; Koethe, Ullrich; Cantoni, Marco; Knott, Graham; Hamprecht, Fred A.

    2011-01-01

    We describe a protocol for fully automated detection and segmentation of asymmetric, presumed excitatory, synapses in serial electron microscopy images of the adult mammalian cerebral cortex, taken with the focused ion beam, scanning electron microscope (FIB/SEM). The procedure is based on interactive machine learning and only requires a few labeled synapses for training. The statistical learning is performed on geometrical features of 3D neighborhoods of each voxel and can fully exploit the high z-resolution of the data. On a quantitative validation dataset of 111 synapses in 409 images of 1948×1342 pixels with manual annotations by three independent experts the error rate of the algorithm was found to be comparable to that of the experts (0.92 recall at 0.89 precision). Our software offers a convenient interface for labeling the training data and the possibility to visualize and proofread the results in 3D. The source code, the test dataset and the ground truth annotation are freely available on the website http://www.ilastik.org/synapse-detection. PMID:22031814

  19. Automated Detection of Postoperative Surgical Site Infections Using Supervised Methods with Electronic Health Record Data.

    PubMed

    Hu, Zhen; Simon, Gyorgy J; Arsoniadis, Elliot G; Wang, Yan; Kwaan, Mary R; Melton, Genevieve B

    2015-01-01

    The National Surgical Quality Improvement Project (NSQIP) is widely recognized as "the best in the nation" surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual and expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP's wider dissemination. In this work, we propose an automated surgical adverse event detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine-learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors' burden. PMID:26262143

  20. Automated detection of remineralization in simulated enamel lesions with PS-OCT

    NASA Astrophysics Data System (ADS)

    Lee, Robert C.; Darling, Cynthia L.; Fried, Daniel

    2014-02-01

    Previous in vitro and in vivo studies have demonstrated that polarization-sensitive optical coherence tomography (PS-OCT) can be used to nondestructively image the subsurface structure and measure the thickness of the highly mineralized transparent surface zone of caries lesions. There are structural differences between active lesions and arrested lesions, and the surface layer thickness may correlate with activity of the lesion. The purpose of this study was to develop a method that can be used to automatically detect and measure the thickness of the transparent surface layer in PS-OCT images. Automated methods of analysis were used to measure the thickness of the transparent layer and the depth of the bovine enamel lesions produced using simulated caries models that emulate demineralization in the mouth. The transparent layer thickness measured with PS-OCT correlated well with polarization light microscopy (PLM) measurements of all regions (r2=0.9213). This study demonstrates that PS-OCT can automatically detect and measure thickness of the transparent layer formed due to remineralization in simulated caries lesions.

  1. High-throughput screening of zebrafish embryos using automated heart detection and imaging.

    PubMed

    Spomer, Waldemar; Pfriem, Alexander; Alshut, Rüdiger; Just, Steffen; Pylatiuk, Christian

    2012-12-01

    Over the past decade, the zebrafish has become a key model organism in genetic screenings and drug discovery. A number of genes have been identified to affect the development of the shape and functioning of the heart, leading to zebrafish mutants with heart defects. The development of semiautomated microscopy systems has allowed for the investigation of drugs that reverse a disease phenotype on a larger scale. However, there is a lack of automated feature detection, and commercially available computer-aided microscopes are expensive. Screening of the zebrafish heart for drug discovery typically includes the identification of heart parameters, such as the frequency or fractional shortening. Until now, screening processes have been characterized by manual handling of the larvae and manual microscopy. Here, an intelligent robotic microscope is presented, which automatically identifies the orientation of a zebrafish in a micro well. A predefined region of interest, such as the heart, is detected automatically, and a video with higher magnification is recorded. Screening of a 96-well plate takes 35 to 55 min, depending on the length of the videos. Of the zebrafish hearts, 75% are recorded accurately without any user interaction. A description of the system, including the graphical user interface, is given. PMID:23053930

  2. Performance of the Automated Neuropsychological Assessment Metrics (ANAM) in Detecting Cognitive Impairment in Heart Failure Patients

    PubMed Central

    Xie, Susan S.; Goldstein, Carly M.; Gathright, Emily C.; Gunstad, John; Dolansky, Mary A.; Redle, Joseph; Hughes, Joel W.

    2015-01-01

    Objective Evaluate capacity of the Automated Neuropsychological Assessment Metrics (ANAM) to detect cognitive impairment (CI) in heart failure (HF) patients. Background CI is a key prognostic marker in HF. Though the most widely used cognitive screen in HF, the Mini-Mental State Examination (MMSE) is insufficiently sensitive. The ANAM has demonstrated sensitivity to cognitive domains affected by HF, but has not been assessed in this population. Methods Investigators administered the ANAM and MMSE to 57 HF patients, compared against a composite model of cognitive function. Results ANAM efficiency (p < .05) and accuracy scores (p < .001) successfully differentiated CI and non-CI. ANAM efficiency and accuracy scores classified 97.7% and 93.0% of non-CI patients, and 14.3% and 21.4% with CI, respectively. Conclusions The ANAM is more effective than the MMSE for detecting CI, but further research is needed to develop a more optimal cognitive screen for routine use in HF patients. PMID:26354858

  3. Detecting past changes of effective population size

    PubMed Central

    Nikolic, Natacha; Chevalet, Claude

    2014-01-01

    Understanding and predicting population abundance is a major challenge confronting scientists. Several genetic models have been developed using microsatellite markers to estimate present and ancestral effective population sizes. However, obtaining an overview of the evolution of a population requires that past fluctuations in population size be traceable. To address this question, we developed a new model that estimates past changes in effective population size from microsatellite data by resolving coalescence theory and using approximate likelihoods in a Markov chain Monte Carlo approach. The efficiency of the model and its sensitivity to gene flow and to assumptions on the mutational process were checked using simulated data and analysis. The model was found especially useful for providing evidence of transient changes of population size in the past. The times at which past demographic events become undetectable because they are too ancient, and the risk that gene flow may suggest the false detection of a bottleneck, are discussed in light of the distribution of coalescence times. The method was applied to real data sets from several Atlantic salmon populations. The method, called VarEff (Variation of Effective size), was implemented in the R package VarEff and is made available at https://qgsp.jouy.inra.fr and at http://cran.r-project.org/web/packages/VarEff. PMID:25067949

  4. Point pattern match-based change detection in a constellation of previously detected objects

    DOEpatents

    Paglieroni, David W.

    2016-06-07

    A method and system is provided that applies attribute- and topology-based change detection to objects that were detected on previous scans of a medium. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, detection strength, size, elongation, orientation, etc. The locations define a three-dimensional network topology forming a constellation of previously detected objects. The change detection system stores attributes of the previously detected objects in a constellation database. The change detection system detects changes by comparing the attributes and topological consistency of newly detected objects encountered during a new scan of the medium to previously detected objects in the constellation database. The change detection system may receive the attributes of the newly detected objects as the objects are detected by an object detection system in real time.
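
    As a location-only illustration of the comparison step described above (not the patented attribute- and topology-based method itself), the sketch below matches newly detected objects against a constellation of previously detected objects with a KD-tree; the match radius and all names are assumptions, and a fuller system would also compare stored attributes and neighbourhood topology.

        import numpy as np
        from scipy.spatial import cKDTree

        def classify_new_detections(constellation_xyz, new_xyz, match_radius=1.0):
            """Split newly detected objects into re-detections of constellation
            objects ('matched_pairs': (new index, constellation index)) and
            potential changes ('new_objects'), based purely on position."""
            tree = cKDTree(np.asarray(constellation_xyz, dtype=float))
            dist, idx = tree.query(np.asarray(new_xyz, dtype=float),
                                   distance_upper_bound=match_radius)
            matched = np.isfinite(dist)          # unmatched queries get dist = inf
            return {
                "matched_pairs": list(zip(np.flatnonzero(matched), idx[matched])),
                "new_objects": np.flatnonzero(~matched),
            }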

  5. Analysis of the disagreement between automated bioluminescence-based and culture methods for detecting significant bacteriuria, with proposals for standardizing evaluations of bacteriuria detection methods.

    PubMed Central

    Nichols, W W; Curtis, G D; Johnston, H H

    1982-01-01

    A fully automated method for detecting significant bacteriuria is described which uses firefly luciferin and luciferase to detect bacterial ATP in urine. The automated method was calibrated and evaluated, using 308 urine specimens, against two reference culture methods. We obtained a specificity of 0.79 and sensitivity of 0.75 using a quantitative pour plate reference test and a specificity of 0.79 and a sensitivity of 0.90 using a semiquantitative standard loop reference test. The majority of specimens negative by the automated test but positive by the pour plate reference test were specimens which grew several bacterial species. We suggest that such disagreement was most likely for urine containing around 10(5) colony-forming units per ml (the culture threshold of positivity) and that these specimens were ones contaminated by urethral or vaginal flora. We propose standard procedures for calibrating and evaluating rapid or automated methods for the detection of significant bacteriuria and have analyzed our results using these procedures. We recommend that identical analyses should be reported for other evaluations of bacteriuria detection methods. PMID:6808012

  6. Automated Indexing of Internet Stories for Health Behavior Change: Weight Loss Attitude Pilot Study

    PubMed Central

    Manuvinakurike, Ramesh; Velicer, Wayne F

    2014-01-01

    Background Automated health behavior change interventions show promise, but suffer from high attrition and disuse. The Internet abounds with thousands of personal narrative accounts of health behavior change that could provide not only useful information and motivation for others who are also trying to change, but also an endless source of novel, entertaining stories that may keep participants more engaged than messages authored by interventionists. Objective Given a collection of relevant personal health behavior change stories gathered from the Internet, the aim of this study was to develop and evaluate an automated indexing algorithm that could select the best possible story to provide to a user to have the greatest possible impact on their attitudes toward changing a targeted health behavior, in this case weight loss. Methods An indexing algorithm was developed using features informed by theories from behavioral medicine together with text classification and machine learning techniques. The algorithm was trained using a crowdsourced dataset, then evaluated in a 2×2 between-subjects randomized pilot study. One factor compared the effects of participants reading 2 indexed stories vs 2 randomly selected stories, whereas the second factor compared the medium used to tell the stories: text or animated conversational agent. Outcome measures included changes in self-efficacy and decisional balance for weight loss before and after the stories were read. Results Participants were recruited from a crowdsourcing website (N=103; 53.4%, 55/103 female; mean age 35, SD 10.8 years; 65.0%, 67/103 precontemplation; 19.4%, 20/103 contemplation for weight loss). Participants who read indexed stories exhibited a significantly greater increase in self-efficacy for weight loss compared to the control group (F 1,107=5.5, P=.02). There were no significant effects of indexing on change in decisional balance (F 1,97=0.05, P=.83) and no significant effects of medium on change in self

  7. Automated monitoring of early neurobehavioral changes in mice following traumatic brain injury.

    PubMed

    Qu, Wenrui; Liu, Nai-Kui; Xie, Xin-Min Simon; Li, Rui; Xu, Xiao-Ming

    2016-02-01

    Traumatic brain injury often causes a variety of behavioral and emotional impairments that can develop into chronic disorders. Therefore, there is a need to shift towards identifying early symptoms that can aid in the prediction of traumatic brain injury outcomes and behavioral endpoints in patients with traumatic brain injury after early interventions. In this study, we used the SmartCage system, an automated quantitative approach, to assess behavior alterations in mice during an early phase of traumatic brain injury in their home cages. Female C57BL/6 adult mice were subjected to moderate controlled cortical impact (CCI) injury. The mice then received a battery of behavioral assessments including neurological score, locomotor activity, sleep/wake states, and anxiety-like behaviors on days 1, 2, and 7 after CCI. Histological analysis was performed on day 7 after the last assessment. Spontaneous activities on days 1 and 2 after injury were significantly decreased in the CCI group. The average percentage of sleep time spent in both dark and light cycles was significantly higher in the CCI group than in the sham group. For anxiety-like behaviors, the time spent in the light compartment and the number of transitions between the dark/light compartments were all significantly reduced in the CCI group compared with the sham group. In addition, the mice suffering from CCI exhibited a preference for staying in the dark compartment of a dark/light cage. The CCI mice showed reduced neurological scores and histological abnormalities, which correlated well with the automated behavioral assessments. Our findings demonstrate that the automated SmartCage system provides sensitive and objective measures for early behavior changes in mice following traumatic brain injury. PMID:27073377

  8. Automated monitoring of early neurobehavioral changes in mice following traumatic brain injury

    PubMed Central

    Qu, Wenrui; Liu, Nai-kui; Xie, Xin-min (Simon); Li, Rui; Xu, Xiao-ming

    2016-01-01

    Traumatic brain injury often causes a variety of behavioral and emotional impairments that can develop into chronic disorders. Therefore, there is a need to shift towards identifying early symptoms that can aid in the prediction of traumatic brain injury outcomes and behavioral endpoints in patients with traumatic brain injury after early interventions. In this study, we used the SmartCage system, an automated quantitative approach, to assess behavior alterations in mice during an early phase of traumatic brain injury in their home cages. Female C57BL/6 adult mice were subjected to moderate controlled cortical impact (CCI) injury. The mice then received a battery of behavioral assessments including neurological score, locomotor activity, sleep/wake states, and anxiety-like behaviors on days 1, 2, and 7 after CCI. Histological analysis was performed on day 7 after the last assessment. Spontaneous activities on days 1 and 2 after injury were significantly decreased in the CCI group. The average percentage of sleep time spent in both dark and light cycles was significantly higher in the CCI group than in the sham group. For anxiety-like behaviors, the time spent in the light compartment and the number of transitions between the dark/light compartments were all significantly reduced in the CCI group compared with the sham group. In addition, the mice suffering from CCI exhibited a preference for staying in the dark compartment of a dark/light cage. The CCI mice showed reduced neurological scores and histological abnormalities, which correlated well with the automated behavioral assessments. Our findings demonstrate that the automated SmartCage system provides sensitive and objective measures for early behavior changes in mice following traumatic brain injury. PMID:27073377

  9. Sink detection on tilted terrain for automated identification of glacial cirques

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Robl, Jörg; Lang, Andreas

    2016-04-01

    Glacial cirques are morphologically distinct but complex landforms and represent a vital part of high mountain topography. Their distribution, elevation and relief are expected to hold information on (1) the extent of glacial occupation, (2) the mechanism of glacial cirque erosion, and (3) how glacial processes, in concert with periglacial processes, can limit peak altitude and mountain range height. While easily detectable to the expert's eye both in nature and on various representations of topography, their complicated nature makes them a nemesis for computer algorithms. Consequently, manual mapping of glacial cirques is commonplace in many mountain landscapes worldwide, but consistent datasets of cirque distribution and objectively mapped cirques and their morphometric attributes are lacking. Among the biggest problems for algorithm development are the complexity in shape and the great variability of cirque size. For example, glacial cirques can be rather circular or longitudinal in extent, exist as individual and composite landforms, show prominent topographic depressions or be entirely filled with water or sediment. For these reasons, attributes like circularity, size, drainage area and topology of landform elements (e.g. a flat floor surrounded by steep walls) have only a limited potential for automated cirque detection. Here we present a novel, geomorphometric method for automated identification of glacial cirques on digital elevation models that exploits their genetic bowl-like shape. First, we differentiate between glacial and fluvial terrain employing an algorithm based on a moving-window approach and multi-scale curvature, which is also capable of fitting the analysis window to valley width. We then fit a plane to the valley stretch clipped by the analysis window and rotate the terrain around the center cell until the plane is level. Doing so, we produce sinks of considerable size if the clipped terrain represents a cirque, while no or only very small sinks
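
    A minimal sketch of the "level the terrain" idea described above, implemented here as a least-squares plane fit and detrend of a DEM window rather than an explicit rotation; the sink criterion, function name and window handling are assumptions, not the authors' algorithm.

        import numpy as np

        def detrend_window(dem_window):
            """Fit a least-squares plane to a square DEM window, subtract it, and
            flag cells lying well below the fitted plane as candidate sinks."""
            z = np.asarray(dem_window, dtype=float)
            rows, cols = np.indices(z.shape)
            A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(z.size)])
            coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)   # a*r + b*c + d
            plane = (A @ coeffs).reshape(z.shape)
            residual = z - plane
            sink_mask = residual < -np.std(residual)     # crude sink criterion (assumption)
            return residual, sink_mask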

  10. Topographic attributes as a guide for automated detection or highlighting of geological features

    NASA Astrophysics Data System (ADS)

    Viseur, Sophie; Le Men, Thibaud; Guglielmi, Yves

    2015-04-01

    Photogrammetry or LIDAR technology combined with photography allows geoscientists to obtain 3D high-resolution numerical representations of outcrops, generally termed Digital Outcrop Models (DOM). For over a decade, these 3D numerical outcrops have served as a support for precise and accurate interpretations of geological features such as fracture traces or planes, strata, facies mapping, etc. These interpretations have the benefit of being directly georeferenced and embedded in 3D space. They are then easily integrated into GIS or geomodeler software for modelling the subsurface geological structures in 3D. However, numerical outcrops generally represent huge data sets that are heavy to manipulate and hence to interpret. This can be particularly tedious when several scales of geological features must be investigated or when geological features are very dense and imbricated. Automated tools for interpreting geological features from DOMs would therefore be a significant help in processing these kinds of data. Such technologies are commonly used for interpreting seismic or medical data. However, it may be noticed that even if many efforts have been devoted to easily and accurately acquiring 3D topographic point clouds and photos and to visualizing accurate 3D textured DOMs, little attention has been paid to the development of algorithms for automated detection of geological structures from DOMs. The automatic detection of objects in numerical data generally assumes that signals or attributes computed from the data allow the recognition of the targeted object boundaries. The first step then consists in defining attributes that highlight the objects or their boundaries. For DOM interpretation, some authors have proposed using differential operators computed on the surface, such as normals or curvatures. These methods generally extract polylines corresponding to fracture traces or bed limits. Other approaches rely on PCA to segregate different topographic planes

  11. Robust background subtraction for automated detection and tracking of targets in wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Kent, Phil; Maskell, Simon; Payne, Oliver; Richardson, Sean; Scarff, Larry

    2012-10-01

    Performing persistent surveillance of large populations of targets is increasingly important in both the defence and security domains. In response to this, Wide Area Motion Imagery (WAMI) sensors with wide fields of view (FoV) are growing in popularity. Such WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas. The ensuing data rates are such that either very high-bandwidth data links are required (e.g. for human interpretation) or close-to-sensor automation is required to down-select salient information. For the latter case, we use an iterative quad-tree optical-flow algorithm to efficiently estimate the parameters of a perspective deformation of the background. We then use a robust estimator to simultaneously detect foreground pixels and infer the parameters of each background pixel in the current image. The resulting detections are referenced to the coordinates of the first frame and passed to a multi-target tracker. The multi-target tracker uses a Kalman filter per target and a Global Nearest Neighbour approach to multi-target data association, thereby including statistical models for missed detections and false alarms. We use spatial data structures to ensure that the tracker can scale to analysing thousands of targets. We demonstrate that real-time processing (on modest hardware) is feasible on an unclassified WAMI infra-red dataset consisting of 4096 by 4096 pixels at 1 Hz, simulating data taken from a wide-FoV sensor on a UAV. With low latency and despite intermittent obscuration and false alarms, we demonstrate persistent tracking of all but one (low-contrast) vehicular target, with no false tracks.
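
    To make the per-target Kalman filter concrete, a minimal 2-D constant-velocity sketch is given below; the noise parameters are placeholders and the Global Nearest Neighbour association step described above is deliberately omitted.

        import numpy as np

        class ConstantVelocityKF:
            """Minimal 2-D constant-velocity Kalman filter for one track;
            state is [x, y, vx, vy], measurements are [x, y] detections."""

            def __init__(self, x0, y0, dt=1.0, q=1.0, r=4.0):
                self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
                self.P = np.eye(4) * 100.0
                self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                                   [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
                self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
                self.Q = np.eye(4) * q
                self.R = np.eye(2) * r

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]                            # predicted position

            def update(self, z):
                y = np.asarray(z, dtype=float) - self.H @ self.x     # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P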

  12. Automated laser-based barely visible impact damage detection in honeycomb sandwich composite structures

    NASA Astrophysics Data System (ADS)

    Girolamo, D.; Girolamo, L.; Yuan, F. G.

    2015-03-01

    Nondestructive evaluation (NDE) for detection and quantification of damage in composite materials is fundamental in the assessment of the overall structural integrity of modern aerospace systems. Conventional NDE systems have been extensively used to detect the location and size of damage by propagating ultrasonic waves normal to the surface. However, they usually require physical contact with the structure and are time consuming and labor intensive. An automated, contactless laser ultrasonic imaging system for barely visible impact damage (BVID) detection in advanced composite structures has been developed to overcome these limitations. Lamb waves are generated by a Q-switched Nd:YAG laser, raster scanned by a set of galvano-mirrors over the damaged area. The out-of-plane vibrations are measured through a laser Doppler vibrometer (LDV) that is stationary at a point on the corner of the grid. The ultrasonic wave field of the scanned area is reconstructed in polar coordinates and analyzed for high-resolution characterization of impact damage in the composite honeycomb panel. Two methodologies are used for ultrasonic wave-field analysis: scattered wave field analysis (SWA) and standing wave energy analysis (SWEA) in the frequency domain. The SWA is employed for processing the wave field and estimating spatially dependent wavenumber values, related to discontinuities in the structural domain. The SWEA algorithm extracts standing waves trapped within damaged areas and, by studying the spectrum of the standing wave field, returns high-fidelity damage imaging. While the SWA can be used to locate the impact damage in the honeycomb panel, the SWEA produces damage images in good agreement with X-ray computed tomographic (X-ray CT) scans. The results obtained show that the laser-based nondestructive system is an effective alternative for overcoming the limitations of conventional NDI technologies.
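
    Not the SWA or SWEA algorithms themselves, but as a generic illustration of wavenumber estimation from a scanned wavefield, the sketch below takes the 2-D FFT of a wavefield snapshot and reports the dominant spatial wavenumber; the scan pitch dx and all names are assumptions.

        import numpy as np

        def dominant_wavenumber(wavefield, dx=1e-3):
            """Dominant spatial wavenumber (rad/m) of a 2-D wavefield snapshot,
            estimated from its 2-D FFT magnitude spectrum; dx is the scan pitch
            in metres. Local deviations from the nominal wavenumber hint at
            structural discontinuities."""
            w = np.asarray(wavefield, dtype=float)
            spec = np.abs(np.fft.fftshift(np.fft.fft2(w)))
            ky = np.fft.fftshift(np.fft.fftfreq(w.shape[0], d=dx)) * 2 * np.pi
            kx = np.fft.fftshift(np.fft.fftfreq(w.shape[1], d=dx)) * 2 * np.pi
            spec[w.shape[0] // 2, w.shape[1] // 2] = 0.0     # suppress the DC component
            iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
            return np.hypot(ky[iy], kx[ix])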

  13. Automated laser-based barely visible impact damage detection in honeycomb sandwich composite structures

    SciTech Connect

    Girolamo, D.; Yuan, F. G.; Girolamo, L.

    2015-03-31

    Nondestructive evaluation (NDE) for detection and quantification of damage in composite materials is fundamental in the assessment of the overall structural integrity of modern aerospace systems. Conventional NDE systems have been extensively used to detect the location and size of damage by propagating ultrasonic waves normal to the surface. However, they usually require physical contact with the structure and are time consuming and labor intensive. An automated, contactless laser ultrasonic imaging system for barely visible impact damage (BVID) detection in advanced composite structures has been developed to overcome these limitations. Lamb waves are generated by a Q-switched Nd:YAG laser, raster scanned by a set of galvano-mirrors over the damaged area. The out-of-plane vibrations are measured through a laser Doppler vibrometer (LDV) that is stationary at a point on the corner of the grid. The ultrasonic wave field of the scanned area is reconstructed in polar coordinates and analyzed for high-resolution characterization of impact damage in the composite honeycomb panel. Two methodologies are used for ultrasonic wave-field analysis: scattered wave field analysis (SWA) and standing wave energy analysis (SWEA) in the frequency domain. The SWA is employed for processing the wave field and estimating spatially dependent wavenumber values, related to discontinuities in the structural domain. The SWEA algorithm extracts standing waves trapped within damaged areas and, by studying the spectrum of the standing wave field, returns high-fidelity damage imaging. While the SWA can be used to locate the impact damage in the honeycomb panel, the SWEA produces damage images in good agreement with X-ray computed tomographic (X-ray CT) scans. The results obtained show that the laser-based nondestructive system is an effective alternative for overcoming the limitations of conventional NDI technologies.

  14. Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Morisi, Rita; Donini, Bruno; Lanconelli, Nico; Rosengarden, James; Morgan, John; Harden, Stephen; Curzen, Nick

    2015-06-01

    Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This model has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all the scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented to analyze the importance of the various features. The segmentation method proposed allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with less than 7 false positives per patient. The method we present provides an effective tool for detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.

  15. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100-g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g samples was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than in the five infested kernels per 100-g samples because some infested kernels overlapped with each other or with air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles per tube was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages. PMID:16686163

  16. Automated laser scatter detection of surface and subsurface defects in Si3N4 components

    SciTech Connect

    Steckenrider, J.S.

    1995-06-01

    Silicon nitride (Si3N4) ceramics are currently a primary material of choice to replace conventional materials in many structural applications because of their oxidation resistance and desirable mechanical and thermal properties at elevated temperatures. However, surface or near-subsurface defects, such as cracks, voids, or inclusions, significantly affect component lifetimes. These defects are currently difficult to detect, so a technique is desired for the rapid automated detection and quantification of both surface and subsurface defects. To address this issue, the authors have developed an automated system based on the detection of scattered laser light which provides a 2-D map of surface or subsurface defects. This system has been used for the analysis of flexure bars and button-head tensile rods of several Si3N4 materials. Mechanical properties of these bars have also been determined and compared with the laser scatter results.

  17. Census cities experiment in urban change detection

    NASA Technical Reports Server (NTRS)

    Wray, J. R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Work continues on mapping of 1970 urban land use from 1970 census contemporaneous aircraft photography. In addition, change detection analysis from 1972 aircraft photography is underway for several urban test sites. Land use maps, mosaics, and census overlays for the two largest urban test sites are nearing publication readiness. Preliminary examinations of ERTS-1 imagery of San Francisco Bay have been conducted which show that tracts of land of more than 10 acres in size which are undergoing development in an urban setting can be identified. In addition, each spectral band is being evaluated as to its utility for urban analyses. It has been found that MSS infrared band 7 helps to differentiate intra-urban land use details not found in other MSS bands or in the RBV coverage of the same scene. Good quality false CIR composites have been generated from 9 x 9 inch positive MSS bands using the Diazo process.

  18. Intelligent Transportation Systems: Automated Guided Vehicle Systems in Changing Logistics Environments

    NASA Astrophysics Data System (ADS)

    Schulze, L.; Behling, S.; Buhrs, S.

    2008-06-01

    The usage of Automated Guided Vehicle Systems (AGVS) is growing. This has not always been the case in the past. A new sales record is the result of inventive developments, new applications and modern thinking. One market that AGVS have not yet been able to thoroughly conquer is rapidly changing logistics environments. The advantages of AGVS in recurrent transportation used to be hindered by the need for flexibility. When managers nowadays talk about Flexible Manufacturing Systems (FMS), there is no reason not to consider AGVS. Fixed guide paths, permanent transfer stations and static routes are no longer a necessity for most AGVS producers. Flexible Manufacturing Systems can raise profitability with AGVS. When robots start saving billions in production costs, the next step at such plants is automated materials handling systems. Today, there are hundreds of instances of computer-controlled systems designed to handle and transport materials, many of which have replaced conventional human-driven platform trucks. Reduced costs due to damage and failures, tracking and tracing, as well as improved production scheduling on top of fewer personnel needs are only some of the advantages.

  19. Selective Automated Perimetry Under Photopic, Mesopic, and Scotopic Conditions: Detection Mechanisms and Testing Strategies

    PubMed Central

    Simunovic, Matthew P.; Moore, Anthony T.; MacLaren, Robert E.

    2016-01-01

    Purpose Automated scotopic, mesopic, and photopic perimetry are likely to be important paradigms in the assessment of emerging treatments of retinal diseases, yet our knowledge of the photoreceptor mechanisms detecting targets under these conditions remains largely dependent on simian data. We therefore aimed to establish the photoreceptor/postreceptoral mechanisms detecting perimetric targets in humans under photopic, mesopic, and scotopic conditions and to make recommendations for suitable clinical testing strategies for selective perimetry. Methods Perimetric sensitivities within 30° of fixation were determined for eight wavelengths (410, 440, 480, 520, 560, 600, 640, and 680 nm) under scotopic, mesopic (1.3 cd.m−2) and photopic (10 cd.m−2) conditions. Data were fitted with vector combinations of rod, S-cone, nonopponent M+L-cone mechanism, and opponent M- versus L-cone mechanism templates. Results Scotopic perimetric sensitivity was determined by rods peripherally and by a combination of rods and cones at, and immediately around, fixation. Mesopic perimetric sensitivity was mediated by M+L-cones and S-cones centrally and by M+L-cones and rods more peripherally. Photopic perimetric sensitivity was determined by an opponent M- versus L-cone, a nonopponent M+L-cone, and an S-cone mechanism centrally and by a combination of an S-cone and an M+L-cone mechanism peripherally. Conclusions Under scotopic conditions, a 480-nm stimulus provides adequate isolation (≥28 dB) of the rod mechanism. Several mechanisms contribute to mesopic sensitivity: this redundancy in detection may cause both insensitivity to broadband white targets and ambiguity in determining which mechanism is being probed with short-wavelength stimuli. M- and L-cone–derived mechanisms are well isolated at 10 cd.m−2: these may be selectively probed by a stimulus at 640 nm (≥ 20 dB isolation). Translational Relevance In human observers, multiple mechanisms contribute to the detection of Goldmann

  20. Antiscatter stationary-grid artifacts automated detection and removal in projection radiography images

    NASA Astrophysics Data System (ADS)

    Belykh, Igor; Cornelius, Craig W.

    2001-07-01

    Antiscatter grids absorb scattered radiation and increase X-ray image contrast. Stationary grids leave line artifacts or create Moire patterns on resized digital images. Various grid designs were investigated to determine relevant physical properties that affect an image. A detection algorithm is based on grid peak determination in the image's averaged 1D Fourier spectrum. Grid artifact removal is based on frequency filtering in a spatial dimension orthogonal to the grid stripes. Different filter design algorithms were investigated to choose the transfer function that maximizes the suppression of grid artifacts with minimal image distortion. Algorithms were tested on synthetic data containing a variety of SNRs and grid spatial inclinations, on radiographic data containing phantoms with and without grids, and on a set of real CR images. Detector and filter performance were optimized by using the Intel Signal Processing Library, resulting in a time of about 3 sec to process a 2Kx2.5K CR image on a Pentium II PC. There are no grid artifacts and no image blur revealed on processed images as evaluated by third party technical and medical experts. This automated grid artifact suppression method is built into a new version of Kodak PACS Link Medical Image Manager.
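
    The record above describes the detection and removal steps only at a high level. As a rough illustration of the same general idea (not the authors' implementation), the following NumPy sketch, assuming a grid with vertical stripes, finds the dominant peak in the row-averaged 1-D spectrum and suppresses it with a narrow notch filter; the function name and notch width are illustrative.

      import numpy as np

      def remove_grid_artifact(image, notch_width=3):
          """Detect a stationary-grid peak in the averaged 1-D spectrum and
          suppress it with a narrow notch (illustrative sketch only)."""
          spectra = np.fft.rfft(image, axis=1)          # per-row spectra
          mean_mag = np.abs(spectra).mean(axis=0)       # averaged 1-D spectrum
          lo = len(mean_mag) // 8                       # skip low-frequency anatomy
          peak = lo + int(np.argmax(mean_mag[lo:]))     # candidate grid frequency
          mask = np.ones_like(mean_mag)
          mask[max(peak - notch_width, 0):peak + notch_width + 1] = 0.0
          filtered = np.fft.irfft(spectra * mask, n=image.shape[1], axis=1)
          return filtered, peak

      # Synthetic test image: noise plus a vertical grid-like stripe pattern.
      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 5.0, size=(256, 256))
      img += 10.0 * np.sin(2 * np.pi * 60 * np.arange(256) / 256)
      clean, grid_bin = remove_grid_artifact(img)
      print("detected grid frequency bin:", grid_bin)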

  1. Semi-Automated Detection of Surface Degradation on Bridges Based on a Level Set Method

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Guarnieri, A.; Pirotti, F.; Vettore, A.

    2015-08-01

    Due to the effect of climate factors, natural phenomena and human usage, buildings and infrastructures are subject to progressive degradation. The deterioration of these structures has to be monitored in order to avoid hazards for human beings and for the natural environment in their neighborhood. Hence, on the one hand, monitoring such infrastructures is of primary importance. On the other hand, unfortunately, nowadays this monitoring effort is mostly done by expert and skilled personnel, who follow the overall data acquisition, analysis and result reporting process, making the whole monitoring procedure quite expensive for the public (and private, as well) agencies. This paper proposes the use of a partially user-assisted procedure in order to reduce the monitoring cost and to make the obtained result less subjective as well. The developed method relies on the use of images acquired with standard cameras by even inexperienced personnel. The deterioration on the infrastructure surface is detected by image segmentation based on a level set method. The results of the semi-automated analysis procedure are remapped onto a 3D model of the infrastructure obtained by means of a terrestrial laser scanning acquisition. The proposed method has been successfully tested on a portion of a road bridge in Perarolo di Cadore (BL), Italy.
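
    The record does not specify the particular level-set formulation used. As a minimal sketch of the same family of methods, assuming scikit-image and NumPy are available, a morphological Chan-Vese level set can separate a darker (degraded-looking) patch from the surrounding surface; the synthetic image below stands in for a photograph of a concrete surface.

      import numpy as np
      from skimage.segmentation import morphological_chan_vese

      # Synthetic stand-in for a surface photograph: bright background with
      # a darker, roughly circular "degraded" patch.
      rng = np.random.default_rng(1)
      image = rng.normal(0.7, 0.05, size=(200, 200))
      yy, xx = np.mgrid[:200, :200]
      image[(yy - 120) ** 2 + (xx - 80) ** 2 < 30 ** 2] -= 0.3

      # Evolve a morphological Chan-Vese level set; the result is a binary
      # mask that splits the image into two intensity phases.
      mask = morphological_chan_vese(image, 100, init_level_set="checkerboard",
                                     smoothing=2)
      # The two phases are unlabeled; take the smaller one as the candidate
      # degraded region.
      degraded = mask if mask.mean() < 0.5 else 1 - mask
      print(f"fraction of surface flagged: {degraded.mean():.2%}")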

  2. Automated detection of the left ventricular region in gated nuclear cardiac imaging.

    PubMed

    Boudraa, A E; Arzi, M; Sau, J; Champier, J; Hadj-Moussa, S; Besson, J E; Sappey-Marinier, D; Itti, R; Mallet, J J

    1996-04-01

    An approach to automated outlining of the left ventricular contour and its bounded area in gated isotopic ventriculography is proposed. Its purpose is to determine the ejection fraction (EF), an important parameter for measuring cardiac function. The method uses a modified version of the fuzzy C-means (MFCM) algorithm and a labeling technique. The MFCM algorithm is applied to the end diastolic (ED) frame and then the FCM is applied to the remaining images in a "box" of interest. The MFCM generates a number of fuzzy clusters. Each cluster is a substructure of the heart (left ventricle,...). A cluster validity index is used to estimate the optimum number of clusters present in the image data. This index takes account of the homogeneity of each cluster and is connected to the geometrical properties of the data set. Labeling is performed only to complete the detection process in the ED frame. Since the left ventricle (LV) cluster has the greatest area in the cardiac image sequence at the ED phase, a framing operation is performed to automatically obtain the "box" enclosing the LV cluster. The EF values assessed in 50 patients by the proposed method and by a routinely used semi-automatic one are presented. A good correlation between the EF values of the two methods is obtained (R = 0.93). The LV contour found has been judged very satisfactory by a team of trained clinicians. PMID:8626193
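
    The record describes a modified fuzzy C-means (MFCM) algorithm whose modification is not detailed here. As a rough illustration of the underlying clustering step only, the sketch below implements the standard FCM update equations in NumPy and clusters synthetic pixel intensities into three classes; all names and parameters are illustrative.

      import numpy as np

      def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
          """Standard fuzzy C-means: returns cluster centres and the fuzzy
          membership matrix U (n_samples x n_clusters)."""
          rng = np.random.default_rng(seed)
          U = rng.random((X.shape[0], n_clusters))
          U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
          for _ in range(n_iter):
              Um = U ** m
              centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
              dist = np.fmax(dist, 1e-12)
              inv = dist ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)
          return centres, U

      # Toy example: intensities standing in for background, myocardium,
      # and blood-pool pixels of a gated frame.
      rng = np.random.default_rng(42)
      pixels = np.concatenate([rng.normal(20, 3, 300),
                               rng.normal(80, 5, 300),
                               rng.normal(150, 8, 300)])[:, None]
      centres, U = fuzzy_c_means(pixels, n_clusters=3)
      print("cluster centres:", np.sort(centres.ravel()).round(1))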

  3. ECO fill: automated fill modification to support late-stage design changes

    NASA Astrophysics Data System (ADS)

    Davis, Greg; Wilson, Jeff; Yu, J. J.; Chiu, Anderson; Chuang, Yao-Jen; Yang, Ricky

    2014-03-01

    One of the most critical factors in achieving a positive return for a design is ensuring the design not only meets performance specifications, but also produces sufficient yield to meet the market demand. The goal of design for manufacturability (DFM) technology is to enable designers to address manufacturing requirements during the design process. While new cell-based, DP-aware, and net-aware fill technologies have emerged to provide the designer with automated fill engines that support these new fill requirements, design changes that arrive late in the tapeout process (as engineering change orders, or ECOs) can have a disproportionate effect on tapeout schedules, due to the complexity of replacing fill. If not handled effectively, the impacts on file size, run time, and timing closure can significantly extend the tapeout process. In this paper, the authors examine changes to design flow methodology, supported by new fill technology, that enable efficient, fast, and accurate adjustments to metal fill late in the design process. We present an ECO fill methodology coupled with the support of advanced fill tools that can quickly locate the portion of the design affected by the change, remove and replace only the fill in that area, while maintaining the fill hierarchy. This new fill approach effectively reduces run time, contains fill file size, minimizes timing impact, and minimizes mask costs due to ECO-driven fill changes, all of which are critical factors to ensuring time-to-market schedules are maintained.

  4. Detection of coronary calcifications from computed tomography scans for automated risk assessment of coronary artery disease

    SciTech Connect

    Isgum, Ivana; Rutten, Annemarieke; Prokop, Mathias; Ginneken, Bram van

    2007-04-15

    A fully automated method for coronary calcification detection from non-contrast-enhanced, ECG-gated multi-slice computed tomography (CT) data is presented. Candidates for coronary calcifications are extracted by thresholding and component labeling. These candidates include coronary calcifications, calcifications in the aorta and in the heart, and other high-density structures such as noise and bone. A dedicated set of 64 features is calculated for each candidate object. They characterize the object's spatial position relative to the heart and the aorta, for which an automatic segmentation scheme was developed, its size and shape, and its appearance, which is described by a set of approximated Gaussian derivatives for which an efficient computational scheme is presented. Three classification strategies were designed. The first one tested direct classification without feature selection. The second approach also utilized direct classification, but with feature selection. Finally, the third scheme employed two-stage classification. In a computationally inexpensive first stage, the most easily recognizable false positives were discarded. The second stage discriminated between more difficult to separate coronary calcium and other candidates. Performance of linear, quadratic, nearest neighbor, and support vector machine classifiers was compared. The method was tested on 76 scans containing 275 calcifications in the coronary arteries and 335 calcifications in the heart and aorta. The best performance was obtained employing a two-stage classification system with a k-nearest neighbor (k-NN) classifier and a feature selection scheme. The method detected 73.8% of coronary calcifications at the expense of on average 0.1 false positives per scan. A calcium score was computed for each scan and subjects were assigned one of four risk categories based on this score. The method assigned the correct risk category to 93.4% of all scans.
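
    As a very reduced illustration of the two-stage classification strategy described above (not the authors' features, thresholds, or data), the sketch below uses scikit-learn k-NN classifiers: a cheap first stage on a feature subset discards the easiest candidates, and a second stage classifies the remainder on the full feature set. The feature split and probability threshold are arbitrary.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      # Synthetic candidate objects: rows are candidates, columns are the
      # 64 position/size/shape/appearance features (here just random numbers).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 64))
      y = (rng.random(1000) < 0.1).astype(int)       # 1 = coronary calcification

      # Stage 1: inexpensive classifier on a small feature subset rejects the
      # most obvious false positives (in practice, use held-out training data).
      stage1 = KNeighborsClassifier(n_neighbors=15).fit(X[:, :8], y)
      keep = stage1.predict_proba(X[:, :8])[:, 1] > 0.05

      # Stage 2: k-NN on the full feature set separates the harder candidates.
      stage2 = KNeighborsClassifier(n_neighbors=7).fit(X[keep], y[keep])
      pred = np.zeros_like(y)
      pred[keep] = stage2.predict(X[keep])
      print("kept after stage 1:", int(keep.sum()),
            "| predicted calcifications:", int(pred.sum()))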

  5. Automated determinations of selenium in thermal power plant wastewater by sequential hydride generation and chemiluminescence detection.

    PubMed

    Ezoe, Kentaro; Ohyama, Seiichi; Hashem, Md Abul; Ohira, Shin-Ichi; Toda, Kei

    2016-02-01

    After the Fukushima disaster, power generation from nuclear power plants in Japan was completely stopped and old coal-based power plants were re-commissioned to compensate for the decrease in power generation capacity. Although coal is a relatively inexpensive fuel for power generation, it contains high levels (mg kg(-1)) of selenium, which could contaminate the wastewater from thermal power plants. In this work, an automated selenium monitoring system was developed based on sequential hydride generation and chemiluminescence detection. This method could be applied to the control of wastewater contamination. In this method, selenium is vaporized as H2Se, which reacts with ozone to produce chemiluminescence. However, interference from arsenic is of concern because the ozone-induced chemiluminescence intensity of H2Se is much lower than that of AsH3. This problem was successfully addressed by vaporizing arsenic and selenium individually in a sequential procedure using a syringe pump equipped with an eight-port selection valve and hot and cold reactors. Oxidative decomposition of organoselenium compounds and pre-reduction of the selenium were performed in the hot reactor, and vapor generation of arsenic and selenium were performed separately in the cold reactor. Sample transfers between the reactors were carried out by pneumatic air operation with switching by three-way solenoid valves. The detection limit for selenium was 0.008 mg L(-1) and the calibration curve was linear up to 1.0 mg L(-1), which provided suitable performance for controlling selenium in wastewater to around the allowable limit (0.1 mg L(-1)). This system consumes few chemicals and is stable for more than a month without any maintenance. Wastewater samples from thermal power plants were collected, and data obtained by the proposed method were compared with those from batchwise water treatment followed by hydride generation-atomic fluorescence spectrometry. PMID:26653491

  6. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic selective-attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROV), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and
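
    AVED itself uses a neuromorphic selective-attention model; the sketch below is not that algorithm but a much simpler spectral-residual saliency map (after Hou and Zhang, 2007), included only to illustrate the idea of scoring each pixel of a frame for salience and picking the most salient location. It assumes NumPy and SciPy are available, and the synthetic frame stands in for an underwater video frame.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def spectral_residual_saliency(gray):
          """Return a normalized saliency map for a grayscale frame
          (simple spectral-residual method, not the AVED model)."""
          F = np.fft.fft2(gray)
          log_amp = np.log(np.abs(F) + 1e-8)
          phase = np.angle(F)
          residual = log_amp - uniform_filter(log_amp, size=3)
          saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return saliency / saliency.max()

      rng = np.random.default_rng(3)
      frame = rng.normal(0.2, 0.02, size=(128, 128))   # murky background
      frame[60:70, 40:55] += 0.5                        # bright "organism"
      sal = spectral_residual_saliency(frame)
      y, x = np.unravel_index(np.argmax(sal), sal.shape)
      print("most salient location (row, col):", (y, x))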

  7. Automated Detection and Classification of Rockfall Induced Seismic Signals with Hidden-Markov-Models

    NASA Astrophysics Data System (ADS)

    Zeckra, M.; Hovius, N.; Burtin, A.; Hammer, C.

    2015-12-01

    Originally introduced in speech recognition, Hidden Markov Models are applied in different research fields of pattern recognition. In seismology, this technique has recently been introduced to improve common detection algorithms, like the STA/LTA ratio or cross-correlation methods, and has mainly been used for the monitoring of volcanic activity; this study is one of the first applications to seismic signals induced by geomorphologic processes. With an array of eight broadband seismometers deployed around the steep Illgraben catchment (Switzerland), which experiences high rates of erosion, we studied a sequence of landslides triggered over a period of several days in winter. A preliminary manual classification led us to identify three main seismic signal classes that were used as a starting point for the HMM automated detection and classification: (1) rockslide signal, including a failure source and the debris mobilization along the slope, (2) rockfall signal from the remobilization of debris along the unstable slope, and (3) single cracking signal from the affected cliff observed before the rockslide events. Besides the ability to classify the whole dataset automatically, the HMM approach reflects the origin and the interactions of the three signal classes, which helps us to understand this geomorphic crisis and the possible triggering mechanisms for slope processes. The temporal distribution of crack events (duration > 5 s, frequency band [2-8] Hz) follows an inverse Omori law, pointing to the catastrophic behaviour of the failure mechanism and to its interest for warning purposes in rockslide risk assessment. Thanks to a dense seismic array and independent weather observations in the landslide area, this dataset also provides information about the triggering mechanisms, which exhibit a tight link between rainfall and freezing level fluctuations.
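
    The study's classifier distinguishes three signal classes in continuous data; as a much smaller illustration of fitting a hidden Markov model to a seismic-like feature stream, the sketch below uses the hmmlearn package (assumed installed) to separate a synthetic background state from a transient event state. The features and all parameters are made up.

      import numpy as np
      from hmmlearn import hmm   # third-party package, assumed installed

      # Synthetic per-window features (e.g. log-energy, dominant frequency):
      # quiet background with one rockfall-like burst in the middle.
      rng = np.random.default_rng(7)
      quiet = rng.normal([0.0, 2.0], 0.3, size=(300, 2))
      event = rng.normal([3.0, 5.0], 0.5, size=(60, 2))
      stream = np.vstack([quiet, event, quiet])

      # Fit a 2-state Gaussian HMM and decode the most likely state sequence;
      # one state should absorb the background, the other the transient event.
      model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                              n_iter=100, random_state=0)
      model.fit(stream)
      states = model.predict(stream)
      print("state changes at samples:", np.flatnonzero(np.diff(states)) + 1)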

  8. Advanced Automated Solar Filament Detection and Characterization Code: Description, Performance, and Results

    NASA Astrophysics Data System (ADS)

    Bernasconi, P. N.; Rust, D. M.

    2004-12-01

    We have developed a code for automated detection and classification of solar filaments in full-disk H-alpha images that can contribute to Living With a Star science investigations and space weather forecasting. The program can reliably identify filaments, determine their chirality and other relevant parameters, such as the filament area and average orientation with respect to the equator, and is capable of tracking the day-by-day evolution of filaments while they travel across the visible disk. Detecting filaments when they appear and tracking their evolution can provide not only early warnings of potentially hazardous conditions but also improve our understanding of solar filaments and their implications for space weather at 1 AU. The code was recently tested by analyzing daily H-alpha images taken at the Big Bear Solar Observatory during a period of four years (from mid 2000 until mid 2004). It identified and established the chirality of more than 5000 filaments without human intervention. We compared the results with the filament list manually compiled by Pevtsov et al. (2003) over the same period of time. The computer list matches the Pevtsov et al. list fairly well. The code results confirm the hemispherical chirality rule: dextral filaments predominate in the north and sinistral ones predominate in the south. The main difference between the two lists is that the code finds significantly more filaments without an identifiable chirality. This may be due to a tendency of human operators to be biased, thereby assigning a chirality in less clear cases, while the code is totally unbiased. We also have found evidence that filaments with definite chirality tend to be larger and last longer than the ones without a clear chirality signature. We will describe the major code characteristics and present and discuss the test results.

  9. A completely automated CAD system for mass detection in a large mammographic database

    SciTech Connect

    Bellotti, R.; De Carlo, F.; Tangaro, S.

    2006-08-15

    Mass localization plays a crucial role in computer-aided detection (CAD) systems for the classification of suspicious regions in mammograms. In this article we present a completely automated classification system for the detection of masses in digitized mammographic images. The system we discuss consists of three processing levels: (a) Image segmentation for the localization of regions of interest (ROIs). This step relies on an iterative dynamical threshold algorithm able to select iso-intensity closed contours around gray level maxima of the mammogram. (b) ROI characterization by means of textural features computed from the gray tone spatial dependence matrix (GTSDM), containing second-order spatial statistics information on the pixel gray level intensity. As the images under study were recorded in different centers and with different machine settings, eight GTSDM features were selected so as to be invariant under monotonic transformation. In this way, the images do not need to be normalized, as the adopted features depend only on the texture rather than on the absolute gray tone levels. (c) ROI classification by means of a neural network, with supervision provided by the radiologist's diagnosis. The CAD system was evaluated on a large database of 3369 mammographic images [2307 negative, 1062 pathological (or positive), containing at least one confirmed mass, as diagnosed by an expert radiologist]. To assess the performance of the system, receiver operating characteristic (ROC) and free-response ROC analysis were employed. The area under the ROC curve was found to be Az = 0.783 ± 0.008 for the ROI-based classification. When evaluating the accuracy of the CAD against the radiologist-drawn boundaries, 4.23 false positives per image are found at 80% mass sensitivity.
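
    The CAD system's eight invariant GTSDM features are not listed in this record. Purely to illustrate what second-order co-occurrence statistics look like, the NumPy sketch below builds a gray-tone co-occurrence matrix for one pixel offset and computes a few generic texture measures (not necessarily the authors' feature set); the quantisation level and offset are arbitrary.

      import numpy as np

      def glcm_features(patch, levels=16, dx=1, dy=0):
          """Co-occurrence matrix for one offset plus a few texture statistics."""
          q = np.floor(patch / patch.max() * (levels - 1)).astype(int)
          glcm = np.zeros((levels, levels))
          h, w = q.shape
          src = q[:h - dy, :w - dx]          # reference pixels
          dst = q[dy:, dx:]                  # neighbours at offset (dy, dx)
          np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
          glcm /= glcm.sum()
          i, j = np.indices(glcm.shape)
          return {
              "contrast": float(np.sum(glcm * (i - j) ** 2)),
              "energy": float(np.sum(glcm ** 2)),
              "homogeneity": float(np.sum(glcm / (1.0 + np.abs(i - j)))),
              "entropy": float(-np.sum(glcm[glcm > 0] * np.log(glcm[glcm > 0]))),
          }

      rng = np.random.default_rng(5)
      roi = rng.integers(0, 255, size=(64, 64)).astype(float)  # stand-in ROI
      print(glcm_features(roi))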

  10. Attribute and topology based change detection in a constellation of previously detected objects

    DOEpatents

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
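
    The patent record above describes the approach only in general terms. A toy sketch of the attribute-matching idea (made-up attributes, thresholds, and data; the topology handling of the actual system is not reproduced) might look like this:

      import math

      # Hypothetical constellation database of previously detected objects.
      constellation = [
          {"id": 1, "x": 10.0, "y": 4.0, "size": 1.2, "orient": 15.0},
          {"id": 2, "x": 52.5, "y": 7.1, "size": 0.8, "orient": 80.0},
      ]

      def match(obj, db, max_dist=2.0, max_size_diff=0.5):
          """Return the id of a stored object consistent with the new detection,
          or None if the detection looks like a change."""
          for prev in db:
              dist = math.hypot(obj["x"] - prev["x"], obj["y"] - prev["y"])
              if dist <= max_dist and abs(obj["size"] - prev["size"]) <= max_size_diff:
                  return prev["id"]
          return None

      # New detections from the latest scan: one consistent, one new object.
      new_scan = [
          {"x": 10.3, "y": 4.2, "size": 1.1, "orient": 14.0},
          {"x": 30.0, "y": 9.0, "size": 2.0, "orient": 5.0},
      ]
      for obj in new_scan:
          m = match(obj, constellation)
          if m is None:
              print("change detected at", (obj["x"], obj["y"]))
          else:
              print("consistent with previously detected object", m)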

  11. Automated detection and measurement of isolated retinal arterioles by a combination of edge enhancement and cost analysis.

    PubMed

    Fernández, José A; Bankhead, Peter; Zhou, Huiyu; McGeown, J Graham; Curtis, Tim M

    2014-01-01

    Pressure myography studies have played a crucial role in our understanding of vascular physiology and pathophysiology. Such studies depend upon the reliable measurement of changes in the diameter of isolated vessel segments over time. Although several software packages are available to carry out such measurements on small arteries and veins, no such software exists to study smaller vessels (<50 µm in diameter). We provide here a new, freely available open-source algorithm, MyoTracker, to measure and track changes in the diameter of small isolated retinal arterioles. The program has been developed as an ImageJ plug-in and uses a combination of cost analysis and edge enhancement to detect the vessel walls. In tests performed on a dataset of 102 images, automatic measurements were found to be comparable to those of manual ones. The program was also able to track both fast and slow constrictions and dilations during intraluminal pressure changes and following application of several drugs. Variability in automated measurements during analysis of videos and processing times were also investigated and are reported. MyoTracker is a new software to assist during pressure myography experiments on small isolated retinal arterioles. It provides fast and accurate measurements with low levels of noise and works with both individual images and videos. Although the program was developed to work with small arterioles, it is also capable of tracking the walls of other types of microvessels, including venules and capillaries. It also works well with larger arteries, and therefore may provide an alternative to other packages developed for larger vessels when its features are considered advantageous. PMID:24626349

  12. Automated detection of rare-event pathogens through time-gated luminescence scanning microscopy.

    PubMed

    Lu, Yiqing; Jin, Dayong; Leif, Robert C; Deng, Wei; Piper, James A; Yuan, Jingli; Duan, Yusheng; Huo, Yujing

    2011-05-01

    Many microorganisms have a very low threshold (<10 cells) to trigger infectious diseases, and, in these cases, it is important to determine the absolute cell count in a low-cost and speedy fashion. Fluorescent microscopy is a routine method; however, one fundamental problem has been associated with the existence in the sample of large numbers of nontarget particles, which are naturally autofluorescent, thereby obscuring the visibility of target organisms. This severely affects both direct visual inspection and automated microscopy based on computer pattern recognition. We report a novel strategy of time-gated luminescent scanning for accurate counting of rare-event cells, which exploits the large difference in luminescence lifetimes between the lanthanide biolabels, >100 μs, and the autofluorescence backgrounds, <0.1 μs, to render background autofluorescence invisible to the detector. Rather than having to resort to sophisticated imaging analysis, the background-free feature allows a single-element photomultiplier to locate rare-event cells, so that requirements for data storage and analysis are minimized to the level of image confirmation only at the final step. We have evaluated this concept in a prototype instrument using a 2D scanning stage and applied it to the detection of rare-event Giardia labeled with a europium complex. For a slide area of 225 mm(2), the time-gated scanning method easily reduced the original 40,000 adjacent elements (0.075 mm × 0.075 mm) down to a few "elements of interest" containing the Giardia cysts. We achieved an average signal-to-background ratio of 41.2 (minimum ratio of 12.1). Such high contrasts ensured the accurate mapping of all the potential Giardia cysts free of false positives or negatives. This was confirmed by the automatic retrieving and time-gated luminescence bioimaging of these Giardia cysts. Such automated microscopy based on time-gated scanning can provide novel solutions for quantitative diagnostics in advanced

  13. The Impacts of Changes in Snowfall on Soil Greenhouse Gas Emissions Using an Automated Chamber System

    NASA Astrophysics Data System (ADS)

    Ruan, L.; Kahmark, K.; Robertson, G.

    2012-12-01

    Snow cover has decreased in many regions of the northern hemisphere and is projected to decrease further in most. The reduced snow cover may enhance soil freezing and increase the depth of frost. The frequency of freeze-thaw cycles is likely to increase due to the reduction of snowpack thickness. Freeze and thaw cycles can strongly affect soil C and N dynamics. Pulses of N2O and CO2 emissions from soil after thawing have been reported in various studies; however, most studies were based on controlled laboratory conditions or low-resolution static chamber methods in situ. Near-continuous automated chambers provide the temporal resolution needed for capturing short-lived pulses of greenhouse gases after intermittent melting events. We investigated the winter and spring response of soil greenhouse gas emissions (CO2, CH4 and N2O) to changes of snow depth using an automated chamber system. This study was established in 2010 at the Kellogg Biological Station (KBS) in southwest Michigan. The plot was no-till rotational (corn-soybean-wheat) cropland, most recently in corn. The experiment was a completely randomized design (CRD) with three levels of snow depth: ambient, double, and no snow. Each level had four replicates. Twelve automated chambers were randomly assigned to treatments and greenhouse gas fluxes were measured 4 times per day in each plot. There were more freeze-thaw cycles in the no snow treatment than in the ambient and double snow treatments. Soil temperature at 5 cm depth was more variable in the no snow treatment than in the ambient and double snow treatments. CH4 fluxes were uniformly low with no significant difference across the three treatments. CO2 showed expected seasonal changes with the highest emission in spring and lowest emissions through the winter. N2O peaks were higher in spring due to freeze-thaw effects, and cumulative N2O fluxes were substantially higher in the no snow treatment than in the ambient and double snow treatments.

  14. Automated detection and analysis of particle beams in laser-plasma accelerator simulations

    SciTech Connect

    Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.; Bethel, E. Wes; Jacobsen, J.; Prabhat; Ruebel, O.; Weber, G.; Hamann, B.

    2010-05-21

    scientific data mining is increasingly considered. In plasma simulations, Bagherjeiran et al. presented a comprehensive report on applying graph-based techniques for orbit classification. They used the KAM classifier to label points and components in single and multiple orbits. Love et al. conducted an image space analysis of coherent structures in plasma simulations. They used a number of segmentation and region-growing techniques to isolate regions of interest in orbit plots. Both approaches analyzed particle accelerator data, targeting the system dynamics in terms of particle orbits. However, they did not address particle dynamics as a function of time or inspect the behavior of bunches of particles. Ruebel et al. addressed the visual analysis of massive laser wakefield acceleration (LWFA) simulation data using interactive procedures to query the data. Sophisticated visualization tools were provided to inspect the data manually. Ruebel et al. have integrated these tools into the visualization and analysis system VisIt, in addition to utilizing efficient data management based on HDF5, H5Part, and the index/query tool FastBit. Ruebel et al. further proposed automatic beam path analysis using a suite of methods to classify particles in simulation data and to analyze their temporal evolution. To enable researchers to accurately define particle beams, the method computes a set of measures based on the path of particles relative to the distance of the particles to a beam. To achieve good performance, this framework uses an analysis pipeline designed to quickly reduce the amount of data that needs to be considered in the actual path distance computation. As part of this process, region-growing methods are utilized to detect particle bunches at single time steps. Efficient data reduction is essential to enable automated analysis of large data sets as described in the next section, where data reduction methods are steered to the particular requirements of our clustering analysis

  15. A method for automated control of belt velocity changes with an instrumented treadmill.

    PubMed

    Hinkel-Lipsker, Jacob W; Hahn, Michael E

    2016-01-01

    Increased practice difficulty during asymmetrical split-belt treadmill rehabilitation has been shown to improve gait outcomes during retention and transfer tests. However, research in this area has been limited by manual treadmill operation. In the case of variable practice, which requires stride-by-stride changes to treadmill belt velocities, the treadmill control must be automated. This paper presents a method for automation of asymmetrical split-belt treadmill walking and evaluates how well this method performs with regard to the timing of gait events. One participant walked asymmetrically for 100 strides, where the non-dominant limb was driven at their self-selected walking speed, while the other limb was driven randomly on a stride-by-stride basis. In the control loop, the key factors to ensure that the treadmill belt had accelerated to its new velocity safely during the swing phase were the sampling rate of the A/D converter, the processing time within the controller software, and the acceleration of the treadmill belt. The combination of these three factors resulted in a total control loop time during each swing phase that satisfied these requirements with a factor of safety greater than 4. Further, a polynomial fit indicated that belt acceleration was the largest contributor to changes in this total time. This approach appears to be safe and reliable for stride-by-stride adjustment of treadmill belt speed, making it suitable for future asymmetrical split-belt walking studies. Further, it can be incorporated into virtual reality rehabilitation paradigms that utilize split-belt treadmill walking. PMID:26654110
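
    The three timing factors named above (A/D sampling, controller processing, and belt acceleration) have to fit within the swing phase. The short sketch below simply checks such a timing budget; all numbers are illustrative assumptions, not values taken from the study.

      # Rough timing-budget check for a stride-by-stride belt speed change.
      sample_period = 1.0 / 1000.0      # A/D sampling at 1 kHz (assumed)
      processing_time = 0.005           # controller software latency, s (assumed)
      delta_v = 0.3                     # commanded belt speed change, m/s (assumed)
      belt_acceleration = 5.0           # belt acceleration limit, m/s^2 (assumed)
      swing_time = 0.40                 # typical swing-phase duration, s (assumed)

      total_loop_time = sample_period + processing_time + delta_v / belt_acceleration
      factor_of_safety = swing_time / total_loop_time
      print(f"total loop time: {total_loop_time * 1000:.1f} ms, "
            f"factor of safety: {factor_of_safety:.1f}")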

  16. Change detection from very high resolution satellite time series with variable off-nadir angle

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Brumana, Raffaella; Cuca, Branka; Previtali, Mattia

    2015-06-01

    Very high resolution (VHR) satellite images have the potential for revealing changes that occurred over time with a superior level of detail. However, their use for metric purposes requires accurate geo-localization with ancillary DEMs and GCPs to achieve sub-pixel terrain correction, in order to obtain images useful for mapping applications. Change detection with a time series of VHR images is not a simple task because images acquired with different off-nadir angles lack pixel-to-pixel image correspondence, even after accurate geo-correction. This paper presents a procedure for automatic change detection able to deal with variable off-nadir angles. The case study concerns the identification of damaged buildings from pre- and post-event images acquired on the historic center of L'Aquila (Italy), which was struck by an earthquake in April 2009. The developed procedure is a multi-step approach where (i) classes are assigned to both images via object-based classification, (ii) an initial alignment is provided with an automated tile-based rubber sheeting interpolation on the extracted layers, and (iii) change detection is carried out removing residual mis-registration issues resulting in elongated features close to building edges. The method is fully automated except for some thresholds that can be interactively set to improve the visualization of the damaged buildings. The experimental results proved that damages can be automatically found without additional information, such as digital surface models, SAR data, or thematic vector layers.

  17. Ensembles of detectors for online detection of transient changes

    NASA Astrophysics Data System (ADS)

    Artemov, Alexey; Burnaev, Evgeny

    2015-12-01

    Classical change-point detection procedures assume that the change-point model is known and that a change consists in establishing a new observation regime, i.e. the change lasts infinitely long. These modeling assumptions contradict the statements of many applied problems. Therefore, even theoretically optimal statistics very often fail in practice when detecting transient changes online. In this work, in order to overcome the limitations of classical change-point detection procedures, we consider approaches to constructing ensembles of change-point detectors, i.e. algorithms that use many detectors to reliably identify a change-point. We propose a learning paradigm and specific implementations of ensembles for the detection of short-term (transient) changes in observed time series. We demonstrate by means of numerical experiments that the performance of an ensemble is superior to that of conventional change-point detection procedures.
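
    The paper develops a learning paradigm for combining detectors; the sketch below is far simpler and only conveys the ensemble idea: two elementary detectors (a one-sided CUSUM and a window-mean test, with arbitrary parameters) vote on a synthetic series containing a short transient shift.

      import numpy as np

      def cusum(x, drift=0.5, threshold=8.0):
          """One-sided CUSUM detector; returns a boolean alarm series."""
          s, alarms = 0.0, np.zeros(len(x), dtype=bool)
          for i, v in enumerate(x):
              s = max(0.0, s + v - drift)
              alarms[i] = s > threshold
          return alarms

      def window_mean_shift(x, w=20, threshold=2.0):
          """Flags points where the recent window mean departs from the
          history mean by more than `threshold` standard errors."""
          alarms = np.zeros(len(x), dtype=bool)
          for i in range(w, len(x)):
              ref = x[:i - w]
              if len(ref) > 1 and ref.std() > 0:
                  z = (x[i - w:i].mean() - ref.mean()) / (ref.std() / np.sqrt(w))
                  alarms[i] = abs(z) > threshold
          return alarms

      rng = np.random.default_rng(11)
      x = rng.normal(0, 1, 600)
      x[300:340] += 2.5                                      # short transient change

      votes = np.vstack([cusum(x), window_mean_shift(x)])
      ensemble_alarm = votes.sum(axis=0) == votes.shape[0]   # all detectors agree
      first = np.flatnonzero(ensemble_alarm)
      print("first ensemble alarm at sample:", int(first[0]) if len(first) else None)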

  18. Impact of an Automated Surveillance to Detect Surgical-Site Infections in Patients Undergoing Total Hip and Knee Arthroplasty in Brazil.

    PubMed

    Perdiz, Luciana B; Yokoe, Deborah S; Furtado, Guilherme H; Medeiros, Eduardo A S

    2016-08-01

    In this retrospective study, we compared automated surveillance with conventional surveillance to detect surgical site infection after primary total hip or knee arthroplasty. Automated surveillance demonstrated better efficacy than routine surveillance in SSI diagnosis, sensitivity, and negative predictive value in hip and knee arthroplasty. Infect Control Hosp Epidemiol 2016;37:991-993. PMID:27072598

  19. Macrothrombocytopenia in north India: role of automated platelet data in the detection of an under diagnosed entity.

    PubMed

    Kakkar, Naveen; John, M Joseph; Mathew, Amrith

    2015-03-01

    Congenital macrothrombocytopenia is being increasingly recognised because of the increasing availability of automated platelet counts during routine complete blood counts. If not recognised, these patients may be unnecessarily investigated or treated. The study was done to assess the occurrence of macrothrombocytopenia in the North Indian population and the role of automated platelet parameters in its detection. This prospective study was done on patients whose blood samples were sent for CBC to the hematology laboratory of a tertiary care hospital. Samples were run on the Advia-120, a 5-part differential automated analyzer. Routine blood parameters including platelet count, mean platelet volume (MPV), platelet cytogram pattern and platelet flagging were studied along with peripheral blood smear examination. ANOVA was used to compare the difference in mean MPV in patients with macrothrombocytopenia and those with secondary thrombocytopenia and ITP. Seventy-five (0.6 %) patients with CBC evaluation were found to have macrothrombocytopenia, the majority (96 %) of North Indian origin. The MPV (fl) in the 75 patients ranged from 10.9 to 23.3 (mean 15.1 ± 3.1 fl) with a dispersed cytogram pattern distinct from that seen in patients with normal platelet count, raised platelet count or low platelets due to secondary thrombocytopenia (MPV 10.9 ± 2.6) or ITP (10.8 ± 3.5). The difference in mean MPV in these patients was statistically significant (p < 0.00001). Macrothrombocytopenia is an under-diagnosed condition and may be initially suspected on automated blood counts. Along with a blood smear examination, automated data (MPV and platelet cytogram pattern) aid the diagnosis and can avoid unnecessary investigations and interventions for these patients. PMID:25548447

  20. Erratum to: Automated Sample Preparation Method for Suspension Arrays using Renewable Surface Separations with Multiplexed Flow Cytometry Fluorescence Detection

    SciTech Connect

    Grate, Jay W.; Bruckner-Lea, Cindy J.; Jarrell, Ann E.; Chandler, Darrell P.

    2003-04-10

    In this paper we describe a new method of automated sample preparation for multiplexed biological analysis systems that use flow cytometry fluorescence detection. In this approach, color-encoded microspheres derivatized to capture particular biomolecules are temporarily trapped in a renewable surface separation column to enable perfusion with sample and reagents prior to delivery to the detector. This method provides for separation of the biomolecules of interest from other sample matrix components as well as from labeling solutions.

  1. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection

    SciTech Connect

    Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.; Summers, Ronald M.

    2011-12-15

    Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged-stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, thin-stool linings on the colonic wall, near polyps and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without using the stool subtraction module. This detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data to 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD by approximately 15%.

  2. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
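
    The record above mentions that each connected component of the change mask is screened by its area, width and height. A minimal sketch of that filtering step, assuming SciPy and NumPy and using a made-up 2-D change mask and arbitrary thresholds (the height criterion is omitted for brevity), is shown below.

      import numpy as np
      from scipy import ndimage

      # Hypothetical change mask between two LIDAR epochs: True where the
      # gridded height changed by more than a tolerance.
      rng = np.random.default_rng(2)
      change_mask = rng.random((200, 200)) < 0.002       # scattered noise
      change_mask[50:80, 60:110] = True                  # extended building part

      labels, n = ndimage.label(change_mask)
      min_area, min_width = 100, 5                       # illustrative thresholds
      for cc in range(1, n + 1):
          ys, xs = np.nonzero(labels == cc)
          area = len(ys)
          width = min(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
          if area >= min_area and width >= min_width:
              print(f"component {cc}: area={area} cells -> candidate building change")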

  3. Eye Movements and Display Change Detection during Reading

    ERIC Educational Resources Information Center

    Slattery, Timothy J.; Angele, Bernhard; Rayner, Keith

    2011-01-01

    In the boundary change paradigm (Rayner, 1975), when a reader's eyes cross an invisible boundary location, a preview word is replaced by a target word. Readers are generally unaware of such changes due to saccadic suppression. However, some readers detect changes on a few trials and a small percentage of them detect many changes. Two experiments…

  4. Automated detection of case clusters of waterborne acute gastroenteritis from health insurance data - pilot study in three French districts.

    PubMed

    Rambaud, Loïc; Galey, Catherine; Beaudeau, Pascal

    2016-04-01

    This pilot study was conducted to assess the utility of using a health insurance database for the automated detection of waterborne outbreaks of acute gastroenteritis (AGE). The weekly number of AGE cases for which the patient consulted a doctor (cAGE) was derived from this database for 1,543 towns in three French districts during the 2009-2012 period. The method we used is based on a spatial comparison of incidence rates and of their time trends between the target town and the district. Each municipality was tested, week by week, for the entire study period. Overall, 193 clusters were identified, 10% of the municipalities were involved in at least one cluster and less than 2% in several. We can infer that nationwide more than 1,000 clusters involving 30,000 cases of cAGE each year may be linked to tap water. The clusters discovered with this automated detection system will be reported to local operators for investigation of the situations at highest risk. This method will be compared with others before automated detection is implemented on a national level. PMID:27105415
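
    The published method also uses the time trends of incidence rates; as a bare-bones illustration of the week-by-week spatial comparison alone, the sketch below flags a town-week when its observed case count is improbably high given the district-wide rate, using a Poisson tail test from SciPy. The counts, populations, and significance threshold are invented.

      from scipy.stats import poisson

      def weekly_cluster_test(town_cases, town_pop, district_cases, district_pop,
                              alpha=0.001):
          """Flag a town-week whose case count exceeds what the district-wide
          incidence rate would predict."""
          expected = district_cases / district_pop * town_pop
          p_value = poisson.sf(town_cases - 1, expected)   # P(X >= town_cases)
          return p_value < alpha, expected, p_value

      flag, expected, p = weekly_cluster_test(town_cases=14, town_pop=2500,
                                              district_cases=800, district_pop=600000)
      print(f"expected {expected:.1f} cases, p = {p:.2e}, cluster flagged: {flag}")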

  5. Evaluation of a CLEIA automated assay system for the detection of a panel of tumor markers.

    PubMed

    Falzarano, Renato; Viggiani, Valentina; Michienzi, Simona; Longo, Flavia; Tudini, Silvestra; Frati, Luigi; Anastasi, Emanuela

    2013-10-01

    Tumor markers are commonly used to detect a relapse of disease in oncologic patients during follow-up. It is important to evaluate new assay systems for a better and more precise assessment, as a standardized method is currently lacking. The aim of this study was to assess the concordance between an automated chemiluminescent enzyme immunoassay system (LUMIPULSE® G1200) and our reference methods using seven tumor markers. Serum samples from 787 subjects representing a variety of diagnoses, including oncologic, were analyzed using LUMIPULSE® G1200 and our reference methods. Serum values were measured for the following analytes: prostate-specific antigen (PSA), alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA), cancer antigen 125 (CA125), carbohydrate antigen 15-3 (CA15-3), carbohydrate antigen 19-9 (CA19-9), and cytokeratin 19 fragment (CYFRA 21-1). For the determination of CEA, AFP, and PSA, an automatic analyzer based on chemiluminescence was applied as the reference method. To assess CYFRA 21-1, CA125, CA19-9, and CA15-3, an immunoradiometric manual system was employed. Method comparison by Passing-Bablok analysis resulted in slopes ranging from 0.9728 to 1.9089 and correlation coefficients from 0.9335 to 0.9977. The precision of each assay was assessed by testing six serum samples. Each sample was analyzed for all tumor biomarkers in duplicate and in three different runs. The coefficients of variation were less than 6.3 % and 6.2 % for within-run and between-run variation, respectively. Our data suggest an overall good interassay agreement for all markers. The comparison with our reference methods showed good precision and reliability, highlighting the system's usefulness in routine clinical laboratory work. PMID:23775009

  6. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    SciTech Connect

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to conduct algorithm and software development to identify and to differentiate thermally detected targets of interest that would allow automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. We also recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was also captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.

  7. Advanced Automated Solar Filament Detection And Characterization Code: Description, Performance, And Results

    NASA Astrophysics Data System (ADS)

    Bernasconi, Pietro N.; Rust, David M.; Hakim, Daniel

    2005-05-01

    We present a code for automated detection, classification, and tracking of solar filaments in full-disk Hα images that can contribute to Living With a Star science investigations and space weather forecasting. The program can reliably identify filaments and determine their chirality and other relevant parameters, such as filament area, length, and average orientation with respect to the equator. It is also capable of tracking the day-by-day evolution of filaments while they travel across the visible disk. The code was tested by analyzing daily Hα images taken at the Big Bear Solar Observatory from mid-2000 until the beginning of 2005. It identified and established the chirality of thousands of filaments without human intervention. We compared the results with a list of filament properties manually compiled by Pevtsov, Balasubramaniam and Rogers (2003) over the same period of time. The computer list matches Pevtsov's list with 72% accuracy. The code results confirm the hemispheric chirality rule stating that dextral filaments predominate in the north and sinistral ones predominate in the south. The main difference between the two lists is that the code finds significantly more filaments without an identifiable chirality. This may be due to a tendency of human operators to be biased, thereby assigning a chirality in less clear cases, while the code is totally unbiased. We also have found evidence that filaments obeying the chirality rule tend to be larger and last longer than the ones that do not follow the hemispherical rule. Filaments adhering to the hemispheric rule also tend to be more tilted toward the equator between latitudes 10° and 30° than the ones that do not.

  8. Automated detection of unstable glacier flow and a spectrum of speedup behavior in the Alaska Range

    NASA Astrophysics Data System (ADS)

    Herreid, Sam; Truffer, Martin

    2016-01-01

    Surge-type glaciers are loosely defined as glaciers that experience periodic alternations between slow and fast flow regimes. Glaciers from a variety of mountain ranges around the world have been classified as surge type, yet consensus on what defines a glacier as surge type has not always been reached. A common source of dispute is the lack of a succinct and globally applicable delimiter between a surging and a nonsurging glacier. The attempt is often a Boolean classification; however, glacier speedup events can vary significantly with respect to event magnitude, duration, and the fraction of the glacier that participates in the speedup. For this study, we first updated the inventory of glaciers that show flow instabilities in the Alaska Range and then quantified the spectrum of speedup behavior. We developed a new method that automatically detects glaciers with flow instabilities. Our automated results show a 91% success rate when compared to direct observations of speedup events and glaciers that are suspected to display unstable flow based on surface features. Through a combination of observations from the Landsat archive and previously published data, our inventory now contains 36 glaciers that encompass at least one branch exhibiting unstable flow, and we document 53 speedup events that occurred between 1936 and 2014. We then present a universal method for comparing glacier speedup events based on a normalized event magnitude metric. This method provides a consistent way to include and quantify the full spectrum of speedup events and allows for comparisons with glaciers that exhibit clear surge characteristics yet have no observed surge event to date. Our results show a continuous spectrum of speedup magnitudes, from steady flow to clearly surge type, which suggests that qualitative classifications, such as "surge-type" or "pulse-type" behavior, might be too simplistic and should be accompanied by a standardized magnitude metric.

  9. Automated detection of prostate cancer using wavelet transform features of ultrasound RF time series

    NASA Astrophysics Data System (ADS)

    Aboofazeli, Mohammad; Abolmaesumi, Purang; Moradi, Mehdi; Sauerbrei, Eric; Siemens, Robert; Boag, Alexander; Mousavi, Parvin

    2009-02-01

    The aim of this research was to investigate the performance of wavelet transform based features of ultrasound radiofrequency (RF) time series for automated detection of prostate cancer tumors in transrectal ultrasound images. Sequential frames of RF echo signals from 35 extracted prostate specimens were recorded in parallel planes, while the ultrasound probe and the tissue were fixed in position in each imaging plane. The sequence of RF echo signal samples corresponding to a particular spot in the tissue imaging plane constitutes one RF time series. Each region of interest (ROI) of the ultrasound image was represented by three groups of features of its time series, namely wavelet, spectral and fractal features. Wavelet transform approximation and detail sequences of each ROI were averaged and used as wavelet features. The average value of the normalized spectrum in four quarters of the frequency range, along with the intercept and slope of a regression line fitted to the values of the spectrum versus normalized frequency plot, formed six spectral features. The fractal dimension (FD) of the RF time series was computed based on Higuchi's approach. A support vector machine (SVM) classifier was used to classify the ROIs. The results indicate that combining wavelet coefficient based features with previously proposed spectral and fractal features of RF time series data would increase the area under the ROC curve from 93.1% to 95.0%. Furthermore, the accuracy, sensitivity, and specificity increase to 91.7%, 86.6%, and 94.7% from 85.7%, 85.2%, and 86.1%, respectively, obtained using only spectral and fractal features.
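
    The study's exact features (averaged wavelet approximation and detail sequences, six spectral features, and the Higuchi fractal dimension) and its data are not reproducible from this record. The sketch below only illustrates the general recipe of combining wavelet-band statistics and a basic Higuchi fractal dimension with an SVM, using PyWavelets and scikit-learn (both assumed installed) on synthetic series.

      import numpy as np
      import pywt                      # PyWavelets, assumed installed
      from sklearn.svm import SVC

      def wavelet_features(ts, wavelet="db4", level=3):
          """Mean absolute value of each wavelet decomposition band."""
          coeffs = pywt.wavedec(ts, wavelet, level=level)
          return np.array([np.mean(np.abs(c)) for c in coeffs])

      def higuchi_fd(ts, kmax=8):
          """Basic Higuchi fractal dimension of a 1-D series."""
          n = len(ts)
          log_lk = []
          for k in range(1, kmax + 1):
              lengths = []
              for m in range(k):
                  idx = np.arange(m, n, k)
                  if len(idx) < 2:
                      continue
                  diff = np.abs(np.diff(ts[idx])).sum()
                  lengths.append(diff * (n - 1) / ((len(idx) - 1) * k * k))
              log_lk.append(np.log(np.mean(lengths)))
          slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), log_lk, 1)
          return slope

      # Synthetic stand-ins for RF time series from two kinds of ROI.
      rng = np.random.default_rng(9)
      def make_series(shift):
          return rng.normal(0, 1, 128) + shift * np.sin(np.arange(128) / 4.0)

      series = [make_series(s) for s in [0.0] * 40 + [1.5] * 40]
      X = np.array([np.r_[wavelet_features(ts), higuchi_fd(ts)] for ts in series])
      y = np.array([0] * 40 + [1] * 40)

      clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
      print("training accuracy:", clf.score(X, y))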

  10. Detection of Hydroxychloroquine Retinal Toxicity by Automated Perimetry in 60 Rheumatoid Arthritis Patients With Normal Fundoscopic Findings

    PubMed Central

    Motarjemizadeh, Qader; Aidenloo, Naser Samadi; Abbaszadeh, Mohammad

    2016-01-01

    Hydroxychloroquine (HCQ) is an antimalarial drug used extensively in the treatment of autoimmune diseases such as rheumatoid arthritis. Retinal toxicity is the most important side effect of this drug. Even after the drug is discontinued, retinal degeneration from HCQ can continue to progress. Consequently, multiple ophthalmic screening tests have been developed to detect early retinopathy. The aim of the current study was to evaluate the value of the central 2-10 perimetry method in early detection of retinal toxicity. This prospective cross-sectional investigation was carried out on 60 rheumatoid arthritis patients who had been receiving HCQ for at least 6 months and were still on their medication (HCQ intake) at the time of enrollment. An ophthalmologist examined participants using direct and indirect ophthalmoscopy. Visual field testing with an automated perimetry technique (central 2-10 perimetry with a red target) was performed on all included subjects twice at a 6-month interval: the first at the time of enrollment and the second 6 months later. Males and females did not show any significant difference in terms of age, duration of therapy, daily and cumulative HCQ dose, anterior or posterior segment abnormalities, hypertension, body mass index, and best corrected visual acuity. The anterior segment was abnormal in 9 individuals, including 3 subjects with macular pigmentary changes, 4 with cataract, and 2 with dry eyes. Moreover, 12 subjects had retinal pigment epithelium (RPE) changes in their posterior segments. After 6 months, depressive changes appeared in 12 subjects. Additionally, HCQ therapy significantly worsened the perimetric results of 5 (55.6%) patients with an abnormal anterior segment. A similar trend was observed in the perimetric results of 6 (50.0%) subjects with abnormal posterior segments (P=0.009). The daily dose of HCQ (P=0.035) as well as the cumulative dose of hydroxychloroquine (P=0.021) displayed statistically significant associations with

  11. Detection of Hydroxychloroquine Retinal Toxicity by Automated Perimetry in 60 Rheumatoid Arthritis Patients with Normal Fundoscopic Findings.

    PubMed

    Motarjemizadeh, Qader; Aidenloo, Naser Samadi; Abbaszadeh, Mohammad

    2016-03-01

    Hydroxychloroquine (HCQ) is an antimalarial drug used extensively in the treatment of autoimmune diseases such as rheumatoid arthritis. Retinal toxicity is the most important side effect of this drug. Even after the drug is discontinued, retinal degeneration from HCQ can continue to progress. Consequently, multiple ophthalmic screening tests have been developed to detect early retinopathy. The aim of the current study was to evaluate the value of the central 2-10 perimetry method in early detection of retinal toxicity. This prospective cross-sectional investigation was carried out on 60 rheumatoid arthritis patients who had been receiving HCQ for at least 6 months and were still on their medication (HCQ intake) at the time of enrollment. An ophthalmologist examined participants using direct and indirect ophthalmoscopy. Visual field testing with an automated perimetry technique (central 2-10 perimetry with a red target) was performed on all included subjects twice at a 6-month interval: the first at the time of enrollment and the second 6 months later. Males and females did not show any significant difference in terms of age, duration of therapy, daily and cumulative HCQ dose, anterior or posterior segment abnormalities, hypertension, body mass index, and best corrected visual acuity. The anterior segment was abnormal in 9 individuals, including 3 subjects with macular pigmentary changes, 4 with cataract, and 2 with dry eyes. Moreover, 12 subjects had retinal pigment epithelium (RPE) changes in their posterior segments. After 6 months, depressive changes appeared in 12 subjects. Additionally, HCQ therapy significantly worsened the perimetric results of 5 (55.6%) patients with an abnormal anterior segment. A similar trend was observed in the perimetric results of 6 (50.0%) subjects with abnormal posterior segments (P=0.009). The daily dose of HCQ (P=0.035) as well as the cumulative dose of hydroxychloroquine (P=0.021) displayed statistically significant associations with

  12. Monitoring seasonal and diurnal changes in photosynthetic pigments with automated PRI and NDVI sensors

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Kovalchuck, O.; Wong, C. Y. S.; Harris, A.; Garrity, S. R.

    2015-07-01

    The vegetation indices normalized difference vegetation index (NDVI) and photochemical reflectance index (PRI) provide indicators of pigmentation and photosynthetic activity that can be used to model photosynthesis from remote sensing with the light-use-efficiency model. To help develop and validate this approach, reliable proximal NDVI and PRI sensors have been needed. We tested new NDVI and PRI sensors, "spectral reflectance sensors" (SRS sensors), recently developed by Decagon Devices, during spring activation of photosynthetic activity in evergreen and deciduous stands. We also evaluated two methods of sensor cross-calibration - one that considered sky conditions (cloud cover) at midday only, and another that also considered diurnal sun angle effects. Cross-calibration clearly affected sensor agreement with independent measurements, with the best method dependent upon the study aim and time frame (seasonal vs. diurnal). The seasonal patterns of NDVI and PRI differed for evergreen and deciduous species, demonstrating the complementary nature of these two indices. Over the spring season, PRI was most strongly influenced by changing chlorophyll : carotenoid pool sizes, while over the diurnal timescale, PRI was most affected by the xanthophyll cycle epoxidation state. This finding demonstrates that the SRS PRI sensors can resolve different processes affecting PRI over different timescales. The advent of small, inexpensive, automated PRI and NDVI sensors offers new ways to explore environmental and physiological constraints on photosynthesis, and may be particularly well suited for use at flux tower sites. Wider application of automated sensors could lead to improved integration of flux and remote sensing approaches for studying photosynthetic carbon uptake, and could help define the concept of contrasting vegetation optical types.
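
    For reference, the two indices are simple normalized band ratios; a minimal sketch is shown below. The band centres assumed (531 nm and 570 nm for PRI; red near 630 nm and NIR near 800 nm for NDVI) are typical values and may not match the SRS sensor bands exactly, and the example reflectances are made up.

```python
# Standard normalized-difference definitions of NDVI and PRI.
def ndvi(r_nir: float, r_red: float) -> float:
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (r_nir - r_red) / (r_nir + r_red)

def pri(r_531: float, r_570: float) -> float:
    """Photochemical reflectance index: (R531 - R570) / (R531 + R570)."""
    return (r_531 - r_570) / (r_531 + r_570)

# Example reflectances (made up): a dense green canopy at midday.
print(ndvi(0.45, 0.05))    # ~0.8
print(pri(0.048, 0.050))   # slightly negative under mild xanthophyll de-epoxidation
```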

  13. Monitoring seasonal and diurnal changes in photosynthetic pigments with automated PRI and NDVI sensors

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Kovalchuk, O.; Wong, C. Y. S.; Harris, A.; Garrity, S. R.

    2015-02-01

    The vegetation indices normalized difference vegetation index (NDVI) and photochemical reflectance index (PRI) provide indicators of pigmentation and photosynthetic activity that can be used to model photosynthesis from remote sensing with the light-use efficiency model. To help develop and validate this approach, reliable proximal NDVI and PRI sensors have been needed. We tested new NDVI and PRI sensors, "SRS" sensors recently developed by Decagon Devices, during spring activation of photosynthetic activity in evergreen and deciduous stands. We also evaluated two methods of sensor cross-calibration, one that considered sky conditions (cloud cover) at midday only, and the other that also considered diurnal sun angle effects. Cross-calibration clearly affected sensor agreement with independent measurements, with the best method dependent upon the study aim and time frame (seasonal vs. diurnal). The seasonal patterns of NDVI and PRI differed for evergreen and deciduous species, demonstrating the complementary nature of these two indices. Over the spring season, PRI was most strongly influenced by changing chlorophyll : carotenoid pool sizes, while over the diurnal time scale PRI was most affected by the xanthophyll cycle epoxidation state. This finding demonstrates that the SRS PRI sensors can resolve different processes affecting PRI over different time scales. The advent of small, inexpensive, automated PRI and NDVI sensors offers new ways to explore environmental and physiological constraints on photosynthesis, and may be particularly well-suited for use at flux tower sites. Wider application of automated sensors could lead to improved integration of flux and remote sensing approaches to studying photosynthetic carbon uptake, and could help define the concept of contrasting vegetation optical types.

  14. Evaluation of change detection techniques for monitoring coastal zone environments

    NASA Technical Reports Server (NTRS)

    Weismiller, R. A.; Kristof, S. J.; Scholz, D. K.; Anuta, P. E.; Momin, S. M.

    1977-01-01

    Development of satisfactory techniques for detecting change in coastal zone environments is required before operational monitoring procedures can be established. In an effort to meet this need, a study was directed toward developing and evaluating different types of change detection techniques, based upon computer-aided analysis of LANDSAT multispectral scanner (MSS) data, to monitor these environments. The Matagorda Bay estuarine system along the Texas coast was selected as the study area. Four change detection techniques were designed and implemented for evaluation: (1) post-classification comparison change detection, (2) delta data change detection, (3) spectral/temporal change classification, and (4) layered spectral/temporal change classification. Each of the four techniques was used to analyze a LANDSAT MSS temporal data set to detect areas of change in the Matagorda Bay region.
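
    As a concrete illustration of technique (1), post-classification comparison reduces to a per-pixel comparison of two independently classified maps; a minimal sketch with made-up class codes and arrays is shown below (the other three techniques operate on the raw or stacked spectral data instead).

```python
# Post-classification comparison: classify each date independently, then compare
# labels pixel by pixel. Class codes and arrays are placeholders.
import numpy as np

classified_t1 = np.array([[1, 1, 2],
                          [3, 3, 2],
                          [3, 3, 3]])   # e.g. 1=water, 2=marsh, 3=upland
classified_t2 = np.array([[1, 2, 2],
                          [3, 2, 2],
                          [3, 3, 1]])

changed = classified_t1 != classified_t2                    # boolean change mask
transitions = list(zip(classified_t1[changed].tolist(),
                       classified_t2[changed].tolist()))    # from -> to class pairs
print(changed.sum(), "changed pixels:", transitions)
```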

  15. Flow cytometric-membrane potential detection of sodium channel active marine toxins: application to ciguatoxins in fish muscle and feasibility of automating saxitoxin detection.

    PubMed

    Manger, Ronald; Woodle, Doug; Berger, Andrew; Dickey, Robert W; Jester, Edward; Yasumoto, Takeshi; Lewis, Richard; Hawryluk, Timothy; Hungerford, James

    2014-01-01

    Ciguatoxins are potent neurotoxins with a significant public health impact. Cytotoxicity assays have provided the most sensitive means of detecting ciguatoxin-like activity without reliance on mouse bioassays and have been invaluable in studying outbreaks. An improvement of these cell-based assays is presented here in which rapid flow cytometric detection of ciguatoxins and saxitoxins is demonstrated using fluorescent voltage-sensitive dyes. A depolarization response can be detected directly due to ciguatoxin alone; however, an approximately 1000-fold increase in sensitivity is observed in the presence of veratridine. These results demonstrate that flow cytometric assessment of ciguatoxins is possible at levels approaching the trace detection limits of our earlier cytotoxicity assays, but with a significantly reduced analysis time. Preliminary results are also presented for detection of brevetoxins and for automation and throughput improvements to a previously described method for detecting saxitoxins in shellfish extracts. PMID:24830140

  16. Feasibility of fully automated detection of fiducial markers implanted into the prostate using electronic portal imaging: A comparison of methods

    SciTech Connect

    Harris, Emma J., E-mail: eharris@icr.ac.uk; McNair, Helen A.; Evans, Phillip M.

    2006-11-15

    Purpose: To investigate the feasibility of fully automated detection of fiducial markers implanted into the prostate using portal images acquired with an electronic portal imaging device. Methods and Materials: We have made a direct comparison of 4 different methods (2 template matching-based methods, a method incorporating attenuation and constellation analyses, and a cross-correlation method) that have been published in the literature for the automatic detection of fiducial markers. The cross-correlation technique requires a priori information from the portal images; therefore, the technique is not fully automated for the first treatment fraction. Images of 7 patients implanted with gold fiducial markers (8 mm in length and 1 mm in diameter) were acquired before treatment (set-up images) and during treatment (movie images) using 1 MU and 15 MU per image, respectively. Images included: 75 anterior (AP) and 69 lateral (LAT) set-up images and 51 AP and 83 LAT movie images. Using the different methods described in the literature, marker positions were automatically identified. Results: The method based upon cross-correlation techniques gave the highest detection success rates of 99% (AP) and 83% (LAT) for set-up (1 MU) images. The other methods gave detection success rates of less than 91% (AP) and 42% (LAT) for set-up images. The amount of a priori information used, and how it affects the way the techniques are implemented, is discussed. Conclusions: Fully automated marker detection in set-up images for the first treatment fraction is unachievable using these methods; cross-correlation is the best technique for automatic detection on subsequent radiotherapy treatment fractions.
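
    A generic normalized cross-correlation template match, the core operation behind the best-performing method above, can be sketched as follows. This is an illustration of the idea only, not the authors' implementation; the image and template below are synthetic stand-ins for an EPID image and a marker template, and the acceptance threshold is arbitrary.

```python
import cv2
import numpy as np

# Synthetic stand-ins for a portal image and a marker template; real EPID images
# would be loaded instead. The marker is modelled as a short bright bar.
rng = np.random.default_rng(5)
portal = rng.normal(100.0, 10.0, (256, 256)).astype(np.float32)
template = np.zeros((11, 5), np.float32)
template[1:10, 1:4] = 120.0                  # bright bar on dark background
portal[120:131, 60:65] += template           # embed the marker in the image

# Normalized cross-correlation surface; the peak marks the most likely marker position.
response = cv2.matchTemplate(portal, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.6:                            # acceptance threshold chosen for illustration
    x, y = max_loc
    print(f"marker centred near ({x + 2}, {y + 5}), score {max_val:.2f}")
else:
    print("no confident detection in this image")
```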

  17. Automated Non-invasive Video-Microscopy of Oyster Spat Heart Rate during Acute Temperature Change: Impact of Acclimation Temperature

    PubMed Central

    Domnik, Nicolle J.; Polymeropoulos, Elias T.; Elliott, Nicholas G.; Frappell, Peter B.; Fisher, John T.

    2016-01-01

    We developed an automated, non-invasive method to detect real-time cardiac contraction in post-larval (1.1–1.7 mm length), juvenile oysters (i.e., oyster spat) via a fiber-optic trans-illumination system. The system is housed within a temperature-controlled chamber and video microscopy imaging of the heart was coupled with video edge-detection to measure cardiac contraction, inter-beat interval, and heart rate (HR). We used the method to address the hypothesis that cool acclimation (10°C vs. 22°C—Ta10 or Ta22, respectively; each n = 8) would preserve cardiac phenotype (assessed via HR variability, HRV analysis and maintained cardiac activity) during acute temperature changes. The temperature ramp (TR) protocol comprised 2°C steps (10 min/experimental temperature, Texp) from 22°C to 10°C to 22°C. HR was related to Texp in both acclimation groups. Spat became asystolic at low temperatures, particularly Ta22 spat (Ta22: 8/8 vs. Ta10: 3/8 asystolic at Texp = 10°C). The rate of HR decrease during cooling was less in Ta10 vs. Ta22 spat when asystole was included in analysis (P = 0.026). Time-domain HRV was inversely related to temperature and elevated in Ta10 vs. Ta22 spat (P < 0.001), whereas a lack of defined peaks in spectral density precluded frequency-domain analysis. Application of the method during an acute cooling challenge revealed that cool temperature acclimation preserved active cardiac contraction in oyster spat and increased time-domain HRV responses, whereas warm acclimation enhanced asystole. These physiologic changes highlight the need for studies of mechanisms, and have translational potential for oyster aquaculture practices. PMID:27445833
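
    The time-domain HRV statistics can be illustrated with a short sketch. The abstract does not state which metrics were used, so the SDNN and RMSSD shown here are common illustrative choices, and the inter-beat intervals are made up.

```python
# Common time-domain HRV statistics computed from a series of inter-beat intervals.
import numpy as np

def time_domain_hrv(ibi_s):
    """ibi_s: inter-beat intervals in seconds (e.g. from video edge detection)."""
    ibi_s = np.asarray(ibi_s, dtype=float)
    sdnn = ibi_s.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi_s) ** 2))  # beat-to-beat variability
    hr_bpm = 60.0 / ibi_s.mean()
    return {"HR_bpm": hr_bpm, "SDNN_s": sdnn, "RMSSD_s": rmssd}

print(time_domain_hrv([2.1, 2.3, 2.0, 2.4, 2.2]))  # made-up oyster-spat intervals
```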

  18. Automated Non-invasive Video-Microscopy of Oyster Spat Heart Rate during Acute Temperature Change: Impact of Acclimation Temperature.

    PubMed

    Domnik, Nicolle J; Polymeropoulos, Elias T; Elliott, Nicholas G; Frappell, Peter B; Fisher, John T

    2016-01-01

    We developed an automated, non-invasive method to detect real-time cardiac contraction in post-larval (1.1-1.7 mm length), juvenile oysters (i.e., oyster spat) via a fiber-optic trans-illumination system. The system is housed within a temperature-controlled chamber and video microscopy imaging of the heart was coupled with video edge-detection to measure cardiac contraction, inter-beat interval, and heart rate (HR). We used the method to address the hypothesis that cool acclimation (10°C vs. 22°C-Ta10 or Ta22, respectively; each n = 8) would preserve cardiac phenotype (assessed via HR variability, HRV analysis and maintained cardiac activity) during acute temperature changes. The temperature ramp (TR) protocol comprised 2°C steps (10 min/experimental temperature, Texp) from 22°C to 10°C to 22°C. HR was related to Texp in both acclimation groups. Spat became asystolic at low temperatures, particularly Ta22 spat (Ta22: 8/8 vs. Ta10: 3/8 asystolic at Texp = 10°C). The rate of HR decrease during cooling was less in Ta10 vs. Ta22 spat when asystole was included in analysis (P = 0.026). Time-domain HRV was inversely related to temperature and elevated in Ta10 vs. Ta22 spat (P < 0.001), whereas a lack of defined peaks in spectral density precluded frequency-domain analysis. Application of the method during an acute cooling challenge revealed that cool temperature acclimation preserved active cardiac contraction in oyster spat and increased time-domain HRV responses, whereas warm acclimation enhanced asystole. These physiologic changes highlight the need for studies of mechanisms, and have translational potential for oyster aquaculture practices. PMID:27445833

  19. Treehuggers: Wireless Sensor Networks for Automated Measurement and Reporting of Changes in Tree Diameter

    NASA Astrophysics Data System (ADS)

    DeLucia, E. H.; Mies, T. A.; Anderson-Teixeira, K. J.; Bohleber, A. P.; Herrmann, V.

    2014-12-01

    Ground-based measurements of changes in tree diameter and subsequent calculation of carbon storage provide validation of indirect estimates of forest productivity from remote sensing platforms, and measurements made with high temporal resolution provide critical information about the responsiveness of tree growth to variations in important physical drivers (e.g. temperature and water availability). We have developed an environmentally robust instrument for automated measurement of expansion and contraction in tree diameter that can be deployed in remote locations (TreeHuggers; TH). TH uses a membrane potentiometer to measure changes in circumference with resolution ≤ 6 mm at user-selected intervals (≥ 1 min). Simultaneous measurement of temperature is used to correct for the thermal properties of the stainless steel band. Data are stored on micro SD cards and transmitted tree-to-tree to a base station. Preliminary measurement of beech trees shows the precise initiation of growth and the emergence of diel changes in stem diameter associated with sap flow. Because of their low cost and on-board data logging and communication packages, TH will greatly increase the capacity of the scientific community and private sectors to monitor tree growth and carbon storage. Possible applications include deploying TH in the footprint of eddy covariance sites to help interpret drivers affecting net ecosystem exchange and evapotranspiration. A large scale implementation of TH will contribute to our ability to forecast changes in the carbon sink strength of forests across environmental gradients and biotic disturbances, and they could prove useful in assessing changes in forest stocks as part of evaluating carbon offsets purchased by commercial entities.
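
    The temperature correction mentioned above amounts to subtracting the thermal expansion of the band itself; a hedged sketch is shown below. The expansion coefficient and the example values are illustrative, not the instrument's calibration.

```python
# Sketch of a linear thermal-expansion correction for a stainless-steel band
# dendrometer; coefficient and example values are illustrative only.
ALPHA_STEEL = 17e-6      # 1/K, typical for austenitic stainless steel

def corrected_circumference_change(raw_change_mm, band_length_mm, temp_c, ref_temp_c):
    # Subtract the apparent change caused by expansion/contraction of the band itself.
    thermal_change = ALPHA_STEEL * band_length_mm * (temp_c - ref_temp_c)
    return raw_change_mm - thermal_change

print(corrected_circumference_change(0.12, 900.0, 30.0, 20.0))  # about -0.03 mm after correction
```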

  20. Detection of temporal changes in earthquake rates

    NASA Astrophysics Data System (ADS)

    Touati, S.

    2012-12-01

    Many statistical analyses of earthquake rates and time-dependent forecasting of future rates involve the detection of changes in the basic rate of events, independent of the fluctuations caused by aftershock sequences. We examine some of the statistical techniques for inferring these changes, using both real and synthetic earthquake data to check the statistical significance of these inferences. One common method is to use the Akaike Information Criterion (AIC) to choose between a single model and a double model with a changepoint; this criterion evaluates the strength of the fit and incorporates a penalty for the extra parameters. We test this method on many realisations of the ETAS model, with and without changepoints present, to see how often it chooses the correct model. A more rigorous method is to calculate the Bayesian evidence, or marginal likelihood, for each model and then compare these. The evidence is essentially the likelihood of the model integrated over the whole of the parameter space, giving a measure of how likely the data are under that model. It does not rely on estimation of best-fit parameters, making it a better comparator than the AIC; Occam's razor also arises naturally in this process because more complex models tend to be able to explain a larger range of observations, and therefore the relative likelihood of any particular observation will be smaller than for a simpler model. Evidence can be calculated using Markov Chain Monte Carlo techniques. We compare these two approaches on synthetic data. We also look at the 1997-98 Colfiorito sequence in Umbria-Marche, Italy, using maximum likelihood to fit the ETAS model and then simulating the ETAS model to create synthetic versions of the catalogue for comparison. We simulate using ensembles of parameter values sampled from the posterior for each parameter, with the largest events artificially inserted, to compare the resultant event rates, inter-event time distributions and other
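
    The AIC step can be illustrated with a deliberately simplified sketch that compares a constant-rate Poisson model with a two-rate changepoint model on binned event counts; it ignores the ETAS/aftershock part of the analysis, and the counts and bin scheme are synthetic.

```python
# Simplified AIC comparison: constant-rate Poisson model vs. two-rate model with
# one changepoint, applied to event counts in equal time bins.
import numpy as np

def poisson_loglik(counts, rate):
    counts = np.asarray(counts, dtype=float)
    return np.sum(counts * np.log(rate) - rate)   # log(k!) terms cancel when comparing models

def aic_single(counts):
    lam = np.mean(counts)
    return 2 * 1 - 2 * poisson_loglik(counts, lam)

def aic_changepoint(counts):
    best = np.inf
    for cp in range(1, len(counts)):
        ll = (poisson_loglik(counts[:cp], np.mean(counts[:cp]) + 1e-12)
              + poisson_loglik(counts[cp:], np.mean(counts[cp:]) + 1e-12))
        best = min(best, 2 * 3 - 2 * ll)          # parameters: two rates + changepoint
    return best

counts = [3, 4, 2, 5, 3, 9, 11, 10, 12, 9]        # synthetic bin counts with a rate step
print("single-rate AIC:", aic_single(counts))
print("changepoint AIC:", aic_changepoint(counts))
```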

  1. Rational Manual and Automated Scoring Thresholds for the Immunohistochemical Detection of TP53 Missense Mutations in Human Breast Carcinomas.

    PubMed

    Taylor, Nicholas J; Nikolaishvili-Feinberg, Nana; Midkiff, Bentley R; Conway, Kathleen; Millikan, Robert C; Geradts, Joseph

    2016-07-01

    Missense mutations in TP53 are common in human breast cancer, have been associated with worse prognosis, and may predict therapy effect. TP53 missense mutations are associated with aberrant accumulation of p53 protein in tumor cell nuclei. Previous studies have used relatively arbitrary cutoffs to characterize breast tumors as positive for p53 staining by immunohistochemical assays. This study aimed to objectively determine optimal thresholds for p53 positivity by manual and automated scoring methods using whole tissue sections from the Carolina Breast Cancer Study. p53-immunostained slides were available for 564 breast tumors previously assayed for TP53 mutations. Average nuclear p53 staining intensity was manually scored as negative, borderline, weak, moderate, or strong and percentage of positive tumor cells was estimated. Automated p53 signal intensity was measured using the Aperio nuclear v9 algorithm combined with the Genie histology pattern recognition tool and tuned to achieve optimal nuclear segmentation. Receiver operating characteristic curve analysis was performed to determine optimal cutoffs for average staining intensity and percent cells positive to distinguish between tumors with and without a missense mutation. Receiver operating characteristic curve analysis demonstrated a threshold of moderate average nuclear staining intensity as a good surrogate for TP53 missense mutations in both manual (area under the curve=0.87) and automated (area under the curve=0.84) scoring systems. Both manual and automated immunohistochemical scoring methods predicted missense mutations in breast carcinomas with high accuracy. Validation of the automated intensity scoring threshold suggests a role for such algorithms in detecting TP53 missense mutations in high throughput studies. PMID:26200835
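
    The receiver operating characteristic step can be sketched as follows; the synthetic scores and the use of Youden's J to pick the operating point are illustrative assumptions, since the abstract does not state the exact cutoff criterion.

```python
# Illustrative ROC-based cutoff selection for a continuous staining-intensity score
# against mutation status, using synthetic placeholder data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
mutation = rng.integers(0, 2, 200)                          # 1 = missense mutation
intensity = mutation * 1.2 + rng.normal(0, 1, 200)          # synthetic p53 scores

fpr, tpr, thresholds = roc_curve(mutation, intensity)
auc = roc_auc_score(mutation, intensity)
best = np.argmax(tpr - fpr)                                 # Youden's J = sens + spec - 1
print(f"AUC = {auc:.2f}, optimal intensity cutoff = {thresholds[best]:.2f}")
```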

  2. Using a forehead reflectance pulse oximeter to detect changes in sympathetic tone.

    PubMed

    Wendelken, Suzanne M; McGrath, Susan P; Akay, Metin; Blike, George T

    2004-01-01

    The extreme conditions of combat and multi-casualty rescue often make field triage difficult and put the medic or first responder at risk. In an effort to improve field triage, we have developed an automated remote triage system called ARTEMIS (automated remote triage and emergency management information system) for use in the battlefield or disaster zone. Common to field injuries is a sudden change in arterial pressure resulting from massive blood loss or shock. In an effort to stabilize the arterial pressure, the sympathetic system is strongly activated and sympathetic tone is increased. This preliminary research seeks to empirically demonstrate that a forehead reflectance pulse oximeter is a viable sensor for detecting sudden changes in sympathetic tone. We performed the classic supine-standing experiment and continuously collected the raw waveform, the photoplethysmogram (PPG), using a forehead reflectance pulse oximeter. The resulting waveform was processed in Matlab using various spectral analysis techniques (FFT and AR). Our preliminary results show that a relative ratio analysis (low-frequency power/high-frequency power) of both the raw PPG signal and its derived pulse statistics (height, beat-to-beat interval) is a useful technique for detecting changes in sympathetic tone resulting from positional change. PMID:17271676
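
    A minimal sketch of the low-frequency/high-frequency power-ratio computation is shown below, assuming the conventional HRV bands (0.04-0.15 Hz and 0.15-0.40 Hz) and an evenly resampled series; the paper's exact bands and preprocessing are not reproduced.

```python
# LF/HF power ratio from a beat-derived series via Welch's power spectral density.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(signal, fs):
    f, pxx = welch(signal, fs=fs, nperseg=min(256, len(signal)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf / hf

# Example: 5 minutes of an evenly resampled beat-interval series at 4 Hz (synthetic).
fs = 4.0
t = np.arange(0, 300, 1 / fs)
series = 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.25 * t)
print(lf_hf_ratio(series, fs))   # > 1 here because the LF component is stronger
```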

  3. 2006 Automation Survey: The Systems Are Changing. But School Libraries Aren't

    ERIC Educational Resources Information Center

    Fuller, Daniel

    2006-01-01

    This article presents the findings of the 2006 School Library Journal-San Jose State University Automation Survey. The study takes a close look at the systems that media specialists are using, how they are using them, and what librarians want from their future automation programs. The findings reveal that while respondents were satisfied with…

  4. Automated detection of kinks from blood vessels for optic cup segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Wong, D. W. K.; Liu, J.; Lim, J. H.; Li, H.; Wong, T. Y.

    2009-02-01

    The accurate localization of the optic cup in retinal images is important to assess the cup-to-disc ratio (CDR) for glaucoma screening and management. Glaucoma is physiologically assessed by the increased excavation of the optic cup within the optic nerve head, also known as the optic disc. The CDR is thus an important indicator of the risk and severity of glaucoma. In this paper, we propose a method of determining the cup boundary in non-stereographic retinal images by the automatic detection of a morphological feature within the optic disc known as kinks. Kinks are defined as the bending of small vessels as they traverse from the disc to the cup, providing physiological validation for the cup boundary. To detect kinks, localized patches are first generated from a preliminary cup boundary obtained via level set. Features obtained using edge detection and the wavelet transform are combined using a statistical rule to identify likely vessel edges. The kinks are then obtained automatically by analyzing the detected vessel edges for angular changes, and these kinks are subsequently used to obtain the cup boundary. A set of retinal images from the Singapore Eye Research Institute was obtained to assess the performance of the method, with each image clinically graded for the CDR. In experiments, when kinks were used, the error on the CDR was reduced to less than 0.1 CDR units relative to the clinical CDR, which is within the intra-observer variability of 0.2 CDR units.
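
    The final step, flagging kinks as points of large angular change along a traced vessel edge, can be sketched generically as below; the turning-angle threshold and the example polyline are arbitrary, and this is not the authors' exact rule.

```python
# Generic kink detection along a traced vessel polyline: compute the turning angle
# between successive segments and flag vertices where it exceeds a threshold.
import numpy as np

def find_kinks(points, angle_thresh_deg=30.0):
    pts = np.asarray(points, dtype=float)
    v = np.diff(pts, axis=0)
    ang = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
    turn = np.abs(np.diff(ang))
    turn = np.minimum(turn, 360.0 - turn)              # wrap to [0, 180]
    return np.where(turn > angle_thresh_deg)[0] + 1    # indices of kink vertices

vessel = [(0, 0), (1, 0), (2, 0.1), (3, 0.2), (3.3, 1.2), (3.4, 2.3)]  # sharp bend at vertex 3
print(find_kinks(vessel))
```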

  5. Texture analysis and classification in coherent anti-Stokes Raman scattering (CARS) microscopy images for automated detection of skin cancer.

    PubMed

    Legesse, Fisseha Bekele; Medyukhina, Anna; Heuke, Sandro; Popp, Jürgen

    2015-07-01

    Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful tool for fast label-free tissue imaging, which is promising for early medical diagnostics. To facilitate the diagnostic process, automatic image analysis algorithms, which are capable of extracting relevant features from the image content, are needed. In this contribution we perform an automated classification of healthy and tumor areas in CARS images of basal cell carcinoma (BCC) skin samples. The classification is based on extraction of texture features from image regions and subsequent classification of these regions into healthy and cancerous with a perceptron algorithm. The developed approach is capable of an accurate classification of texture types with high sensitivity and specificity, which is an important step towards an automated tumor detection procedure. PMID:25797604
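
    A minimal version of the texture-plus-perceptron idea is sketched below with simple statistical patch features; the paper's actual texture descriptors are not specified in the abstract, and the image patches here are synthetic.

```python
# Patch-wise texture features (mean, standard deviation, gradient energy) classified
# with a perceptron; features and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Perceptron

def patch_features(patch):
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(), np.mean(gx ** 2 + gy ** 2)])

rng = np.random.default_rng(2)
healthy = [rng.normal(100, 5, (32, 32)) for _ in range(30)]   # smoother texture
tumor = [rng.normal(100, 25, (32, 32)) for _ in range(30)]    # rougher texture
X = np.vstack([patch_features(p) for p in healthy + tumor])
y = np.array([0] * 30 + [1] * 30)

clf = Perceptron(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```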

  6. Automated Detection of Health Websites' HONcode Conformity: Can N-gram Tokenization Replace Stemming?

    PubMed

    Boyer, Célia; Dolamic, Ljiljana; Grabar, Natalia

    2015-01-01

    The authors evaluated supervised automatic classification algorithms for determining health-related web-page compliance with individual HONcode criteria of conduct, using varying-length character n-gram vectors to represent healthcare web page documents. The training/testing collection comprised web page fragments extracted by HONcode experts during the manual certification process. The authors compared the automated classification performance of n-gram tokenization to that of document words and Porter-stemmed document words, using a Naive Bayes classifier and DF (document frequency) dimensionality reduction metrics. The study attempted to determine whether the automated, language-independent approach might safely replace word-based classification. Using 5-grams as document features, the authors also compared the baseline DF reduction function to chi-square and Z-score dimensionality reductions. Overall study results indicate that n-gram tokenization provides a potentially viable alternative to document word stemming. PMID:26262363
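
    The kind of pipeline described, character 5-gram features with dimensionality reduction and a Naive Bayes classifier, can be sketched with scikit-learn as below; the web-page fragments, labels, and the chi-square selector settings are placeholders rather than the HONcode training collection.

```python
# Character 5-gram vectorization + feature selection + Naive Bayes classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

fragments = ["This site is maintained by Dr. A. Example, MD.",
             "Last updated on 3 March 2015.",
             "Our advertising policy is described here.",
             "Content authored by the editorial board."]
labels = ["authority", "date", "advertising", "authority"]   # HONcode-like criteria

pipeline = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(5, 5)),    # character 5-grams
    SelectKBest(chi2, k=20),                                  # dimensionality reduction
    MultinomialNB(),
)
pipeline.fit(fragments, labels)
print(pipeline.predict(["Page written and reviewed by certified physicians."]))
```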

  7. Detecting holocene changes in thermohaline circulation.

    PubMed

    Keigwin, L D; Boyle, E A

    2000-02-15

    Throughout the last glacial cycle, reorganizations of deep ocean water masses were coincident with rapid millennial-scale changes in climate. Climate changes have been less severe during the present interglacial, but evidence for concurrent deep ocean circulation change is ambiguous. PMID:10677463

  8. Method of detecting tissue contact for fiber-optic probes to automate data acquisition without hardware modification

    PubMed Central

    Ruderman, Sarah; Mueller, Scott; Gomes, Andrew; Rogers, Jeremy; Backman, Vadim

    2013-01-01

    We present a novel algorithm to detect contact with tissue and automate data acquisition. Contact fiber-optic probe systems are useful in noninvasive applications and real-time analysis of tissue properties. However, applications of these technologies are limited to procedures in which visualization is available to ensure probe-tissue contact, and individual user technique can introduce variability. The software design exploits the system previously designed by our group, providing an optical method to automatically detect tissue contact and trigger acquisition. This method detected tissue contact with 91% accuracy, detected removal from tissue with 83% accuracy, and reduced user variability by more than 8%. Without the need for additional hardware, this software algorithm can easily integrate into any fiber-optic system and expands applications where visualization is difficult. PMID:24010002

  9. Automated Flaw Detection Scheme For Cast Austenitic Stainless Steel Weld Specimens Using Hilbert Huang Transform Of Ultrasonic Phased Array Data

    SciTech Connect

    Khan, T.; Majumdar, Shantanu; Udpa, L.; Ramuhalli, Pradeep; Crawford, Susan L.; Diaz, Aaron A.; Anderson, Michael T.

    2012-01-01

    The objective of this work is to develop processing algorithms to detect and localize flaws using NDE ultrasonic data. Data were collected using cast austenitic stainless steel (CASS) weld specimens on loan from the U.S. nuclear power industry’s Pressurized Water Reactor Owners Group (PWROG) specimen set. Each specimen consists of a centrifugally cast stainless steel (CCSS) pipe section welded to a statically cast (SCSS) or wrought (WRSS) section. The paper presents a novel automated flaw detection and localization scheme using low-frequency ultrasonic phased array inspection signals in the weld and heat-affected zone of the base materials. The major steps of the overall scheme are preprocessing and region of interest (ROI) detection, followed by the Hilbert Huang transform (HHT) of A-scans in the detected ROIs. The HHT provides a time-frequency-energy distribution for each ROI. The accumulation of energy in a particular frequency band is used as a classification feature for each ROI.
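
    The band-energy feature can be illustrated as follows, assuming the intrinsic mode functions (IMFs) have already been obtained by empirical mode decomposition (not shown); the sampling rate, band edges, and synthetic IMFs are placeholders.

```python
# Given precomputed IMFs, use the Hilbert transform to obtain instantaneous amplitude
# and frequency, then accumulate the energy falling inside a chosen frequency band.
import numpy as np
from scipy.signal import hilbert

def band_energy(imfs, fs, f_lo, f_hi):
    total = 0.0
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        mask = (inst_freq >= f_lo) & (inst_freq <= f_hi)
        total += np.sum(amp[1:][mask] ** 2)      # energy of samples inside the band
    return total

# Synthetic example: two "IMFs" at 0.5 MHz and 2 MHz sampled at 20 MHz.
fs = 20e6
t = np.arange(0, 5e-6, 1 / fs)
imfs = [np.sin(2 * np.pi * 0.5e6 * t), 0.3 * np.sin(2 * np.pi * 2e6 * t)]
print(band_energy(imfs, fs, 1.5e6, 2.5e6))       # dominated by the 2 MHz component
```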

  10. Reliability and accuracy of an automated tracking algorithm to measure controlled passive and active muscle fascicle length changes from ultrasound.

    PubMed

    Gillett, Jarred G; Barrett, Rod S; Lichtwark, Glen A

    2013-01-01

    Manual tracking of muscle fascicle length changes from ultrasound images is a subjective and time-consuming process. The purpose of this study was to assess the repeatability and accuracy of an automated algorithm for tracking fascicle length changes in the medial gastrocnemius (MG) muscle during passive length changes and active contractions (isometric, concentric and eccentric) performed on a dynamometer. The freely available, automated tracking algorithm was based on the Lucas-Kanade optical flow algorithm with an affine optic flow extension, which accounts for image translation, dilation, rotation and shear between consecutive frames of an image sequence. Automated tracking was performed by three experienced assessors, and within- and between-examiner repeatability was computed using the coefficient of multiple determination (CMD). Fascicle tracking data were also compared with manual digitisation of the same image sequences, and the level of agreement between the two methods was calculated using the coefficient of multiple correlation (CMC). The CMDs across all test conditions ranged from 0.50 to 0.93 and were all above 0.98 when recomputed after the systematic error due to the estimate of the initial fascicle length on the first ultrasound frame was removed from the individual fascicle length waveforms. The automated and manual tracking approaches produced similar fascicle length waveforms, with an overall CMC of 0.88, which improved to 0.94 when the initial length offset was removed. Overall results indicate that the automated fascicle tracking algorithm was a repeatable, accurate and time-efficient method for estimating fascicle length changes of the MG muscle in controlled passive and active conditions. PMID:22235878
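
    One frame-to-frame update of an affine optical-flow tracker can be sketched with OpenCV as below: sparse points are tracked with pyramidal Lucas-Kanade, an affine transform is fitted to their motion, and the transform is applied to the fascicle endpoints. This mirrors the general idea rather than the freely available algorithm itself, and the images and points are synthetic stand-ins.

```python
import cv2
import numpy as np

def update_fascicle(prev_frame, next_frame, fascicle_pts, track_pts):
    # Track sparse points with pyramidal Lucas-Kanade optical flow.
    track_pts = track_pts.astype(np.float32).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, track_pts, None)
    ok = status.ravel() == 1
    # Fit a single affine transform to the successfully tracked point motion.
    affine, _ = cv2.estimateAffine2D(track_pts[ok], new_pts[ok])   # 2x3 matrix
    homo = np.hstack([fascicle_pts, np.ones((len(fascicle_pts), 1))])
    return (affine @ homo.T).T                                     # updated endpoints

prev_frame = np.random.randint(0, 255, (256, 256), np.uint8)       # stand-in images
next_frame = np.roll(prev_frame, 2, axis=1)                        # simulated 2-px shift
fascicle = np.array([[40.0, 120.0], [200.0, 150.0]])               # two endpoints (px)
grid = np.mgrid[40:200:20, 60:200:20].reshape(2, -1).T             # points to track
new_fascicle = update_fascicle(prev_frame, next_frame, fascicle, grid)
print(np.linalg.norm(new_fascicle[1] - new_fascicle[0]))           # fascicle length (px)
```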

  11. Change detection on a hunch: pre-attentive vision allows "sensing" of unique feature changes.

    PubMed

    Ball, Felix; Busch, Niko A

    2015-11-01

    Studies on change detection and change blindness have investigated the nature of visual representations by testing the conditions under which observers are able to detect when an object in a complex scene changes from one moment to the next. Several authors have proposed that change detection can occur without identification of the changing object, but the perceptual processes underlying this phenomenon are currently unknown. We hypothesized that change detection without localization or identification occurs when the change happens outside the focus of attention. Such changes would usually go entirely unnoticed, unless the change brings about a modification of one of the feature maps representing the scene. Thus, the appearance or disappearance of a unique feature might be registered even in the absence of focused attention and without feature binding, allowing for change detection, but not localization or identification. We tested this hypothesis in three experiments, in which changes either involved colors that were already present elsewhere in the display or entirely unique colors. Observers detected whether any change had occurred and then localized or identified the change. Change detection without localization occurred almost exclusively when changes involved a unique color. Moreover, change detection without localization for unique feature changes was independent of the number of objects in the display and independent of change identification. These findings suggest that pre-attentive registration of a change on a feature map can give rise to a conscious experience even when feature binding has failed: that something has changed without knowing what or where. PMID:26353860

  12. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domains, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly
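
    One standard formulation of Bayesian spectrum analysis, a Bretthorst-style single-frequency posterior evaluated through the Schuster periodogram, is sketched below on a short, noisy synthetic series. The automated model development described in the paper adds background trends and model comparison, which are not reproduced here.

```python
# Single-frequency Bayesian posterior over frequency (Bretthorst-style), evaluated
# on irregularly or sparsely sampled data via the Schuster periodogram.
import numpy as np

def log_posterior_freq(t, d, freqs):
    d = d - d.mean()
    n = len(d)
    d2_bar = np.mean(d ** 2)
    logp = []
    for f in freqs:
        w = 2 * np.pi * f
        # Schuster periodogram C(f) = |sum d_i exp(-i w t_i)|^2 / n
        c = np.abs(np.sum(d * np.exp(-1j * w * t))) ** 2 / n
        arg = np.clip(1.0 - 2.0 * c / (n * d2_bar), 1e-12, None)   # numerical guard
        logp.append((2 - n) / 2 * np.log(arg))
    logp = np.array(logp)
    return logp - logp.max()          # shift for readability

t = np.linspace(0, 48, 33)            # e.g. two circadian cycles, sparse sampling
d = np.cos(2 * np.pi * t / 24) + 0.5 * np.random.default_rng(3).standard_normal(33)
freqs = np.linspace(0.01, 0.1, 200)   # cycles per hour
best = freqs[np.argmax(log_posterior_freq(t, d, freqs))]
print(f"most probable period = {1 / best:.1f} h")
```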

  13. Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts

    NASA Astrophysics Data System (ADS)

    Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.

    2015-12-01

    The Joint Polar Satellite System (JPSS) is the next-generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series, J1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments that are on board the Suomi National Polar-Orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research (STAR) Algorithm Integration Team (AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. The JPSS J1 VIIRS Day Night Band (DNB) has an anomalous non-linear response at high scan angles based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS geolocation (GEO) code analysis results show that the J1 DNB GEO product cannot be generated correctly without the software update. The modified code supports both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Scripts to verify the code change and Lookup Table (LUT) updates in ADL Block 2.

  14. Automation pilot

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An important concept of the Action Information Management System (AIMS) approach is to evaluate office automation technology in the context of hands-on use by technical program managers, and to assess the human acceptance difficulties which may accompany the transition to a significantly changing work environment. The improved productivity and communications which result from application of office automation technology are already well established for general office environments, but benefits unique to NASA are anticipated and these will be explored in detail.

  15. Object-based change detection on multiscale fusion for VHR remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-12-01

    This paper presents a novel object-based, context-sensitive technique for unsupervised change detection in very high spatial resolution (VHR) remote sensing images. The proposed technique models the scene at different segmentation levels, defining multiscale-level image objects. Multiscale-level image object change features are helpful for improving the discriminability between the changed and unchanged classes. First, a set of optimal scales is determined according to the classification principle of "homogeneity within class, heterogeneity between classes". Then, a multiscale-level change vector analysis applied to each pixel of the considered images, implemented through multiscale feature fusion, helps improve the accuracy and the degree of automation. The technique analyzes the context information of the multiscale-level image objects at the considered spatial position. The adaptive nature of optimal multiscale image objects and their multilevel representation allow proper modeling of complex scenes in the investigated region. Experimental results confirm the effectiveness of the proposed approach.
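
    At the pixel level, the change vector analysis underlying the approach reduces to thresholding the length of the band-wise difference vector; a minimal sketch with placeholder arrays is shown below (the object-based, multiscale fusion then operates on segments rather than pixels).

```python
# Pixel-level change vector analysis (CVA) on two co-registered multiband images:
# the change magnitude is the Euclidean norm of the band-wise difference vector,
# thresholded to separate changed from unchanged pixels.
import numpy as np

def cva_change_mask(img_t1, img_t2, threshold):
    """img_t1, img_t2: (bands, rows, cols) arrays of the two dates."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.sqrt(np.sum(diff ** 2, axis=0))   # change vector length per pixel
    return magnitude > threshold, magnitude

rng = np.random.default_rng(4)
t1 = rng.normal(100, 5, (4, 64, 64))                 # four synthetic bands, date 1
t2 = t1.copy()
t2[:, 20:30, 20:30] += 40                            # simulated change patch
mask, mag = cva_change_mask(t1, t2, threshold=40.0)  # threshold chosen for illustration
print(mask.sum(), "changed pixels detected")
```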

  16. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    NASA Astrophysics Data System (ADS)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, te