Cai, Wei; He, Baochun; Fan, Yingfang; Fang, Chihua; Jia, Fucang
2016-11-08
This study evaluated the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: an interactive method, the in-house-developed 3D Medical Image Analysis (3DMIA) system; an automatic active shape model (ASM)-based segmentation; and an automatic probabilistic atlas (PA)-guided segmentation. Forty-two datasets, including 27 patients with normal livers and 15 with space-occupying liver lesions, were retrospectively included in this study. The three methods - semiautomatic 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry - achieved an accuracy with a volume difference (VD) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and a VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. In terms of efficiency, the three methods took 27.63, 1.26, and 1.18 min on average, respectively, compared with 43.98 min for manual volumetry. The high intraclass correlation coefficients between the three methods and the manual method indicated excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p < 0.001), as well as between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group, and the ASM- and PA-based automatic segmentations offer better efficiency in clinical use.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
A new fuzzy-set-based technique developed for decision making is discussed: a method that generates fuzzy decision rules automatically for image analysis. The paper proposes generating rule-based approaches to problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering irrelevant features and criteria out of the rules.
A Method for Automatic Seamline Network Selection Based on Building Vectors
NASA Astrophysics Data System (ADS)
Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.
2018-04-01
In order to improve the efficiency of large-scale urban orthophoto production, this paper presents a method for automatic selection of the seamline network in large-scale orthophotos based on building vectors. First, a simple model of each building is built by combining the building vector, its height, and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the survey area is generated automatically from the bottom positions of all images. Finally, the final seamline network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method not only routes the seamline network around buildings quickly, but also retains the minimum-projection-distortion property of the Voronoi network, effectively solving the problem of automatic seamline network selection in orthophoto mosaicking.
Application of image recognition-based automatic hyphae detection in fungal keratitis.
Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi
2018-03-01
The purpose of this study is to evaluate the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of the image-recognition-based automatic hyphae detection method, analyze the consistency between clinical symptoms and hyphal density, and quantify that density using the automatic detection method. The study included 56 cases of fungal keratitis (single eye) and 23 cases of bacterial keratitis. All cases underwent routine slit-lamp biomicroscopy, corneal smear examination, microorganism culture, and assessment of in vivo confocal microscopy images before medical treatment started. The hyphae in the in vivo confocal microscopy images were then recognized with the automatic detection method to evaluate its sensitivity and specificity against corneal smear examination. A density index was subsequently used to assess the severity of infection and was correlated with the patients' clinical symptoms to evaluate their consistency. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection was 89.29% and the specificity 95.65%; the area under the ROC curve was 0.946. The correlation coefficient between the severity grading produced by automatic hyphae detection and the clinical grading was 0.87. Automatic hyphae detection based on image recognition identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination. Compared with conventional manual reading of confocal corneal images, the technology is accurate, stable, and does not rely on human expertise, which makes it most useful to clinicians who are not familiar with fungal keratitis. It can also quantify hyphal density and grade this property. Being noninvasive, it provides a timely, accurate, objective, and quantitative evaluation criterion for fungal keratitis.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
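To make the progressive-sampling idea concrete, here is a minimal Python sketch (not the authors' implementation): candidate algorithm/hyper-parameter configurations are scored on growing training subsets and the weaker half is pruned each round, so that full-data training is spent only on promising configurations. The paper combines this with Bayesian optimization; the fixed candidate grid below is an illustrative simplification.

```python
# Progressive sampling sketch: prune weak configurations on small subsets
# before anyone is trained on the full data set.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0)]
candidates += [RandomForestClassifier(n_estimators=n, random_state=0) for n in (50, 200)]

n = 250  # initial sample size, doubled each round
while len(candidates) > 1 and n <= len(X_tr):
    scores = [clone(est).fit(X_tr[:n], y_tr[:n]).score(X_val, y_val)
              for est in candidates]
    order = np.argsort(scores)[::-1]                       # best first
    candidates = [candidates[i] for i in order[:max(1, len(candidates) // 2)]]
    n *= 2

best = clone(candidates[0]).fit(X_tr, y_tr)                # full-data training once
print("selected:", candidates[0], "accuracy:", best.score(X_val, y_val))
```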
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS images. Geometric correction based on manual selection of ground control points is time-consuming and laborious; the more common approach, based on a reference image, is automatic image registration, which involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficient (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of GF4 PMS spatial resolution.
Automatic calibration method for plenoptic camera
NASA Astrophysics Data System (ADS)
Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao
2016-04-01
An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with microlenses of different sizes. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method achieves a higher degree of automation than previously published methods.
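As an illustration of the morphology-based recognition step, the sketch below (assumptions: a synthetic white image and scipy's ndimage morphology; the paper's rearrangement of centres by relative position is omitted) locates microlens image centres by thresholding, morphological clean-up, and connected-component centroids.

```python
# Illustrative microlens-centre detection on a synthetic plenoptic white image.
import numpy as np
from scipy import ndimage

# Synthetic "white image": a regular grid of bright microlens spots.
yy, xx = np.mgrid[0:200, 0:200]
white = np.zeros((200, 200))
for cy in range(10, 200, 20):
    for cx in range(10, 200, 20):
        white += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)

mask = white > 0.5 * white.max()           # binarize bright spots
mask = ndimage.binary_opening(mask)        # morphological clean-up
labels, num = ndimage.label(mask)          # one label per microlens image
centers = ndimage.center_of_mass(white, labels, range(1, num + 1))
print(num, "microlens images; first centre:", centers[0])
```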
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
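A minimal sketch of the gap/overlap elimination idea (not the paper's label fusion algorithm): instead of thresholding each organ's atlas votes independently, which can leave voxels claimed by two organs or by none, every voxel is assigned the single best-supported label, with a background class for weak support.

```python
# Overlap/gap-free label fusion: each voxel takes exactly one label via argmax.
import numpy as np

n_atlases, n_organs = 5, 3
rng = np.random.default_rng(0)
# votes[a, k] is atlas a's binary mask for organ k on a toy 16^3 volume.
votes = rng.random((n_atlases, n_organs, 16, 16, 16)) > 0.7

support = votes.sum(axis=0).astype(float)                 # per-organ vote counts
bg = np.full((1,) + support.shape[1:], n_atlases / 2.0)   # background threshold
stacked = np.concatenate([bg, support], axis=0)
fused = stacked.argmax(axis=0)                            # 0 = background, k = organ k

# No voxel can belong to two organs, and none is left unclaimed.
print("voxel counts per label:", [(fused == k).sum() for k in range(n_organs + 1)])
```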
Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Wenzel, Sally E.; Lin, Ching-Long
2016-01-01
We propose a method to construct three-dimensional airway geometric models based on airway skeletons, or centerlines (CLs). Given a CT-segmented airway skeleton and surface, the proposed CL-based method automatically constructs subject-specific models that contain anatomical information regarding branches, include bifurcations and trifurcations, and extend from the trachea to terminal bronchioles. The resulting model can be anatomically realistic with the assistance of an image-based surface; alternatively a model with an idealized skeleton and/or branch diameters is also possible. This method systematically identifies and classifies trifurcations to successfully construct the models, which also provides the number and type of trifurcations for the analysis of the airways from an anatomical point of view. We applied this method to 16 normal and 16 severe asthmatic subjects using their computed tomography images. The average distance between the surface of the model and the image-based surface was 11% of the average voxel size of the image. The four most frequent locations of trifurcations were the left upper division bronchus, left lower lobar bronchus, right upper lobar bronchus, and right intermediate bronchus. The proposed method automatically constructed accurate subject-specific three-dimensional airway geometric models that contain anatomical information regarding branches using airway skeleton, diameters, and image-based surface geometry. The proposed method can construct (i) geometry automatically for population-based studies, (ii) trifurcations to retain the original airway topology, (iii) geometry that can be used for automatic generation of computational fluid dynamics meshes, and (iv) geometry based only on a skeleton and diameters for idealized branches. PMID:27704229
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic enhancement methods take into account not only the global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
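The following sketch illustrates the general base/detail enhancement idea under stated assumptions: OpenCV's bilateral filter stands in for the paper's multi-scale edge-aware decomposition, the file name photo.jpg is hypothetical, and the gamma and detail-gain values are illustrative.

```python
# Two-scale, edge-aware enhancement sketch: split into base and detail layers,
# brighten the (possibly underexposed) base, boost the detail, recombine.
import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32) / 255.0   # hypothetical input
base = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=7)
detail = img - base                                        # edge-aware detail layer

base_lifted = np.power(np.clip(base, 0, 1), 0.7)           # lift dark regions
enhanced = np.clip(base_lifted + 1.5 * detail, 0, 1)       # boost local detail
cv2.imwrite("enhanced.jpg", (enhanced * 255).astype(np.uint8))
```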
Automatic Syllabification in English: A Comparison of Different Algorithms
ERIC Educational Resources Information Center
Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.
2009-01-01
Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…
AISLE: an automatic volumetric segmentation method for the study of lung allometry.
Ren, Hongliang; Kazanzides, Peter
2011-01-01
We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method achieves accurate segmentation, in terms of the volumetric overlap metric, when compared with ground-truth segmentation performed by a radiologist.
NASA Astrophysics Data System (ADS)
Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.
2017-10-01
An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a semi-automatic workflow facilitating the introduction of an MR-only workflow.
Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots in the same way as mechanical robots, without considering the animal's own innate abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, the method incorporates the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work lays a solid foundation for the application of ratbots and has significant implications for the automatic navigation of other bio-robots as well.
Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi
In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database, which consists of probability density functions of characteristic parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic determination results showed close agreement with visual inspection for the stages of awake, REM (rapid eye movement), light sleep, and deep sleep. The constructed expert knowledge database reflects the distributions of the characteristic parameters and can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can serve as an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
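A hedged sketch of the decision rule: per-stage probability density functions of characteristic parameters are estimated from expert-scored epochs (here with a kernel density estimate on toy data), and a new epoch is assigned the stage with the highest conditional probability. Stage priors and the paper's exact multi-valued decision logic are omitted.

```python
# Stage determination from per-stage parameter PDFs (naive-Bayes-style product).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
stages = ["awake", "REM", "light", "deep"]
# Toy expert-scored training data: rows = epochs, cols = two parameters.
train = {s: rng.normal(loc=i, scale=0.5, size=(200, 2)) for i, s in enumerate(stages)}

# One kernel-density PDF per (stage, parameter), the "expert knowledge database".
pdfs = {s: [gaussian_kde(train[s][:, j]) for j in range(2)] for s in stages}

def determine_stage(params):
    # score(s) = prod_j p(param_j | stage s)
    scores = {s: np.prod([pdfs[s][j](params[j])[0] for j in range(2)]) for s in stages}
    return max(scores, key=scores.get)

print(determine_stage([2.1, 1.9]))   # expected: "light"
```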
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) data are susceptible to various nonneural physiological artifacts, and automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activity. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. A combination of the discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, is utilized to separate artifact components, which are then automatically identified using a priori artifact information acquired in advance. Subsequently, the signal is reconstructed without the artifact components to obtain artifact-free signals. The results showed statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition, when this automatic online artifact removal method was used.
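A simplified sketch of the a-priori-guided removal (assumptions: FastICA on synthetic channels, a stored artifact template serving as the a priori information, and the discrete wavelet transform step omitted for brevity):

```python
# ICA-based artifact removal guided by a stored a priori artifact template.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
neural = np.vstack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 6 * t + 1)])
blink = (np.abs(t % 2.5 - 1.25) < 0.1).astype(float)       # ocular artifact
sources = np.vstack([neural, blink])
eeg = rng.normal(size=(4, 3)) @ sources                    # 4 mixed "channels"

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(eeg.T)                      # (samples, components)

prior_template = blink                                     # acquired in advance
corr = [abs(np.corrcoef(components[:, k], prior_template)[0, 1]) for k in range(3)]
components[:, int(np.argmax(corr))] = 0                    # drop artifact component

clean = ica.inverse_transform(components).T                # artifact-free channels
print("artifact component index:", int(np.argmax(corr)))
```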
NASA Astrophysics Data System (ADS)
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the rice. The height of the plant, the number of stems, and the colour of the leaves are well-known parameters that indicate rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting them by field survey takes a lot of labour and time. Recently, a labour-saving method for rice growth diagnosis has been proposed that is based on the vegetation cover rate of rice, calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of the rice plant areas in the image was done by automatic binarization. However, when the cover-rate calculation depends on automatic binarization alone, the vegetation cover rate can decrease even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on automatic binarization but also refers to growth-hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by conventional automatic binarization and by the proposed method, and both were compared with reference values obtained by visual interpretation. The comparison shows that the accuracy of discriminating rice plant areas was increased by the proposed method.
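The core automatic-binarization step might look like the following sketch (assumptions: an excess-green vegetation index and Otsu thresholding; the proposed growth-hysteresis correction is not reproduced):

```python
# Vegetation cover rate from a nadir photo: index, automatic threshold, fraction.
import numpy as np
from skimage.filters import threshold_otsu

def vegetation_cover_rate(rgb):            # rgb: (H, W, 3) float array in [0, 1]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                    # excess-green vegetation index
    mask = exg > threshold_otsu(exg)       # automatic binarization
    return mask.mean()                     # vegetation pixel fraction

rng = np.random.default_rng(0)
img = rng.random((480, 640, 3))
img[100:300, 200:400, 1] += 0.5            # synthetic "plant" patch, greener
print(f"cover rate: {vegetation_cover_rate(np.clip(img, 0, 1)):.2%}")
```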
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very high level design methods emphasize automatic transfer of requirements to formal design specifications, and/or concentrate on automatic transformation of formal design specifications, including some semantic information about the system, into machine-executable form. Very high level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Though a given approach does not always fall exactly into any specific class, this paper provides a classification of very high level design methods, with examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
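A sketch of the verification model on synthetic data (the covariates match those named above, but the numbers and the single pooled model are illustrative rather than the study's structure-specific fits):

```python
# Logit model on contour-comparison covariates to flag shifted auto-contours.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
shifted = rng.integers(0, 2, n)                  # 1 = contour was shifted
mda = rng.normal(2.0 + 4.0 * shifted, 1.0)       # mean distance to agreement (mm)
tpr = np.clip(rng.normal(0.9 - 0.3 * shifted, 0.05), 0, 1)  # true positive rate
X = np.column_stack([mda, tpr])

model = LogisticRegression().fit(X, shifted)
probe = model.predict_proba([[6.0, 0.6], [2.0, 0.92]])[:, 1]
print("P(shifted) for suspect vs good contour:", probe.round(3))
```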
Terminal Sliding Mode Tracking Controller Design for Automatic Guided Vehicle
NASA Astrophysics Data System (ADS)
Chen, Hongbin
2018-03-01
Based on sliding mode variable structure control theory, the path-tracking problem of an automatic guided vehicle is studied, and a controller design method based on the terminal sliding mode is proposed. First, by analysing the characteristics of automatic guided vehicle movement, the kinematic model is presented. Then, to improve the traditional expression of the terminal sliding mode, a nonlinear sliding mode is designed whose convergence speed is faster than the former; theoretical analysis verifies that the designed sliding mode is stable and converges quickly in finite time. Finally, the Lyapunov method is combined to design the tracking control law of the automatic guided vehicle, so that the controller can make the vehicle track the desired trajectory in the global sense as well as in finite time. The simulation results verify the correctness and effectiveness of the control law.
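A toy numerical illustration of the finite-time property that motivates the terminal sliding mode (a generic scalar error law, not the paper's AGV controller):

```python
# Terminal law de/dt = -k*|e|^(q/p)*sign(e), q < p, reaches e = 0 at the finite
# time t = p*|e0|^((p-q)/p) / (k*(p-q)); the linear law de/dt = -k*e only
# decays exponentially and never quite gets there.
import numpy as np

k, p, q, dt = 1.0, 5.0, 3.0, 1e-3
e_term, e_lin, t, t_hit = 1.0, 1.0, 0.0, None
while t < 6.0:
    e_term += -k * np.sign(e_term) * abs(e_term) ** (q / p) * dt
    e_lin += -k * e_lin * dt
    t += dt
    if t_hit is None and abs(e_term) < 1e-6:
        t_hit = t

print(f"terminal law reaches zero at t ~ {t_hit:.2f} s "
      f"(theory {p / (k * (p - q)):.2f} s); linear error still {e_lin:.4f}")
```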
Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien
2015-08-01
We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra.
Automatic lumbar spine measurement in CT images
NASA Astrophysics Data System (ADS)
Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun
2017-03-01
Accurate lumbar spine measurement in CT images provides an essential basis for quantitative analysis of spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are performed manually by radiologists and surgeons, which is time-consuming and irreproducible; an automatic and accurate lumbar spine measurement algorithm is therefore highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages: first, a learning-based spine labeling method, which integrates both image appearance and spine geometry information, is used to detect the lumbar and sacrum vertebrae in CT images. Then, a multi-atlas based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method achieves success rates of more than 90% across all the measurements and significantly improves measurement efficiency compared to manual measurement. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data
NASA Astrophysics Data System (ADS)
Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.
2018-04-01
The built-up area marks the use of urban construction land across different periods of development, and its accurate extraction is key to studying the changes of urban expansion. This paper studies a technology for automatic extraction of the urban built-up area based on an object-oriented method and remote sensing data, realizing automatic extraction of the main built-up area of a city and greatly saving labour cost. First, construction land is extracted with the object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; (3) rule-set-based extraction of construction-land information. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI), and the mean of the blue band (Mean B); by combining these parameters, the construction-land information can be extracted, as sketched below. Then, based on the adaptability, distance, and area of the object domain, the urban built-up area can be defined quickly and accurately from the construction-land information without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. Beijing was used as the experimental area for these methods, and the results show that the built-up area was extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of the urban built-up area is highly practical and can be applied to monitoring changes in a city's main built-up area.
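The rule-set step can be sketched as follows (per-segment band means and illustrative thresholds; the paper's calibrated values and the segmentation itself are not reproduced):

```python
# Rule-set classification of segments into construction land using the stated
# features; thresholds and the RRI formula here are illustrative assumptions.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def is_construction(segment):
    """segment: dict of per-segment band means (reflectance in [0, 1])."""
    v = ndvi(segment["nir"], segment["red"])
    rri = segment["red"] / (segment["blue"] + 1e-9)   # ratio-of-residential proxy
    return (v < 0.2) and (segment["red"] > 0.15) and (rri > 0.9)

segments = [
    {"red": 0.25, "blue": 0.22, "nir": 0.28},   # bright, low NDVI -> built
    {"red": 0.08, "blue": 0.05, "nir": 0.45},   # vegetated -> not built
]
print([is_construction(s) for s in segments])    # [True, False]
```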
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
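A minimal sketch of the direct tuning idea on a toy first-order process (the adaptation rate combines the error, a signed squared error, and the integral error; the weights and process model are illustrative, not the cultivation model from the paper):

```python
# Online automatic tuning: the gain K adapts from the tracking error while the
# process runs, with no separate perturbation experiment.
import numpy as np

dt, K, integ = 0.01, 0.5, 0.0
mu, mu_sp = 0.0, 0.1          # specific growth rate and its set point
g1, g2, g3 = 0.4, 2.0, 0.05   # weights on error, signed squared error, integral

for step in range(20000):
    e = mu_sp - mu
    integ += e * dt
    u = K * e                                   # proportional control action
    mu += (-mu + u) * dt                        # toy first-order process
    K += (g1 * e + g2 * e * abs(e) + g3 * integ) * dt   # automatic tuning law

print(f"final gain K = {K:.2f}, growth rate = {mu:.4f} (set point {mu_sp})")
```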
Automatic blood vessel based-liver segmentation using the portal phase abdominal CT
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2018-02-01
Liver segmentation is the basis for computer-based planning of hepatic surgical interventions; in the diagnosis and analysis of hepatic diseases and in surgery planning, automatic segmentation of the liver is highly important. Blood vessels (BVs) have shown high value for liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver from portal-phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage required five interactions: a selective threshold for bone segmentation, two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABV segmentation, identification of the portal vein (PV) entrance to the liver, and identification of the IVC exit for classifying HBVs from other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs, as described in [4]. Towards full automation of our method, we developed a method [5] that segments ABVs automatically, removing the first three interactions. In this paper, we propose fully automatic classification of ABVs into HBVs and non-HBVs and, consequently, full automation of the liver segmentation we proposed in [4]. Results illustrate that the method is effective at segmenting the liver from portal-phase abdominal CT images.
Brain Tumor Image Segmentation in MRI Image
NASA Astrophysics Data System (ADS)
Peni Agustin Tjahyaningtijas, Hapsari
2018-04-01
Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection, which improves the patient's life chances. Diagnosis by experts usually relies on manual segmentation, which is difficult and time-consuming, so automatic segmentation is needed. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation, with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. A number of existing review papers focus on traditional methods for MRI-based brain tumor image segmentation; in this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods for daily clinical routine are addressed.
Image analysis for skeletal evaluation of carpal bones
NASA Astrophysics Data System (ADS)
Ko, Chien-Chuan; Mao, Chi-Wu; Lin, Chi-Jen; Sun, Yung-Nien
1995-04-01
The assessment of bone age is an important field in pediatric radiology: it provides important information for the treatment and prediction of skeletal growth in a developing child. Various computerized algorithms for automatically assessing skeletal growth have been reported, most of which attempt to analyze phalangeal growth. The most fundamental step in these automatic measurement methods is image segmentation, which extracts the bones from soft tissue and background. Such automatic segmentation methods for hand radiographs can roughly be categorized into two main approaches, edge-based and region-based methods. This paper presents a region-based carpal-bone segmentation approach organized into four stages: contrast enhancement, moment-preserving thresholding, morphological processing, and region-growing labeling.
Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.
2008-12-23
Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Automatic and user-centric approaches to video summary evaluation
NASA Astrophysics Data System (ADS)
Taskiran, Cuneyt M.; Bentley, Frank
2007-01-01
Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.
[A wavelet-transform-based method for the automatic detection of late-type stars].
Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao
2005-07-01
The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work is intended to explore possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transform coefficients at the 5th scale consistently manifests a unimodal distribution, with the energy of the frequency spectrum largely concentrated in a small neighborhood centered around the unique peak. For the spectra of other celestial bodies, however, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method, which extensive experiments show to be practical and robust.
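The detection cue can be sketched as follows (assumptions: a db4 wavelet, a synthetic banded spectrum, and an illustrative concentration measure around the spectral peak):

```python
# 5-scale DWT of the spectrum, then Fourier amplitude of the coarsest-scale
# coefficients; banded (late-type-like) spectra concentrate energy at one peak.
import numpy as np
import pywt

def peak_concentration(flux, width=3):
    coeffs = pywt.wavedec(flux, "db4", level=5)   # 5-scale wavelet transform
    c5 = coeffs[0]                                # coarsest-scale coefficients
    spec = np.abs(np.fft.rfft(c5 - c5.mean()))
    k = int(np.argmax(spec))
    lo, hi = max(0, k - width), k + width + 1
    return spec[lo:hi].sum() / (spec.sum() + 1e-12)  # energy near the peak

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
molecular_bands = 1 + 0.5 * np.sin(2 * np.pi * 6 * x)   # broad absorption bands
noisy_continuum = 1 + 0.2 * rng.normal(size=x.size)
print(f"banded spectrum concentration : {peak_concentration(molecular_bands):.2f}")
print(f"noisy continuum concentration : {peak_concentration(noisy_continuum):.2f}")
```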
[Wearable Automatic External Defibrillators].
Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan
2015-11-01
Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement, and discharge defibrillation modules, and can automatically identify the VF signal and deliver a biphasic exponential-waveform defibrillation discharge. As verified by animal tests, the device can acquire the ECG and automatically identify VF; after identifying the ventricular fibrillation signal, it automatically defibrillates to abort ventricular fibrillation and achieve electrical cardioversion.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
Segmentation of stereo terrain images
NASA Astrophysics Data System (ADS)
George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.
2000-06-01
We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
Urschler, Martin; Grassegger, Sabine; Štern, Darko
2015-01-01
Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives to established methods involving ionising radiation, and automatic, software-based methods additionally promise improved estimation objectivity. The aim was to investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images were used to evaluate age estimation performance; the pipeline consists of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of the individual bone age predictions into a final age estimate. Quantitative results show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) from chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. The feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.
NASA Astrophysics Data System (ADS)
Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting
2017-12-01
Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a full-automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are firstly preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operation and an adaptive Otsu’s method, a binaryzation process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
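A sketch of the final binarization-and-counting stage on a synthetic abundance image (the quadratic blind unmixing that would produce this image from hyperspectral data is beyond the sketch):

```python
# Otsu binarization, morphological clean-up, and connected-component counting.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
abundance = rng.normal(0.1, 0.03, (256, 256))            # noisy background
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(20, 236, size=(25, 2)):       # 25 synthetic cells
    abundance += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

mask = abundance > threshold_otsu(abundance)             # adaptive binarization
mask = ndimage.binary_opening(mask, iterations=2)        # remove speckle
labels, count = ndimage.label(mask)                      # one label per cell blob
print("red blood cells counted:", count)
```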
A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation
Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga
2014-01-01
The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638
Higher-order automatic differentiation of mathematical functions
NASA Astrophysics Data System (ADS)
Charpentier, Isabelle; Dal Cappello, Claude
2015-04-01
Functions of mathematical physics such as the Bessel functions, the Chebyshev polynomials, the Gauss hypergeometric function and so forth have practical applications in many scientific domains. On the one hand, the differentiation formulas provided in reference books apply to real or complex variables and do not account for the chain rule. On the other hand, automatic differentiation, based on the chain rule, has become a natural tool in numerical modeling, yet automatic differentiation tools do not handle these numerous mathematical functions. This paper describes formulas and provides codes for the higher-order automatic differentiation of mathematical functions. The first method is based on Faà di Bruno's formula, which generalizes the chain rule; the second makes use of the second-order differential equations these functions satisfy. Both methods are exemplified with the aforementioned functions.
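To make the recurrences concrete, here is a minimal Taylor-mode sketch (illustrative, not the paper's code): values carry truncated Taylor coefficients, multiplication is a Cauchy product, and exp is propagated through the differential relation (exp u)' = u'·exp u, the same ODE-based idea behind the second method.

```python
# Taylor-mode higher-order AD: a value is a vector of Taylor coefficients
# a[k] = f^(k)(x0)/k!; arithmetic on these vectors yields all derivatives.
import math
import numpy as np

N = 6  # highest derivative order carried

def variable(x0):
    a = np.zeros(N + 1)
    a[0], a[1] = x0, 1.0
    return a

def mul(a, b):
    c = np.zeros(N + 1)
    for k in range(N + 1):
        c[k] = sum(a[j] * b[k - j] for j in range(k + 1))   # Cauchy product
    return c

def exp(u):
    e = np.zeros(N + 1)
    e[0] = math.exp(u[0])
    for k in range(1, N + 1):        # e_k = (1/k) * sum_{j=1..k} j*u_j*e_{k-j}
        e[k] = sum(j * u[j] * e[k - j] for j in range(1, k + 1)) / k
    return e

x = variable(0.5)
f = exp(mul(x, x))                   # f(x) = exp(x^2)
derivs = [f[k] * math.factorial(k) for k in range(N + 1)]
print("f'''(0.5) =", derivs[3])      # exact: (12x + 8x^3)*exp(x^2) = 7*exp(0.25)
```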
Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis
2014-05-01
The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP), based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper.
Automatic classification of sleep stages based on the time-frequency image of EEG signals.
Bajaj, Varun; Pachori, Ram Bilas
2013-12-01
In this paper, a new method for automatic sleep stage classification based on the time-frequency image (TFI) of EEG signals is proposed. Automatic classification of sleep stages is an important part of the diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of the EEG signal is used to obtain the TFI, which is then segmented based on the frequency bands of the EEG rhythms. Features derived from the histogram of the segmented TFI are used as the input feature set to multiclass least squares support vector machines (MC-LS-SVM), together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions, for automatic classification of sleep stages from EEG signals. The experimental results presented show the effectiveness of the proposed method for the classification of sleep stages from EEG signals.
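A hedged sketch of the feature pipeline (a spectrogram stands in for the SPWVD, scikit-learn's RBF-kernel SVC stands in for the multiclass LS-SVM, and the data are synthetic):

```python
# TFI -> rhythm-band segmentation -> per-band histograms -> multiclass SVM.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

fs = 100.0
bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]        # delta, theta, alpha, beta

def tfi_features(epoch):
    f, t, S = spectrogram(epoch, fs=fs, nperseg=128)  # stand-in for SPWVD TFI
    feats = []
    for lo, hi in bands:
        seg = S[(f >= lo) & (f < hi)].ravel()         # band segment of the TFI
        hist, _ = np.histogram(seg, bins=8, density=True)
        feats.extend(hist)
    return feats

rng = np.random.default_rng(0)
def fake_epoch(freq):                                 # toy stage-dependent EEG
    t = np.arange(0, 30, 1 / fs)
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)

X = [tfi_features(fake_epoch(f)) for f in [2, 2, 6, 6, 10, 10, 20, 20]]
y = [0, 0, 1, 1, 2, 2, 3, 3]                          # four sleep-stage labels
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("predicted stage:", clf.predict([tfi_features(fake_epoch(10))])[0])
```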
Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia
2012-12-01
We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously found evidences of caudate nucleus volumetric abnormalities. The proposed method consists of different steps: a new automatic method for external and internal segmentation of caudate based on Machine Learning methodologies; the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate the contributions using real data from a pediatric population and show precise internal caudate segmentation and discrimination power of the diagnostic test, showing significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján
2016-07-01
This paper presents a new method of automatic lineament extraction that removes the 'artefact effect' associated with raster-based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of the derived 'protolineaments'. The algorithm also classifies lineaments into positive and negative. MHHC was tested in two different territories, in the Bohemian Forest and the Central Western Carpathians. An original vector-based algorithm was developed to compare the proximity of individual lineaments; its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper provides a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method embeds dynamic test requirements in dynamic models, so that dynamic test requirement tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and incompleteness in document-related content, and improves efficiency.
NASA Astrophysics Data System (ADS)
Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir
2008-03-01
Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 ± 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
2012-01-01
Background: While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results: We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions: We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest-based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, shot segmentation. In this paper, a method of automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, frames with significant change and frames with no significant change. Then, on the basis of the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
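A minimal sketch of the clustering step, assuming consecutive-frame color-histogram differences as the per-frame feature and scikit-learn's KMeans for the two-class split; the file name and feature choice are assumptions, and the adaptive dual-threshold pass is only indicated:

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def frame_diff_features(video_path):
    """Histogram difference between consecutive frames of a video."""
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256]).flatten()
        if prev is not None:
            diffs.append(cv2.compareHist(prev, hist, cv2.HISTCMP_CHISQR))
        prev = hist
    cap.release()
    return np.array(diffs).reshape(-1, 1)

diffs = frame_diff_features("news_clip.mp4")          # hypothetical file
labels = KMeans(n_clusters=2, n_init=10).fit_predict(diffs)
# The cluster with the larger mean difference holds the candidate boundaries;
# the adaptive dual-threshold pass would then separate abrupt from gradual cuts.
candidates = np.flatnonzero(labels == np.argmax(
    [diffs[labels == k].mean() for k in (0, 1)]))
```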
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold
Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters; they are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process currently used in clinical 2D image review software tools.
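A hedged sketch of such a pipeline with OpenCV: high-pass filtering by subtracting a Gaussian-smoothed copy, CLAHE via cv2.createCLAHE, and a simple grid search standing in for the interior-point optimization described above; the parameter grids and sigma are illustrative:

```python
import numpy as np
import cv2

def entropy(img):
    """Shannon entropy of an 8-bit image, the optimization objective."""
    p, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def enhance(img, weight, clip, tiles):
    smooth = cv2.GaussianBlur(img, (0, 0), sigmaX=5)
    # img + weight * (img - smooth), i.e. a weighted high-pass boost
    hipass = cv2.addWeighted(img, 1 + weight, smooth, -weight, 0)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(hipass)

def auto_enhance(img):
    """Pick the (weight, clip, tiles) triple that maximizes output entropy."""
    best, best_h = None, -np.inf
    for weight in (0.5, 1.0, 2.0):
        for clip in (2.0, 4.0, 8.0):
            for tiles in (8, 16, 32):
                out = enhance(img, weight, clip, tiles)
                h = entropy(out)
                if h > best_h:
                    best, best_h = out, h
    return best
```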
Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census
NASA Astrophysics Data System (ADS)
Li, C.; Guo, P.; Liu, X.
2017-09-01
Some attributes of hydrologic feature data in the national geographic census are unclear; the current solution to this problem is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of structural characteristics and topological relations, we put forward three basic correction principles: network proximity, structural robustness, and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that the method is highly reasonable and efficient.
Automatic labeling of MR brain images through extensible learning and atlas forests.
Xu, Lijun; Liu, Hong; Song, Enmin; Yan, Meng; Jin, Renchao; Hung, Chih-Cheng
2017-12-01
The multiatlas-based method is extensively used in MR brain image segmentation because of its simplicity and robustness. It provides excellent accuracy, although it is time-consuming and limited in terms of incorporating information from new atlases. In this study, automatic labeling of MR brain images through extensible learning and atlas forests is presented to address these limitations. We propose an extensible learning model which makes the multiatlas-based framework capable of managing datasets with numerous atlases or dynamic atlas datasets while ensuring the accuracy of automatic labeling. Two new strategies are used to reduce the time and space complexity and improve the efficiency of the automatic labeling of brain MR images. First, atlases are encoded into atlas forests through random forest technology to reduce the time consumed by cross-registration between atlases and the target image, and a scatter spatial vector is designed to eliminate errors caused by inaccurate registration. Second, an atlas selection method based on the extensible learning model is used to select atlases for the target image without traversing the entire dataset and then obtain accurate labeling. The labeling results of the proposed method were evaluated on three public datasets, namely, IBSR, LONI LPBA40, and ADNI. With the proposed method, the Dice coefficient values on the three datasets were 84.17 ± 4.61%, 83.25 ± 4.29%, and 81.88 ± 4.53%, which were 5% higher than those of the conventional method, respectively. The efficiency of the extensible learning model was evaluated against state-of-the-art methods for labeling of MR brain images. Experimental results showed that the proposed method achieves accurate labeling for MR brain images without traversing the entire datasets. In the proposed multiatlas-based method, extensible learning and atlas forests were applied to control the automatic labeling of brain anatomies on large or dynamic atlas datasets and obtain accurate results. © 2017 American Association of Physicists in Medicine.
CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM
NASA Astrophysics Data System (ADS)
Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang
2014-06-01
Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described using either Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems; in particular, some are inaccurate or tied to a specific CAD format. To convert CAD models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the conversion between the CAD model, represented with boundary representation (B-REP), and the GDML model, represented with constructive solid geometry (CSG). First, the CAD model is decomposed into simple solids, each having a single closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership values (or similarities) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined entirely by users, as in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, namely the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were accelerated by parallel computing. Two study cases demonstrate that this approach can properly, conveniently, and quickly derive fuzzy slope positions.
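A minimal sketch of prototype-based fuzzy membership, assuming a Gaussian similarity function to prototype attribute values; the prototype values and widths below are illustrative, and the rule set that derives them from attribute frequency distributions is not shown:

```python
import numpy as np

def fuzzy_membership(attr, prototype, width):
    """Gaussian similarity of a topographic attribute grid to a prototype value."""
    return np.exp(-0.5 * ((attr - prototype) / width) ** 2)

# slope: gridded slope gradient in degrees derived from the DEM (hypothetical)
slope = np.random.default_rng(0).uniform(0, 45, size=(100, 100))

# e.g. a "ridge" prototype around gentle slope and a "backslope" prototype
# around steep slope; widths would come from attribute frequency distributions
ridge = fuzzy_membership(slope, prototype=2.0, width=3.0)
backslope = fuzzy_membership(slope, prototype=25.0, width=8.0)

# memberships over several attributes are typically combined with a fuzzy AND
# (elementwise minimum), giving a similarity in [0, 1] per grid cell
```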
Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.
2011-01-01
Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Frollo, Ivan
2017-12-01
The paper focuses on two methods for evaluating the success of enhancement of speech signals recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis using ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
Automatic tracking of red blood cells in micro channels using OpenCV
NASA Astrophysics Data System (ADS)
Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui
2013-10-01
The present study aims to develop an automatic method able to track red blood cell (RBC) trajectories flowing through a microchannel using the Open Source Computer Vision library (OpenCV). The developed method is based on optical flow calculation assisted by the maximization of the template-matching product. The experimental results show good functional performance of this method.
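A minimal sketch of template-matching-assisted tracking with OpenCV, in the spirit of (but not identical to) the method above; the frame sequence, initial bounding box, and template update strategy are assumptions:

```python
import cv2
import numpy as np

def track_rbc(frames, bbox):
    """Follow one RBC through a grayscale frame sequence by maximizing the match score."""
    x, y, w, h = bbox                          # initial cell bounding box
    template = frames[0][y:y + h, x:x + w]
    trajectory = [(x + w // 2, y + h // 2)]
    for frame in frames[1:]:
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(score)   # location of best match
        trajectory.append((bx + w // 2, by + h // 2))
        template = frame[by:by + h, bx:bx + w]     # refresh template as the cell deforms
    return trajectory
```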
NASA Astrophysics Data System (ADS)
Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik
2017-01-01
Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time-consuming, and interpretation of such data requires previous knowledge and the expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region-growing process, which is usually time-consuming and prone to leakage, the method is based on optimal thresholding of the Hessian-based vesselness measure, applied in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point used to initiate its process and is fast in the sense that the coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.
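A hedged sketch of slice-by-slice thresholding of a vesselness measure, assuming scikit-image's Frangi filter as the Hessian-based vesselness and Otsu's method as the per-slice optimal threshold (the paper's exact vesselness and thresholding criteria may differ):

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu

def segment_coronaries(cta_volume):
    """Per-slice Otsu thresholding of Frangi vesselness over a CTA volume."""
    mask = np.zeros_like(cta_volume, dtype=bool)
    for k in range(cta_volume.shape[0]):
        # bright tubular structures (contrast-filled vessels), hence black_ridges=False
        v = frangi(cta_volume[k].astype(float), black_ridges=False)
        t = threshold_otsu(v)                 # slice-local optimal threshold
        mask[k] = v > t
    return mask
```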
[Modeling and implementation method for the automatic biochemistry analyzer control system].
Wang, Dong; Ge, Wan-cheng; Song, Chun-lin; Wang, Yun-guang
2009-03-01
The automatic biochemistry analyzer is a necessary instrument for clinical diagnostics. In this paper, the system structure is analyzed first. The system problem description and the fundamental principles of dispatch are brought forward. The text then puts emphasis on the modeling of the automatic biochemistry analyzer control system: the object model and the communications model are put forward. Finally, the implementation method is designed. The results indicate that the system based on this model has good performance.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing
Xian, Xuefeng; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost. PMID:28588611
Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.
Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano
2017-09-01
Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches that account for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second relies on wearable sensors and has the detection of eating behaviours as its main goal.
Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano
2016-07-07
Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
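A minimal sketch of the per-algorithm accuracy-prediction idea, assuming scikit-learn's DecisionTreeRegressor maps (tumour volume, peak-to-background SUV ratio, texture metric) to an expected DSC per PET-AS method; the method names and training data below are synthetic placeholders, not ATLAAS itself:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# hypothetical training data: three features per scan, and the DSC each
# PET-AS method achieved against the known true contour
X = rng.uniform([1, 2, 0], [60, 20, 1], size=(100, 3))   # volume, SUV ratio, texture
dsc_per_method = {m: rng.uniform(0.5, 1.0, 100)
                  for m in ("thresh40", "region_grow", "watershed")}

# one regression tree per candidate segmentation method
trees = {m: DecisionTreeRegressor(max_depth=4).fit(X, y)
         for m, y in dsc_per_method.items()}

def select_method(features):
    """Pick the PET-AS method with the highest predicted DSC for a new scan."""
    features = np.asarray(features).reshape(1, -1)
    return max(trees, key=lambda m: trees[m].predict(features)[0])

print(select_method([25.0, 8.0, 0.4]))
```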
Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O
2016-10-01
The striatum is the primary target in regional 11C-raclopride PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and the test-retest variability and reliability of the outcome measures, using data from seven healthy male participants who were scanned twice with 11C-raclopride PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability = 4.5%, SEM = 0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in 11C-raclopride PET studies.
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
Experimental Study for Automatic Colony Counting System Based Onimage Processing
NASA Astrophysics Data System (ADS)
Fang, Junlong; Li, Wenzhe; Wang, Guoxin
At present, colony counting in many experiments is performed manually, which is difficult to do quickly and accurately. A new automatic colony counting system was therefore developed. Using image-processing technology, we studied the feasibility of objectively distinguishing white bacterial colonies from clear plates according to RGB color theory. An optimal chromatic value was obtained from extensive experiments on the distribution of chromatic values. It has been proved that the method greatly improves the accuracy and efficiency of colony counting, and that the counting result is not affected by the inoculation method or by colony shape or size. The results reveal that automatic detection of colony counts using image-processing technology is an effective approach.
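A minimal sketch of RGB-threshold colony counting with OpenCV; the chromatic threshold and minimum colony area below are illustrative, not the optimal values found in the study:

```python
import cv2
import numpy as np

def count_colonies(plate_bgr, min_area=10):
    """Count white colonies on a clear plate via color threshold + labeling."""
    # keep near-white pixels: all three channels above a chromatic threshold
    mask = cv2.inRange(plate_bgr, (180, 180, 180), (255, 255, 255))
    # remove isolated specks before labeling
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # label 0 is the background; drop components below the minimum colony area
    return sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area)
```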
Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.
He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang
2016-05-01
A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and the proposed deformable closest point registration) are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect the liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
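A minimal sketch of the first-level idea, assuming scikit-learn's AdaBoostClassifier over simple per-voxel features (raw intensity and a local mean); the feature set is an assumption for illustration, not the paper's exact descriptors:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import AdaBoostClassifier

def voxel_features(volume):
    """Stack per-voxel features: raw intensity and a 5-voxel local mean."""
    local_mean = uniform_filter(volume.astype(float), size=5)
    return np.stack([volume.ravel(), local_mean.ravel()], axis=1)

def train_voxel_classifier(volumes, liver_masks):
    """Fit an AdaBoost liver/non-liver voxel classifier on labeled CT volumes."""
    X = np.concatenate([voxel_features(v) for v in volumes])
    y = np.concatenate([m.ravel() for m in liver_masks])
    return AdaBoostClassifier(n_estimators=50).fit(X, y)

# rough = clf.predict(voxel_features(new_ct)).reshape(new_ct.shape)
# The rough mask would then initialize the shape model by registration.
```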
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) calculation of discontinuity intersections based on plane geometry, 3) extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
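A minimal sketch of the discontinuity-extraction step, assuming a plain NumPy RANSAC plane fit; an actual implementation would detect multiple planes by iterating and removing inliers per detected discontinuity:

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
    """Best-fit plane by RANSAC over an (N, 3) point cloud; returns (normal, inliers)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                        # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)    # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the normal by SVD over the inlier set (least-squares plane fit)
    centered = points[best_inliers] - points[best_inliers].mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]
    return normal, best_inliers
```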
PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLED DEVICE IMAGING TECHNOLOGY.
Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J
2015-01-01
With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming precise. This paper proposes a presbyopia optometry method based on diopter regulation and charge-coupled device (CCD) imaging technology and designs an optical path for the measurement system. The method projects a test figure onto the fundus, and the image reflected from the fundus is detected by the CCD. The image is then automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameters. This is a fully automatic objective optometry method that eliminates subjective factors of the tested subject. Furthermore, it can acquire the lens parameters for presbyopia accurately and quickly and can also be used to measure the lens parameters for hyperopia, myopia, and astigmatism.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiogram. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei
2013-03-01
An automatic framework is proposed to segment the right ventricle in ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of the images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1 ± 2.3% and 83.6 ± 7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
NASA Astrophysics Data System (ADS)
Tummala, Sudhakar; Dam, Erik B.
2010-03-01
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.
Automatic textual annotation of video news based on semantic visual object extraction
NASA Astrophysics Data System (ADS)
Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem
2003-12-01
In this paper, we present our work on automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots based on close-up estimation. In the second part, we automatically detect and recognize the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, a hybrid text-image indexing and retrieval platform for video news.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapping rectangle is computed from the geographical locations of the reference and mosaic images, and feature points are extracted from both images by the scale-invariant feature transform (SIFT) algorithm, only within the overlapping region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
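A minimal sketch of the matching core with OpenCV (in Python rather than the paper's C++): SIFT keypoints, ratio-test matching, and a RANSAC homography; file names and thresholds are illustrative, and in practice detection would be restricted to the computed overlap rectangle:

```python
import cv2
import numpy as np

ref = cv2.imread("reference_tile.tif", cv2.IMREAD_GRAYSCALE)
mos = cv2.imread("mosaic_tile.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(ref, None)   # restrict to the overlap region in practice
k2, d2 = sift.detectAndCompute(mos, None)

# ratio-test matching to keep only distinctive correspondences
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)

# warp the mosaic image into the reference frame; linear weighted blending
# in the overlap region would follow
warped = cv2.warpPerspective(mos, H, (ref.shape[1], ref.shape[0]))
```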
Zhang, Junming; Wu, Yan
2018-03-28
Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection is usually required. Meanwhile, designing handcrafted features is a difficult and time-consuming task that needs the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, there may exist important features for sleep stage classification that we are unaware of. Therefore, a new sleep stage classification system, based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods. The CCNN obtains better classification performance and a considerably faster convergence speed than a convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K
2017-10-01
Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki
2014-03-01
Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most robust segmentation methods, such as statistical shape models (SSM), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into three steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between the primitives of the SSM and the CT image to initialize the SSM in the CT image, and (3) fitting of the SSM to the CT image edges using an affine transformation followed by nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM in the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
Wang, Jieqiong; Miao, Wen; Li, Jing; Li, Meng; Zhen, Zonglei; Sabel, Bernhard; Xian, Junfang; He, Huiguang
2015-11-30
The lateral geniculate nucleus (LGN) is a key relay center of the visual system. Because LGN morphology is affected by different diseases, it is of interest to analyze it by segmentation. However, existing LGN segmentation methods are non-automatic, inefficient, and prone to experimenters' bias. To address these problems, we propose an automatic LGN segmentation algorithm based on T1-weighted imaging. First, prior information about the LGN is used to create a prior mask. Then region growing is applied to delineate the LGN. We evaluated this automatic LGN segmentation method by (1) comparison with manually segmented LGN, (2) anatomically locating the LGN in the visual system via LGN-based tractography, and (3) application to controls and glaucoma patients. The similarity coefficients between the automatically and manually segmented LGN are 0.72 (0.06) for the left LGN and 0.77 (0.07) for the right LGN. LGN-based tractography shows that the subcortical pathway seeded from the LGN passes through the optic tract and also reaches V1 through the optic radiation, which is consistent with the LGN's location in the visual system. In addition, LGN asymmetry as well as LGN atrophy with age is observed in normal controls. The investigation of glaucoma effects on LGN volumes demonstrates that the bilateral LGN volumes shrink in patients. The automatic LGN segmentation is objective, efficient, valid, and applicable. Experimental results proved the validity and applicability of the algorithm. Our method will speed up research on the visual system and greatly enhance studies of different vision-related diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
Younghak Shin; Balasingham, Ilangko
2017-07-01
Colonoscopy is the standard method for polyp screening by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and the support vector machine (SVM) method is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling stages is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based methods, with over 90% classification accuracy, sensitivity, specificity, and precision.
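A minimal sketch of a three-stage convolution-plus-pooling classifier in PyTorch at the scale described; the channel counts, input size, and other hyperparameters are assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class PolypCNN(nn.Module):
    """Three convolution+pooling stages followed by a linear classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 28 * 28, n_classes)  # for 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# logits = PolypCNN()(torch.randn(8, 3, 224, 224))  # a batch of 8 image patches
```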
Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.
Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing
2017-11-01
Delineation of thyroid nodule boundaries from ultrasound images plays an important role in the calculation of clinical indices and the diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates the thyroid nodule segmentation problem as a patch classification task, where the relationship among patches is ignored. Specifically, the CNN uses image patches from images of normal thyroids and thyroid nodules as inputs and generates segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation, and show that the CNN-based model is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves averages of the overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text] over all folds, respectively. Our proposed method is fully automatic, without any user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential clinical applications.
User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.
Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu
2016-04-01
Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interaction, named "strokes" and "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 for the contour method and 22 for the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; and (3) the correlated measures provide insights that can be used to improve user interaction design.
NASA Astrophysics Data System (ADS)
Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing
2016-10-01
Purpose: To find an efficient, non-destructive examination method for revealing words that have disappeared after being written with an automatically disappearing pen. Method: An imaging spectrometer is used to reveal the latent disappeared words on the paper surface, exploiting the different reflection and absorption properties of various substances in different spectral bands. Results: Words written with different disappearing pens on the same paper, or with the same disappearing pen on different papers, can both be revealed well by the spectral imaging examination method. Conclusion: Spectral imaging technology can reveal words that have disappeared after being written with an automatically disappearing pen.
Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D
2017-05-01
MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces is challenging. Here, we present a novel method to create knee joint surface models for the evaluation of PTOA-related joint changes in the rat, using atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segmented datasets, and the resulting segmentations were compared to our novel automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients were calculated to compare methods and were greater than 0.90. Total overlap, union overlap, and mean overlap were calculated to compare the automatic and manual methods and ranged from 0.85 to 0.99. A Euclidean distance comparison was also performed and showed no measurable difference between manual and automatic segmentations. Furthermore, our new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images, and will allow for efficient assessment of bony changes in small animal models of PTOA.
Research on Application of Automatic Weather Station Based on Internet of Things
NASA Astrophysics Data System (ADS)
Jianyun, Chen; Yunfan, Sun; Chunyan, Lin
2017-12-01
In this paper, the Internet of Things (IoT) is briefly introduced, and its application in weather stations is studied. A method of data acquisition and transmission based on the NB-IoT communication mode is proposed, taking IoT technology, digital sensors, and independent power supplies as the technical basis, in order to realize the intelligent interconnection of automatic weather stations and thereby form an automatic weather station based on the Internet of Things. A network structure for IoT-based automatic weather stations is constructed to realize the independent operation of intelligent sensors and wireless data transmission. Research on the networked collection and dissemination of meteorological data, with analysis performed through a data platform, lays the preliminary groundwork for meteorological information publishing standards and provides the data interface for IoT meteorological information receiving terminals, serving the goals of smart cities and smart meteorological services.
NASA Astrophysics Data System (ADS)
Cong, Chao; Liu, Dingsheng; Zhao, Lingjun
2008-12-01
This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie-point pairs from such heterogeneous images according to geographic characteristics. Since there are large differences in texture and corner features between such heterogeneous images, more detailed analyses are performed to find similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Based on these linear features, crossings and corners extracted from them are chosen as GCPs. A similar method is used to find the same features in DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the above two GCP groups. Experiments show that the method can extract GCPs from such images with a reasonable RMS error.
Morphological feature extraction for the classification of digital images of cancerous tissues.
Thiran, J P; Macq, B
1996-10-01
This paper presents a new method for the automatic recognition of cancerous tissues from an image of a microscopic section. Based on shape and size analysis of the observed cells, the method provides the physician with nonsubjective numerical values for four criteria of malignancy. The automatic approach is based on mathematical morphology, and more specifically on the use of geodesic operations. This technique is used first to remove background noise from the image, then to perform a segmentation of the cell nuclei and an analysis of their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.
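The geodesic operations referred to here are standard mathematical morphology; a minimal sketch of background suppression via the h-dome transform (geodesic reconstruction by dilation) using scikit-image, with a synthetic image and an illustrative h value rather than the paper's data:

import numpy as np
from skimage.morphology import reconstruction

# Synthetic image: bright blobs ("nuclei") on a smooth background
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
img = 0.3 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 4000.0)  # background
for cx, cy in [(40, 40), (90, 60), (60, 100)]:
    img += 0.6 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 60.0)  # "nuclei"
img += 0.02 * rng.standard_normal(img.shape)

# h-dome transform: geodesic reconstruction by dilation from (img - h)
h = 0.3
background = reconstruction(img - h, img, method="dilation")
domes = img - background   # bright structures of height > h
mask = domes > 0.1         # crude nuclei segmentation
print(mask.sum(), "pixels flagged as nuclei")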
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm which combines Harris with the circular boundary theory of corners is proposed in this paper. After detecting accurate corner coordinates using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
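For reference, a baseline Harris detection with a relative response threshold looks as follows; this is a sketch of the conventional step the paper improves on, not the proposed circular-boundary filtering, and the file name is hypothetical:

import cv2
import numpy as np

img = cv2.imread("chessboard.png")            # hypothetical calibration image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Harris response: blockSize=2, Sobel aperture=3, k=0.04 (common defaults)
resp = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# A relative threshold on the response map stands in for the manual
# threshold the paper seeks to eliminate; their circular-boundary test
# goes further by rejecting false corners automatically.
corners = np.argwhere(resp > 0.01 * resp.max())
print(f"{len(corners)} candidate corners")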
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
An analysis of automatic human detection and tracking
NASA Astrophysics Data System (ADS)
Demuth, Philipe R.; Cosmo, Daniel L.; Ciarelli, Patrick M.
2015-12-01
This paper presents an automatic method to detect and follow people in video streams. The method uses two techniques to determine the initial position of the person at the beginning of the video file: one based on optical flow and the other based on Histograms of Oriented Gradients (HOG). After defining the initial bounding box, tracking is done using four different trackers: the Median Flow tracker, the TLD tracker, the Mean Shift tracker, and a modified version of the Mean Shift tracker using the HSV color space. The results of these methods are then compared.
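OpenCV ships a pretrained HOG person detector, which is enough to sketch the HOG-based initialization step; the frame file name is hypothetical and this is not the authors' implementation:

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame0.png")               # hypothetical first video frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
if len(rects) > 0:
    x, y, w, h = rects[weights.argmax()]       # strongest detection becomes
    print("initial bounding box:", x, y, w, h) # the tracker's starting box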
An image-based automatic recognition method for the flowering stage of maize
NASA Astrophysics Data System (ADS)
Yu, Zhenghong; Zhou, Huabing; Li, Cuina
2018-03-01
In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, low-rank matrix recovery is used to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. Comparative experiments have been carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
Brain tumor segmentation in MR slices using improved GrowCut algorithm
NASA Astrophysics Data System (ADS)
Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying
2015-12-01
The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. Results of the proposed method are compared with the actual position of the simulated 3D tumor both qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to interactive GrowCut with manual intervention, while providing fully automatic segmentation.
NASA Astrophysics Data System (ADS)
Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.
1990-08-01
In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
NASA Astrophysics Data System (ADS)
Yang, Zili
2017-07-01
Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test this method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.
Hybrid Automatic Building Interpretation System
NASA Astrophysics Data System (ADS)
Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.
2011-09-01
HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work with a high degree of automation. The hybrid method uses different sources, intending to exploit the advantages of each. 3D point clouds usually provide good height and surface data, whereas high-spatial-resolution aerial images provide important information on edges and details of roof objects such as dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS is a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. It continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is performed. In addition, roof objects such as dormers and chimneys are extracted from the aerial images and added to the models. In this paper the methods used are described and some results are presented.
Reproducibility measurements of three methods for calculating in vivo MR-based knee kinematics.
Lansdown, Drew A; Zaid, Musa; Pedoia, Valentina; Subburaj, Karupppasamy; Souza, Richard; Benjamin, C; Li, Xiaojuan
2015-08-01
To describe three quantification methods for magnetic resonance imaging (MRI)-based knee kinematic evaluation and to report on the reproducibility of these algorithms. T2-weighted, fast-spin echo images were obtained of the bilateral knees in six healthy volunteers. Scans were repeated for each knee after repositioning to evaluate protocol reproducibility. Semiautomatic segmentation defined regions of interest for the tibia and femur. The posterior femoral condyles and diaphyseal axes were defined using the previously defined tibia and femur. All segmentation was performed twice to evaluate segmentation reliability. Anterior tibial translation (ATT) and internal tibial rotation (ITR) were calculated using three methods: a tibial-based registration system, a combined tibiofemoral-based registration method with all manual segmentation, and a combined tibiofemoral-based registration method with automatic definition of condyles and axes. Intraclass correlation coefficients and standard deviations across multiple measures were determined. Reproducibility of segmentation was excellent (ATT = 0.98; ITR = 0.99) for both combined methods. ATT and ITR measurements were also reproducible across multiple scans in the combined registration measurements with manual (ATT = 0.94; ITR = 0.94) or automatic (ATT = 0.95; ITR = 0.94) condyles and axes. The combined tibiofemoral registration with automatic definition of the posterior femoral condyles and diaphyseal axes allows for improved knee kinematics quantification with excellent in vivo reproducibility. © 2014 Wiley Periodicals, Inc.
An automatic method for segmentation of fission tracks in epidote crystal photomicrographs
NASA Astrophysics Data System (ADS)
de Siqueira, Alexandre Fioravante; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Tello Saenz, Carlos Alberto; Job, Aldo Eloizo
2014-08-01
Manual identification of fission tracks has practical problems, such as variation due to observer efficiency. An automatic processing method that could identify fission tracks in a photomicrograph would solve this problem and improve the speed of track counting. However, separation of nontrivial images is one of the most difficult tasks in image processing. Several commercial and free software packages are available, but they are designed for specific types of images. In this paper, an automatic method based on starlet wavelets is presented in order to separate fission tracks in mineral photomicrographs. Automation is achieved using the Matthews correlation coefficient, and results are evaluated by precision, recall, and accuracy. This technique is an improvement on a method aimed at segmentation of scanning electron microscopy images. The method is applied to photomicrographs of epidote phenocrystals, in which accuracy higher than 89% was obtained in fission track segmentation, even for difficult images. Algorithms corresponding to the proposed method are available for download. Using the method presented here, a user can easily determine fission tracks in photomicrographs of mineral samples.
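The Matthews correlation coefficient serves here to select, among candidate binarizations, the one agreeing best with a reference mask. A minimal sketch with scikit-learn, replacing the starlet-wavelet step with a plain threshold sweep for brevity (the toy data are not the paper's):

import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_threshold(image: np.ndarray, reference: np.ndarray) -> float:
    """Sweep thresholds and keep the one maximizing MCC vs. a labeled mask."""
    ref = reference.ravel().astype(int)
    best_t, best_mcc = 0.0, -1.0
    for t in np.linspace(image.min(), image.max(), 64):
        pred = (image > t).ravel().astype(int)
        mcc = matthews_corrcoef(ref, pred)
        if mcc > best_mcc:
            best_t, best_mcc = t, mcc
    return best_t

# Toy data: noisy "tracks" over background
rng = np.random.default_rng(1)
truth = rng.random((64, 64)) > 0.9
img = truth * 1.0 + 0.3 * rng.standard_normal((64, 64))
print("selected threshold:", best_threshold(img, truth))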
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels in order to obtain preliminary segmentation results. A CNN can automatically learn deep features adapted to the data, in contrast to handcrafted features. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. This deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
A new user-assisted segmentation and tracking technique for an object-based video editing system
NASA Astrophysics Data System (ADS)
Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark
2004-03-01
This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and then the user-guided and selected objects are continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and determines the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object, based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that are suitable for many digital video applications, such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.
Automatic draft reading based on image processing
NASA Astrophysics Data System (ADS)
Tsujii, Takahiro; Yoshida, Hiromi; Iiguni, Youji
2016-10-01
In marine transportation, a draft survey is a means to determine the quantity of bulk cargo. Automatic draft reading based on computer image processing has been proposed. However, conventional draft mark segmentation may fail when the video sequence contains many regions other than the draft marks and the hull, and the estimated waterline is inherently higher than the true one. To solve these problems, we propose an automatic draft reading method that uses morphological operations to detect draft marks and estimates the waterline for every frame with Canny edge detection and robust estimation. Moreover, we emulate the surveyors' draft reading process to obtain the understanding of the shipper and the receiver. In an experiment in a towing tank, the draft reading error of the proposed method was <1 cm, showing its advantage. It is also shown that accurate draft reading was achieved in a real-world scene.
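A minimal sketch of the edge-based waterline step: Canny edges, a per-column lowest-edge candidate, and a median as a stand-in robust estimator (the paper's exact estimator is not specified here, and the file name is hypothetical):

import cv2
import numpy as np

frame = cv2.imread("hull.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
edges = cv2.Canny(frame, 50, 150)

# Per-column waterline candidate: the lowest edge pixel in each column
rows = [np.max(np.nonzero(edges[:, c])[0])
        for c in range(edges.shape[1]) if edges[:, c].any()]

# The median resists spurious edges from reflections; the paper uses a
# robust estimator, not necessarily this particular one.
waterline_row = int(np.median(rows))
print("estimated waterline at image row", waterline_row)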
Automatic aortic root segmentation in CTA whole-body dataset
NASA Astrophysics Data System (ADS)
Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.
2016-03-01
Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis. Typically, in this application a CTA dataset of the patient's arterial system, from the subclavian artery to the femoral arteries, is obtained to evaluate the quality of the vascular access route and to analyze the aortic root to determine whether, and which, prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes four major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a drug that binds to dopamine transporters in living organisms and is often used in SPECT imaging to observe changes in dopamine uptake activity in the striatum. It is therefore widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of ROIs for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which leads to human error, and they have shortcomings such as long duration, increased effort, and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of the striatal phantom and clinical images. At the same time, the results of automatic analysis with the three templates were compared with results from a conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on automatic semiquantitative analysis results. It was found that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-Destructive Testing is necessary in areas where defects in structures emerge over time due to wear and tear and structural integrity must be maintained. However, manual testing has many limitations: high training cost, long training procedures, and, worse, inconsistent test results. The prime objective of this project is to develop an automatic non-destructive testing system for the wheel-axle shaft of a railway carriage. Various methods, such as neural networks, pattern recognition methods, and knowledge-based systems, are used for such artificial intelligence problems. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering, and averaging. The aim of this step is to reduce noise and make the signal characteristics more distinguishable. Four features are selected: (1) autoregressive model coefficients, (2) standard deviation, (3) Pearson correlation, and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize peak positions and amplitudes. A local-maximum search is carried out before feature computation, which greatly reduces computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically, and test a simulated shaft automatically.
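Two of the three enhancement steps (averaging and zero-phase filtering) can be sketched with SciPy; the passband and signal parameters below are illustrative assumptions, not taken from the paper:

import numpy as np
from scipy.signal import butter, filtfilt

def enhance(scans: np.ndarray, fs: float) -> np.ndarray:
    """Average repeated A-scans, then zero-phase band-pass filter them.

    scans: (n_repeats, n_samples) array of ultrasonic traces.
    """
    avg = scans.mean(axis=0)                             # averaging suppresses noise
    b, a = butter(4, [0.5e6, 5e6], btype="band", fs=fs)  # illustrative band
    return filtfilt(b, a, avg)                           # filtfilt = zero-phase filter

# Toy example: 8 noisy repeats of a 2 MHz echo sampled at 25 MHz
fs = 25e6
t = np.arange(1024) / fs
echo = np.exp(-((t - 20e-6) ** 2) / (2e-6) ** 2) * np.sin(2 * np.pi * 2e6 * t)
scans = echo + 0.5 * np.random.default_rng(2).standard_normal((8, t.size))
clean = enhance(scans, fs)
print("peak at sample", int(np.argmax(np.abs(clean))))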
Design of cylindrical pipe automatic welding control system based on STM32
NASA Astrophysics Data System (ADS)
Chen, Shuaishuai; Shen, Weicong
2018-04-01
The development of the modern economy has rapidly increased the demand for pipeline construction, and pipeline welding has become an important link in it. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks miniature, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with the automatic welding machine mechanism and the welding power system, which operate in coordination with the control system. In this paper, a new control system for automatic pipe welding based on a lower-computer control panel and a host-computer interface is proposed, which has many advantages over traditional automatic welding machines.
Automatic sleep stage classification using two-channel electro-oculography.
Virkkala, Jussi; Hasan, Joel; Värri, Alpo; Himanen, Sari-Leena; Müller, Kiti
2007-10-15
An automatic method for the classification of wakefulness and sleep stages SREM, S1, S2, and SWS was developed based on our two previous studies. The method is based on two-channel electro-oculography (EOG) referenced to the left mastoid (M1). Synchronous electroencephalographic (EEG) activity in S2 and SWS was detected by calculating the cross-correlation and peak-to-peak amplitude difference in the 0.5-6 Hz band between the two EOG channels. Automatic slow-eye-movement (SEM) estimation was used to indicate wakefulness, SREM, and S1. Beta power (18-30 Hz) and alpha power (8-12 Hz) were also used for wakefulness detection. Synchronous 1.5-6 Hz EEG activity and the absence of large eye movements were used to separate S1 from SREM. Simple smoothing rules were also applied. Sleep EEG, EOG, and EMG were recorded from 265 subjects. The system was tuned using data from 132 training subjects and then applied to data from 131 validation subjects different from the training subjects. Cohen's kappa between visual scoring and the new automatic scoring in separating 30-s wakefulness, SREM, S1, S2, and SWS epochs was a substantial 0.62, with epoch-by-epoch agreement of 72%. With automatic subject-specific alpha thresholds for offline applications, results improved to 0.63 and 73%. The automatic method can be further developed and applied to ambulatory sleep recordings using only four disposable, self-adhesive, self-applicable electrodes.
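The two core measurements, inter-channel synchrony in the 0.5-6 Hz band and alpha/beta band power, can be sketched with SciPy as follows; the sampling rate and toy signals are illustrative assumptions, not the study's recordings:

import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 128.0  # Hz, illustrative sampling rate

def band_correlation(eog_l, eog_r, lo=0.5, hi=6.0):
    """Correlation of the two EOG channels within a band: high for synchronous
    EEG activity (S2/SWS), low or negative for genuine eye movements."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    xl, xr = filtfilt(b, a, eog_l), filtfilt(b, a, eog_r)
    return np.corrcoef(xl, xr)[0, 1]

def band_power(x, lo, hi):
    """Mean PSD in [lo, hi] Hz, e.g. alpha (8-12) or beta (18-30)."""
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    return pxx[(f >= lo) & (f <= hi)].mean()

# Toy 30 s epoch: shared slow wave plus independent noise per channel
rng = np.random.default_rng(3)
t = np.arange(int(30 * fs)) / fs
slow = np.sin(2 * np.pi * 1.5 * t)
left = slow + 0.3 * rng.standard_normal(t.size)
right = slow + 0.3 * rng.standard_normal(t.size)
print("0.5-6 Hz correlation:", round(band_correlation(left, right), 2))
print("alpha power:", band_power(left, 8, 12))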
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. ARES (Automatic Routine for line Equivalent widths in stellar Spectra) webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J
2016-08-01
Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, provided that the reduced setups yield results as accurate as conventional systems. This paper investigates the possibility of exploiting the multisource nature of electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74, indicating substantial agreement between automatic and manual scoring.
Target recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian
2017-11-01
An important part of object target recognition is feature extraction, which can be divided into hand-crafted feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity causes a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which can extract features from lower layers to higher layers. These features are more discriminative, which is beneficial to object target recognition.
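A minimal layer-by-layer CNN of this kind can be sketched in PyTorch; the architecture and sizes below are illustrative, not the network used in the paper:

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Hierarchical feature extractor: early conv layers learn low-level
    features, deeper layers compose them into more discriminative ones."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SmallCNN(n_classes=5)
dummy = torch.randn(8, 1, 64, 64)   # batch of 8 single-channel image chips
print(model(dummy).shape)           # -> torch.Size([8, 5])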
Automatic detection of articulation disorders in children with cleft lip and palate.
Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria
2009-11-01
The speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure, and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. At the speaker level, significant correlations of 0.89 between the perceptual evaluation and the automatic system are obtained. The fully automatic system yields a correlation at the speaker level of 0.81 with the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert level without any additional human post-processing.
[An automatic system controlled by microcontroller for carotid sinus perfusion].
Yi, X L; Wang, M Y; Fan, Z Z; He, R R
2001-08-01
To establish a new method for automatically controlling carotid perfusion pressure, a cheap, practical automatic perfusion unit based on an AT89C2051 microcontroller was designed. The unit, an LDB-M perfusion pump, and the carotid sinus of an animal constituted an automatic perfusion system. This system can provide ramp and stepwise up-down perfusion patterns and has been used in baroreflex research. It can ensure the precision and reproducibility of the perfusion pressure curve and improve the technical level in the corresponding medical field.
Automated Assessment of Child Vocalization Development Using LENA
ERIC Educational Resources Information Center
Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-01-01
Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…
Automatic categorization of diverse experimental information in the bioscience literature.
Fang, Ruihua; Schindelman, Gary; Van Auken, Kimberly; Fernandes, Jolene; Chen, Wen; Wang, Xiaodong; Davis, Paul; Tuli, Mary Ann; Marygold, Steven J; Millburn, Gillian; Matthews, Beverley; Zhang, Haiyan; Brown, Nick; Gelbart, William M; Sternberg, Paul W
2012-01-26
Curation of information from bioscience literature into biological knowledge databases is a crucial way of capturing experimental information in a computable form. During the biocuration process, a critical first step is to identify, from all published literature, the papers that contain results for the specific data type the curator is interested in annotating. This step normally requires curators to manually examine many papers to ascertain which few contain information of interest and thus is usually time consuming. We developed an automatic method for identifying papers containing these curation data types among a large pool of published scientific papers, based on the machine learning method Support Vector Machine (SVM). This classification system is completely automatic and can be readily applied to diverse experimental data types. It has been in production use for automatic categorization of 10 different experimental data types in the biocuration process at WormBase for the past two years, and it is in the process of being adopted in the biocuration processes at FlyBase and the Saccharomyces Genome Database (SGD). We anticipate that this method can be readily adopted by various databases in the biocuration community, thereby greatly reducing the time spent on an otherwise laborious and demanding task. We also developed a simple, readily automated procedure to utilize training papers of similar data types from different bodies of literature, such as C. elegans and D. melanogaster, to identify papers with any of these data types for a single database. This approach has great significance because for some data types, especially those of low occurrence, a single corpus often does not have enough training papers to achieve satisfactory performance. We successfully tested the method on ten data types from WormBase, fifteen data types from FlyBase, and three data types from Mouse Genome Informatics (MGI). It is being used in the curation workflow at WormBase for automatic association of newly published papers with ten data types, including RNAi, antibody, phenotype, gene regulation, mutant allele sequence, gene expression, gene product interaction, overexpression phenotype, gene interaction, and gene structure correction. Our methods are applicable to a variety of data types with training sets containing several hundred to a few thousand documents. The approach is completely automatic and thus can be readily incorporated into different workflows at different literature-based databases. We believe that the work presented here can contribute greatly to the tremendous task of automating the important yet labor-intensive biocuration effort.
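The SVM triage step can be sketched with scikit-learn (the authors' exact toolkit and features are not specified here); the corpus below is a toy stand-in for labeled training abstracts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus: abstracts labeled 1 if they contain the data type
docs = [
    "RNAi knockdown of gene X produced a larval arrest phenotype",
    "We report the complete genome assembly and annotation pipeline",
    "double-stranded RNA feeding caused embryonic lethality",
    "A review of signaling pathways in development",
]
labels = [1, 0, 1, 0]   # 1 = contains RNAi curation data

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["dsRNA injection gave a sterile phenotype"]))  # -> [1]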
Khalilzadeh, Mohammad Mahdi; Fatemizadeh, Emad; Behnam, Hamid
2013-06-01
Automatic extraction of the varying regions of magnetic resonance images is required as a prior step in a diagnostic intelligent system. The sparsest representation and high-dimensional features are provided based on a learned dictionary. Classification is performed by computing the reconstruction error of each pixel both locally and non-locally. The results acquired from real and simulated images are superior to those of the best MRI segmentation methods with regard to stability. In addition, segmentation is performed exactly through a formula derived from the distance and sparsity factors, and it is carried out automatically by incorporating the sparsity factor into unsupervised clustering methods, whose results have thereby been improved. Copyright © 2013 Elsevier Inc. All rights reserved.
Automatic Mrf-Based Registration of High Resolution Satellite Video Data
NASA Astrophysics Data System (ADS)
Platias, C.; Vakalopoulou, M.; Karantzalos, K.
2016-06-01
In this paper we propose a deformable registration framework for high resolution satellite video data, able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in minutes and the second in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.
NASA Astrophysics Data System (ADS)
Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.
2013-12-01
In charge of intensity estimation in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS-98 scale. The reliability of automatic intensity estimation is important, as such estimates are used today for automatic shakemap communications and crisis management. Today, automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable considering the increasing number of testimonies at BCSF; nevertheless, it can take incoherent answers into account. We tested several automatic methods (the USGS algorithm, a correlation coefficient, and thumbnails) (Sira et al. 2013, IASPEI) and compared them with the expert SQIs. These methods gave moderate scores (50-60% of SQIs correctly determined, and a further 35-40% within plus or minus one intensity degree); the best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: (1) a multinomial logistic regression model, (2) discriminant analysis (DISQUAL), and (3) support vector machines (SVMs). The first two are standard methods, while the third is more recent. These methods can be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well suited to statistical analysis. The ranking models can then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQIs can be evaluated thanks to the fact that each definitive BCSF SQI is determined by expert analysis. We compare the SQIs obtained by these methods on our database and discuss the coherency and variations between the automatic and manual processes. These methods lead to high scores, with up to 85% of the forms correctly classified and most of the remaining forms classified with a shift of only one intensity degree. This allows us to use the ranking methods as the best automatic methods for fast SQI estimation and fast shakemap production. The next step in improving the use of these methods will be to identify explanations for the forms not classified at the correct value and a way to select the few remaining forms that should be analyzed by an expert. Note that beyond intensity VI, online questionnaires are insufficient and a field survey is indispensable for estimating intensity. For such surveys in France, BCSF leads a macroseismic intervention group (GIM).
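Ranking method (1) can be sketched with scikit-learn's logistic regression, which fits a multinomial model for multiclass targets; the questionnaire encoding below is an illustrative assumption, not the BCSF feature set:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in: each row encodes one questionnaire's answers (e.g. felt,
# objects swung, furniture moved, ...); the target is the expert SQI class.
rng = np.random.default_rng(4)
X = rng.integers(0, 4, size=(500, 10)).astype(float)
sqi = np.clip((X.sum(axis=1) / 8).astype(int) + 1, 1, 6)  # intensities I-VI

model = LogisticRegression(max_iter=1000)  # multinomial for >2 classes
model.fit(X, sqi)
print("training accuracy:", model.score(X, sqi))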
A new method for automatic discontinuity traces sampling on rock mass 3D model
NASA Astrophysics Data System (ADS)
Umili, G.; Ferrero, A.; Einstein, H. H.
2013-02-01
A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually affect trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter, and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing a manual sampling on an orthophotograph of the same rock face.
Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo
2010-01-01
The reference system based on the fourth ventricular landmarks (including the fastigial point and the ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the dataset, expectation-maximization-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb datasets and quantitatively on 18 Internet Brain Segmentation Repository datasets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.
Sabet, Mahsheed; O'Connor, Daryl J.; Greer, Peter B.
2011-01-01
There have been several manual, semi‐automatic and fully‐automatic methods proposed for verification of the position of mechanical isocenter as part of comprehensive quality assurance programs required for linear accelerator‐based stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments. In this paper, a systematic review has been carried out to discuss the present methods for isocenter verification and compare their characteristics, to help physicists in making a decision on selection of their quality assurance routine. PACS numbers: 87.53.Ly, 87.56.Fc, 87.56.‐v PMID:22089022
Optoelectronic imaging of speckle using image processing method
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Wang, Pengfei
2018-01-01
Detailed image processing of laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on a heat-equation PDE; the centerline is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure bubbles in rubber under negative pressure, which could be applied to tire inspection.
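The centerline-extraction step can be sketched with scikit-image's skeletonization (branch pruning and phase interpolation are omitted for brevity, and the fringe mask is synthetic):

import numpy as np
from skimage.morphology import skeletonize

# Toy binary "fringe": a thick diagonal band
mask = np.zeros((64, 64), bool)
for i in range(64):
    mask[i, max(0, i - 3):min(64, i + 3)] = True

centerline = skeletonize(mask)   # one-pixel-wide central line of the fringe
print("fringe pixels:", mask.sum(), "-> centerline pixels:", centerline.sum())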
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
Panicker, Rani Oomman; Soman, Biju; Saini, Gagan; Rajan, Jeny
2016-01-01
Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis. It primarily affects the lungs, but it can also affect other parts of the body. TB remains one of the leading causes of death in developing countries, and its recent resurgence in both developed and developing countries warrants global attention. The number of deaths due to TB is very high (as per the WHO report, 1.5 million died in 2013), although most are preventable if the disease is diagnosed early and treated. There are many tools for TB detection, but the most widely used is sputum smear microscopy. It is done manually and is often time consuming; a laboratory technician is expected to spend at least 15 minutes per slide, limiting the number of slides that can be screened. Many countries, including India, have a dearth of properly trained technicians, who often fail to detect TB cases under the stress of a heavy workload. Automatic methods are generally considered a solution to this problem, and attempts have been made to develop automatic approaches to identify TB bacteria in microscopic sputum smear images. In this paper, we provide a review of automatic methods based on image processing techniques published between 1998 and 2014. The review shows that the accuracy of algorithms for the automatic detection of TB increased significantly over the years, and commercial products based on published work have also started appearing on the market. This review could be useful to researchers and practitioners working in the field of TB automation, providing a comprehensive and accessible overview of the methods in this field of research.
The unbalanced signal measuring of automotive brake drum
NASA Astrophysics Data System (ADS)
Wang, Xiao-Dong; Ye, Sheng-Hua; Zhang, Bang-Cheng
2005-04-01
To support the research and development of an automatic balancing system based on mass removal, this paper addresses the measurement of the unbalance signal and the design of the automatic balancing equipment and its software, with emphasis on the testing system of the automotive brake drum balancer. A band-pass filter with good automatic tracking capability, filtering performance, and stability is designed. An automatic mass-removal balancing system based on virtual instrumentation is designed, and a laboratory system has been constructed. The results of contrast experiments indicate a notable single-plane automatic balancing effect and high dynamic balancing precision, demonstrating the practical value of the system.
Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M
2017-02-01
Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce the computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal.
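The temporal features named for noise detection and identification can be sketched in NumPy as follows; the residual-power term is an illustrative roughness proxy, not a feature claimed by the authors:

import numpy as np

def noise_features(x: np.ndarray, win: int = 5) -> dict:
    """Temporal features used to detect/identify noise in an ECG segment."""
    smooth = np.convolve(x, np.ones(win) / win, mode="same")  # moving average
    diff = np.diff(x)                                         # first-order difference
    return {
        "turning_points": int(np.sum(np.diff(np.sign(diff)) != 0)),
        "max_abs_amplitude": float(np.max(np.abs(x))),
        "zero_crossings": int(np.sum(np.diff(np.sign(x - x.mean())) != 0)),
        "residual_power": float(np.mean((x - smooth) ** 2)),  # roughness proxy
    }

rng = np.random.default_rng(5)
t = np.linspace(0, 2, 720)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(noise_features(ecg_like))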
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airport information from remote sensing images is very important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating, and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we develop a prototype system, and the results show that our method achieves good performance.
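The unmodified Otsu baseline that the paper's modified algorithm builds on can be sketched with scikit-image (the paper's modification is not specified here, and the image is synthetic):

import numpy as np
from skimage.filters import threshold_otsu

# Toy bimodal "image": dark background, bright runway-like strip
rng = np.random.default_rng(6)
img = rng.normal(0.2, 0.05, (128, 128))
img[60:68, :] = rng.normal(0.8, 0.05, (8, 128))

t = threshold_otsu(img)          # maximizes between-class variance
runway_mask = img > t
print("Otsu threshold:", round(float(t), 3), "pixels:", runway_mask.sum())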
Model-Based Reasoning in Humans Becomes Automatic with Training.
Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J
2015-09-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
Semi-automatic knee cartilage segmentation
NASA Astrophysics Data System (ADS)
Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus
2006-03-01
Osteoarthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of cartilage breakdown is central in monitoring disease progression, and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases are averaged out in the overall results, they reduce the mean accuracy and precision and thereby necessitate larger/longer studies. Since severe OA cases are often the most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias into the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis of individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
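The combination of watershed and posterior map admits a compact illustration. The sketch below, in Python with scikit-image, is one plausible reading of that step under stated assumptions; the mixing weight and the paper's sub-voxel handling are not specified in the abstract.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def interactive_correction(scan, posterior, seed_markers):
    """Sketch of the interactive step's core idea, on a 2-D slice: run a
    watershed on a relief mixing the scan's gradient magnitude with the
    classifier's uncertainty (1 - posterior), so basins respect both image
    edges and the pre-segmentation. `seed_markers` is an integer label image
    built from user clicks (0 = unlabeled); the 0.5 weight is an assumption."""
    relief = sobel(scan) + 0.5 * (1.0 - posterior)
    return watershed(relief, markers=seed_markers)
```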
Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong
2016-12-01
Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture that arises from the projective nature of conventional coronary angiography. Due to the large amount of data and the complexity of vascular shapes, manual annotation has become increasingly unrealistic; a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domain remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly but also achieve good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances SI, FNVF, FVPF and TPVF were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are modeled as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. A mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed, with a Particle Swarm Optimization algorithm used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are derived automatically. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia-level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the MATLAB technical computing language is provided online.
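One plausible form of such a chirp atom, sketched in Python under assumed notation; the paper's exact model order and windowing are not given in the abstract. In the paper's pipeline, a PSO would fit these parameters by minimizing the residual between the model and the recorded SEP.

```python
import numpy as np

def gaussian_windowed_chirp(t, amp, t0, width, phase_coeffs):
    """Sketch of a polynomial-phase chirp atom for one SEP component: a
    Gaussian envelope times a cosine whose phase is a polynomial in (t - t0).
    All parameter names and the Gaussian windowing are illustrative
    assumptions, not the paper's notation."""
    tau = t - t0
    # Phase polynomial c0 + c1*tau + c2*tau^2 + ... (coefficients low-to-high).
    phase = np.polynomial.polynomial.polyval(tau, phase_coeffs)
    envelope = amp * np.exp(-0.5 * (tau / width) ** 2)
    return envelope * np.cos(2.0 * np.pi * phase)
```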
Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis
NASA Astrophysics Data System (ADS)
Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.
2013-11-01
Nowadays, during international negotiations on separating disputed areas, manual adjustment is the only method applied to matching the delimitation line to the real terrain, which not only consumes much time and labor but also cannot ensure high precision. This paper therefore explores automatic matching between the two and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, a cost layer is derived through dedicated processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed using Least-Cost Path Analysis. Third, the whole automatic matching model is built via ModelBuilder so that it can be shared and reused. Finally, the result of automatic matching is analyzed from several aspects, including delimitation laws and benefits to both sides. We conclude that the automatic matching method is feasible and effective.
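The least-cost tracing itself is a standard raster operation. A minimal sketch, assuming a binary mask of terrain feature lines and illustrative cost weights; scikit-image's route_through_array stands in for whatever GIS implementation the authors used.

```python
import numpy as np
from skimage.graph import route_through_array

def match_delimitation(feature_mask, start, end):
    """Sketch of the least-cost step: a cost raster that is cheap on terrain
    feature lines (ridges, rivers) and expensive elsewhere, traversed between
    two fixed delimitation endpoints given as (row, col) tuples. The weights
    1/100 are illustrative; real costs would encode the delimitation laws
    the paper formalizes."""
    cost = np.where(feature_mask, 1.0, 100.0)
    indices, total_cost = route_through_array(cost, start, end,
                                              fully_connected=True)
    return np.asarray(indices), total_cost
```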
Automatic atlas-based three-label cartilage segmentation from MR knee images
Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc
2016-01-01
Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty to locate cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683
A new method of automatic landmark tagging for shape model construction via local curvature scale
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li
2008-03-01
Segmentation of organs in medical images is a difficult task that very often requires model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among the shapes. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task, and almost impossible in 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated, as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. These landmarks are then propagated to each shape in the training set, thereby defining the correspondences among the shapes. Altogether, 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot datasets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.
A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.
Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa
2015-12-01
Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET that makes two main contributions. First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers; our solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that reduces the tracking of fiducial markers to an incomplete point set registration problem. Because a global optimization of the point set registration is performed, the tracking result is independent of the initial image position in the tilt series, allowing robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to the current best result in IMOD with its semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (requiring only a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Zhipeng; Lei, Lin; Zhou, Shilin
2015-10-01
Automatic image registration is a vital yet challenging task, particularly for non-rigidly deformed images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images affected by flutter. Traditional non-rigid image registration methods rely on correctly matched corresponding landmarks, which usually require artificial markers; locating accurate point positions and obtaining accurate corresponding point sets is a rather challenging task. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. First, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Finally, based on the accurate corresponding point sets, the two images are registered using a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is conducted on non-rigidly deformed remote sensing images. The three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with traditional methods.
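The final TPS step maps the matched control points onto their counterparts and interpolates everywhere else. A minimal sketch with SciPy's thin-plate-spline RBF, an assumed stand-in for the authors' implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp_points(src_pts, dst_pts, query_pts):
    """Learn a thin-plate-spline mapping from matched source landmarks
    (n, 2) to their target counterparts (n, 2), then apply it to arbitrary
    query points (e.g., a pixel grid) of the image being registered."""
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
    return tps(query_pts)
```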
Diagnostic accuracy of automatic normalization of CBV in glioma grading using T1- weighted DCE-MRI.
Sahoo, Prativa; Gupta, Rakesh K; Gupta, Pradeep K; Awasthi, Ashish; Pandey, Chandra M; Gupta, Mudit; Patir, Rana; Vaishya, Sandeep; Ahlawat, Sunita; Saha, Indrajit
2017-12-01
The aim of this retrospective study was to compare the diagnostic accuracy of a proposed automatic normalization method for quantifying relative cerebral blood volume (rCBV) with the existing contralateral region-of-interest (ROI)-based CBV normalization method for glioma grading using T1-weighted dynamic contrast-enhanced MRI (DCE-MRI). Sixty patients with histologically confirmed gliomas were included retrospectively. CBV maps were generated using T1-weighted DCE-MRI and normalized by the contralateral-ROI-based method (rCBV_contra) and by unaffected white matter (rCBV_WM) and unaffected gray matter (rCBV_GM), the latter two generated automatically. An expert radiologist with >10 years of experience in DCE-MRI and a non-expert with one year of experience measured rCBVs independently. Cutoff values for glioma grading were determined by ROC analysis. Agreement of histology with rCBV_WM, rCBV_GM and rCBV_contra was studied using kappa statistics and the intra-class correlation coefficient (ICC). The diagnostic accuracy of glioma grading using rCBV_contra measured by the expert radiologist was high (sensitivity=1.00, specificity=0.96, p<0.001) compared with the non-expert user (sensitivity=0.65, specificity=0.78, p<0.001). In contrast, both the expert and non-expert users showed similar diagnostic accuracy for the automatic rCBV_WM (sensitivity=0.89, specificity=0.87, p=0.001) and rCBV_GM (sensitivity=0.81, specificity=0.78, p=0.001) measures. Further, the contralateral-based method applied by the expert user showed the highest agreement with histological tumor grading (kappa=0.96, agreement 98.33%, p<0.001), whereas the automatic normalization method showed the same percentage of agreement for both expert and non-expert users; rCBV_WM showed an agreement of 88.33% (kappa=0.76, p<0.001) with histopathological grading. It was inferred that, in the absence of an expert user, automated normalization of CBV using the proposed method could provide better diagnostic accuracy than the manual contralateral-based approach. Copyright © 2017 Elsevier Inc. All rights reserved.
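The normalization itself reduces to dividing the CBV map by the mean value in an automatically segmented reference tissue; how that reference region is segmented is not described in the abstract. A minimal sketch under that assumption:

```python
import numpy as np

def normalize_cbv(cbv_map, reference_mask):
    """Divide a CBV map by the mean CBV inside a reference-tissue mask
    (unaffected white or gray matter) to obtain rCBV_WM or rCBV_GM.
    The reference segmentation step is assumed to be given."""
    ref_mean = cbv_map[reference_mask].mean()
    return cbv_map / ref_mean
```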
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Currently, the analog microscope is widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture and others. An automatic method is preferred because it greatly reduces the work involved. Stepper motors move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, with images of microbiological objects transferred from the eyepiece of the microscope to the computer screen. Scene analysis allows regions with pronounced abnormalities to be located in order to focus the specialist's attention. This paper considers a method for stitching microbial images obtained from a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on analysis of the data located in the camera's field of view, and we propose to use a neural network for boundary search. The stitching boundary is determined from the analysis of object borders. For autofocus, we use the criterion of minimum thickness of the object boundary lines, analyzing the object located on the focal axis of the camera. We use a border-recovery method and a projective transform for the boundaries of objects that are shifted relative to the focal axis. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
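For orientation, the geometric core of any such stitching pipeline is feature matching plus a robust homography; the authors' neural-network seam placement is not reproduced here. A generic OpenCV sketch, assuming 8-bit grayscale inputs:

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Generic feature-based stitching sketch (ORB + RANSAC homography).
    Only the standard geometric alignment such a pipeline builds on is
    shown; seam placement along object borders is not implemented.
    Inputs are assumed to be 8-bit grayscale images."""
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_b.shape[:2]
    return cv2.warpPerspective(img_a, H, (w * 2, h))  # canvas size is illustrative
```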
Direct volume estimation without segmentation
NASA Astrophysics Data System (ADS)
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including those of the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation remains challenging, computationally expensive, and completely unsolved for the RV. Toward accurate and efficient direct volume estimation, our group has been investigating learning-based methods that bypass segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally handle various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable the diagnosis of cardiac disease to be conducted more efficiently and reliably.
Automatic Construction of English/Chinese Parallel Corpora.
ERIC Educational Resources Information Center
Yang, Christopher C.; Li, Kar Wing
2003-01-01
Discussion of multilingual corpora and cross-lingual information retrieval focuses on research that constructed English/Chinese parallel corpus automatically from the World Wide Web. Presents an alignment method which is based on dynamic programming to identify one-to-one Chinese and English title pairs and discusses results of experiments…
Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe
2011-03-01
This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas of 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between the automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided results similar to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately and provided realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.
Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng
2013-01-01
Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.
[An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].
Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang
2014-07-01
Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. This paper proposes a method for automatic peak detection in LIBS spectra that enhances the ability to resolve overlapping peaks and improves adaptivity. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in its ability to distinguish overlapping peaks and in the precision of peak detection, and it can be applied to data processing in LIBS.
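SciPy ships an implementation of the same CWT ridge-line idea, which makes a compact reference sketch; the paper's mother-wavelet choice, scale optimization, and ridge-correction step are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def detect_peaks(spectrum, min_width=1, max_width=30):
    """CWT ridge-line peak detection over a range of wavelet scales.
    The scale range is an illustrative choice; the paper optimizes the
    scale and shift factors and corrects the ridges before reporting."""
    return find_peaks_cwt(spectrum, widths=np.arange(min_width, max_width))
```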
Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L
2018-01-01
The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic so that they can handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, uses simple and efficient techniques that scale to large datasets, and produces quality retrieval results from an unannotated CT scan. Our experimental results on 9 CT queries over a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
Path searching based on distribution network topology has proven effective in setting software, and a path searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, it uses the line loads traversed by the search path, weighted toward important loads, to form an optimized division scheme of planned islands in which each DG serves as a power source balanced against the important local loads. Finally, COBASE software and deployed distribution network automation software are used to illustrate the effectiveness of this automatic restoration scheme.
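The search skeleton can be sketched as an ordinary graph traversal. A hedged Python sketch with NetworkX, in which the node attributes ("load", "capacity") and the cumulative-load cutoff are illustrative assumptions standing in for the paper's load-balancing rules:

```python
import networkx as nx

def planned_islands(grid, fault_switch, dg_sources):
    """Sketch: from the fault-isolation switch, find a feeding path to each
    DG source and collect load buses along it until the DG's capacity would
    be exceeded. `grid` is a graph whose nodes carry assumed 'load' and
    'capacity' attributes; the paper's important-load weighting is reduced
    here to a simple cumulative cutoff."""
    islands = {}
    for dg in dg_sources:
        path = nx.shortest_path(grid, fault_switch, dg, weight="length")
        served, total = [], 0.0
        for bus in path:
            load = grid.nodes[bus].get("load", 0.0)
            if total + load > grid.nodes[dg]["capacity"]:
                break
            served.append(bus)
            total += load
        islands[dg] = served
    return islands
```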
NASA Astrophysics Data System (ADS)
Wang, Jen-Cheng; Liao, Min-Sheng; Lee, Yeun-Chung; Liu, Cheng-Yue; Kuo, Kun-Chang; Chou, Cheng-Ying; Huang, Chen-Kang; Jiang, Joe-Air
2018-02-01
The performance of photovoltaic (PV) modules under outdoor operation is greatly affected by their location and environmental conditions. The temperature of a PV module gradually increases as it is exposed to solar irradiation, resulting in degradation of its electrical characteristics and power generation efficiency. This study adopts wireless sensor network (WSN) technology to develop an automatic water-cooling system for PV modules in order to improve their power generation efficiency. A temperature estimation method is developed to quickly and accurately estimate PV module temperatures based on weather data provided by the WSN monitoring system. Further, an estimation method is also proposed for evaluating the electrical characteristics and output power of the PV modules, performed remotely via a control platform. The automatic WSN-based water-cooling mechanism is designed to prevent the PV module temperature from reaching saturation. With each PV module equipped with the WSN-based cooling system, ambient conditions are monitored automatically and the module temperature is controlled by sprinkling water on the panel surface. The field-test experimental results show an increase of approximately 17.75% in the energy harvested by the PV modules when using the proposed WSN-based cooling system.
Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming
2017-08-29
High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
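The measurement idea (segment foci inside a cropped nucleus, then count and integrate intensity) is compact enough to sketch. The Python sketch below assumes a single global threshold scaled by one preset parameter, mirroring the abstract's "only one preset parameter"; the authors' actual cropping and segmentation details are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def quantify_foci(nucleus_img, k=1.0):
    """Sketch: segment gammaH2AX foci inside a cropped nucleus image with a
    k-scaled Otsu threshold (k is the single tunable parameter here, an
    assumption), then report focus count and integrated intensity."""
    t = k * threshold_otsu(nucleus_img)
    foci = label(nucleus_img > t)
    n_foci = int(foci.max())
    total_intensity = float(nucleus_img[foci > 0].sum())
    return n_foci, total_intensity
```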
Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation
NASA Astrophysics Data System (ADS)
Lu, Kongkuo; Hall, Christopher S.
2014-03-01
Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, yet the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application-dependent, while manual tracing methods are extremely time-consuming and subject to a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction may be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach that improves the boundary-searching ability of the live-wire method in order to reduce the necessary user interaction while maintaining segmentation performance. Based on the results of segmenting 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. <80%, when auto-enhancement is applied to live-wire segmentation.
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools that enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools that automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated that automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools convert AI representations into Ada code by translating the rules into Ada code modules and then linking them with an Activation Framework-based run-time environment to form an executable load module; this method is based on the use of Evidence Flow Graphs, a data flow representation for intelligent systems. The development of prototype test generation and evaluation software used to test the resultant code is also discussed; this testing was performed automatically using Monte Carlo techniques based on a constraint-based description of the required performance of the system.
Wang, Lin-Wei; Qu, Ai-Ping; Liu, Wen-Lou; Chen, Jia-Mei; Yuan, Jing-Ping; Wu, Han; Li, Yan; Liu, Juan
2016-02-03
As a widely used proliferative marker, Ki67 has important impacts on cancer prognosis, especially for breast cancer (BC). However, variations in analytical practice make it difficult for pathologists to measure the Ki67 index manually. This study aimed to establish quantum dot (QD)-based double imaging of nuclear Ki67 as a red signal by QDs-655 and cytoplasmic cytokeratin (CK) as a yellow signal by QDs-585, together with organic dye imaging of the cell nucleus as a blue signal by 4',6-diamidino-2-phenylindole (DAPI), and to develop a computer-aided automatic method for Ki67 index measurement. The newly developed automatic computerized Ki67 measurement could efficiently recognize and count Ki67-positive cancer cell nuclei (red signals) and cancer cell nuclei (blue signals) within cancer cell cytoplasm (yellow signals). Comparisons of the computerized, visual, and marked Ki67 indices for 30 patients (90 images) with Ki67 ≤ 10% (low grade), 10% < Ki67 < 50% (moderate grade), and Ki67 ≥ 50% (high grade) showed that computerized Ki67 counting is better than visual Ki67 counting, especially for the Ki67 low and moderate grades. Based on QD-based double imaging and organic dye imaging of BC tissues, this study successfully developed an automatic computerized Ki67 counting method to measure the Ki67 index.
Comparison of landmark-based and automatic methods for cortical surface registration
Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.
2009-01-01
Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696
Rapid, Potentially Automatable, Method Extract Biomarkers for HPLC/ESI/MS/MS to Detect and Identify BW Agents
1997-11-01
...status can sometimes be reflected in the infectious potential or drug resistance of those pathogens. For example, in Mycobacterium tuberculosis... its antibiotic resistance and prediction of pathogenicity amongst Mycobacterium spp. based on signature lipid biomarkers...
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interaction necessary for the definition of a multi-block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids; the original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm, with the global domain recursively partitioned into sub-domains. With either method, each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
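Transfinite interpolation, the per-block meshing step mentioned above, has a standard closed form: blend the four boundary curves and subtract the bilinear corner term. A self-contained NumPy sketch:

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """Standard 2-D transfinite interpolation: fill a block's interior grid
    from its four boundary curves. `bottom`/`top` are (n, 2) point arrays,
    `left`/`right` are (m, 2); corners must agree where edges meet.
    Returns an (m, n, 2) grid of interior points."""
    m, n = left.shape[0], bottom.shape[0]
    xi = np.linspace(0.0, 1.0, n)[None, :, None]    # varies along bottom/top
    eta = np.linspace(0.0, 1.0, m)[:, None, None]   # varies along left/right
    # Edge blend minus the bilinear correction built from the four corners.
    grid = ((1 - eta) * bottom[None, :, :] + eta * top[None, :, :]
            + (1 - xi) * left[:, None, :] + xi * right[:, None, :]
            - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0] + xi * eta * top[-1]))
    return grid
```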
A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.
Sarrouti, Mourad; Ouatik El Alaoui, Said
2017-05-18
Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which assigns a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method achieves significantly improved performance compared with four baseline systems: roughly a 10-point increase over the best baseline in terms of accuracy. Moreover, the results show that using handcrafted lexico-syntactic patterns as the feature provider for a support vector machine (SVM) leads to the highest accuracy, 89.40%. The proposed method can thus automatically classify BioASQ questions into one of the four categories (yes/no, factoid, list, and summary) with the best classification performance among the compared systems.
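The classification skeleton is conventional. A tiny runnable sketch with scikit-learn, in which plain TF-IDF n-grams stand in for the paper's handcrafted lexico-syntactic patterns (so the accuracy it attains is not the paper's); the example questions and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data standing in for the BioASQ-annotated corpus.
questions = ["Is aspirin effective against stroke?",
             "Which genes are associated with ALS?"]
labels = ["yesno", "list"]

# TF-IDF unigrams/bigrams feed a linear SVM, the classifier family the
# paper reports as strongest with its handcrafted features.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)
print(clf.predict(["Which proteins interact with TP53?"]))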
Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs
Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander
2016-01-01
We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows integrating different levels of language description into a single structure. We extract textual patterns based on features obtained from shortest path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms the state of the art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740
Cross spectral, active and passive approach to face recognition for improved performance
NASA Astrophysics Data System (ADS)
Grudzien, A.; Kowalski, M.; Szustakowski, M.
2017-08-01
Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular face recognition methods, which rely on processing visible-range information, remain imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.
Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza
2011-07-01
Uncontrollable and unlimited cell growth leads to tumor genesis in the brain. If brain tumors are not diagnosed early and cured properly, they can cause permanent brain damage or even death. As in all methods of treatment, information about tumor position and size is important for successful treatment; hence, finding an accurate and fully automated method for providing this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved scale-based fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. The algorithm is independent of tumor type in terms of pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) of 92.89%, overlap fraction (OF) of 91.75%, and extra fraction (EF) of 3.95%) indicate a higher performance of the proposed approach compared with conventional methods, especially in MR images with low-contrast tumor regions. The suggested method is thus useful for increasing the ability to automatically estimate tumor size and position in brain tissues, which supports more accurate planning of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
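The three evaluation criteria have standard definitions, which the abstract does not spell out; the sketch below assumes the usual ones.

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Assumed standard definitions over boolean masks S (segmentation) and
    R (reference): SI = 2*|S & R| / (|S| + |R|), OF = |S & R| / |R|,
    EF = |S & ~R| / |R|."""
    inter = np.logical_and(seg, ref).sum()
    si = 2.0 * inter / (seg.sum() + ref.sum())
    of = inter / ref.sum()
    ef = np.logical_and(seg, ~ref).sum() / ref.sum()
    return si, of, ef
```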
Automated feature detection and identification in digital point-ordered signals
Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.
1998-01-01
A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically, without initial operator set-up and without subjective operator judgement of features.
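Morphological filtering on a 1-D signal is easy to illustrate. A minimal sketch, assuming a grey-scale top-hat (signal minus its opening) as the feature detector and a robust noise-scaled threshold; the window width and threshold rule are illustrative choices, not the authors'.

```python
import numpy as np
from scipy.ndimage import grey_opening

def tophat_features(signal, width=15, k=3.0):
    """Sketch: a grey-scale top-hat suppresses the slowly varying baseline
    and leaves candidate features, which are kept if they exceed k times a
    robust noise scale (median absolute deviation). Returns sample indices
    of candidate feature points."""
    tophat = signal - grey_opening(signal, size=width)
    noise = np.median(np.abs(tophat - np.median(tophat)))  # robust scale
    return np.where(tophat > k * noise)[0]
```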
Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm
2005-01-01
The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, we present investigations of feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods in discriminating between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.
An automatic rat brain extraction method based on a deformable surface model.
Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M
2013-08-15
The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.
Automatic knee cartilage delineation using inheritable segmentation
NASA Astrophysics Data System (ADS)
Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.
2008-03-01
We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin-plate-spline interpolation is then used to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are finely adjusted by automatic iterative adaptation to the image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83±6% compared with manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9±7% as the secondary endpoint. Because cartilage is a thin structure, even small distance deviations result in large per-voxel errors, rendering the primary endpoint a hard criterion.
Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.
Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu
2015-01-01
Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
Experiments in automatic word class and word sense identification for information retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauch, S.; Futrelle, R.P.
Automatic identification of related words and automatic detection of word senses are two long-standing goals of researchers in natural language processing. Word class information and word sense identification may enhance the performance of information retrieval systems. Large online corpora and increased computational capabilities make new techniques based on corpus linguistics feasible. Corpus-based analysis is especially needed for corpora from specialized fields for which no electronic dictionaries or thesauri exist. The methods described here use a combination of mutual information and word context to establish word similarities. Then, unsupervised classification is done using clustering in the word space, identifying word classes without pretagging. We also describe an extension of the method to handle the difficult problems of disambiguation and of determining part-of-speech and semantic information for low-frequency words. The method is powerful enough to produce high-quality results on a small corpus of 200,000 words from abstracts in a field of molecular biology.
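The mutual-information ingredient is easy to sketch. Below, pointwise mutual information over sentence-level co-occurrence in plain Python; the richer context vectors and the clustering step are not reproduced, and the sentence-as-window definition is an assumption.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences, min_count=5):
    """Pointwise mutual information between words co-occurring in the same
    sentence: PMI(w1, w2) = log(P(w1, w2) / (P(w1) * P(w2))), with
    probabilities estimated as sentence frequencies. Pairs rarer than
    `min_count` are dropped as unreliable."""
    word_counts, pair_counts, n = Counter(), Counter(), 0
    for sent in sentences:
        words = set(sent.split())
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
        n += 1
    pmi = {}
    for pair, c in pair_counts.items():
        if c < min_count:
            continue
        w1, w2 = tuple(pair)
        pmi[(w1, w2)] = math.log(
            (c / n) / ((word_counts[w1] / n) * (word_counts[w2] / n)))
    return pmi
```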
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wang, Gaochao; Tse, Peter W.; Yuan, Maodan
2018-02-01
Visual inspection and assessment of the condition of metal structures are essential for safety. Pulse thermography produces visible infrared images, which have been widely applied to detect and characterize defects in structures and materials. When active thermography, a non-destructive testing tool, is applied, the need for considerable manual checking can be avoided. However, detecting an internal crack with active thermography remains difficult, since such a crack is usually invisible in the collected sequence of infrared images, which makes automatic detection even harder; detection can also be hindered by a complicated inspection environment. With the aim of providing a robust and automatic visual inspection method, a computer vision-based thresholding method is proposed. In this paper, the image signals are a sequence of infrared images collected from an experimental setup with a thermal camera and two flash lamps as the stimulus. The contrast of the pixels in each frame is enhanced by the Canny operator and then reconstructed by a triple-threshold system. Two features, the mean value in the time domain and the maximal amplitude in the frequency domain, are extracted from the reconstructed signal to help distinguish crack pixels from others. Finally, a binary image indicating the location of the internal crack is generated by a K-means clustering method. The proposed procedure has been applied to an iron pipe containing two internal cracks and surface abrasion, improving on existing computer vision-based automatic crack detection methods. In the future, the proposed method can be applied to the automatic detection of internal cracks in large numbers of infrared images in industrial settings.
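The pipeline's skeleton (Canny enhancement, per-pixel time- and frequency-domain features, a two-cluster split) can be sketched compactly. The Canny thresholds, the omission of the triple-threshold reconstruction, and the assumption of 8-bit grayscale frames are all illustrative choices here, not the paper's.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def crack_map(frames):
    """Sketch: edge-enhance each infrared frame with Canny, take each pixel's
    time-domain mean and its dominant non-DC FFT amplitude as a 2-D feature,
    and split pixels into two clusters (crack vs. background) with 2-means.
    `frames` is a sequence of 8-bit grayscale images of equal size."""
    edges = np.stack([cv2.Canny(f, 50, 150) for f in frames]).astype(float)
    mean_feat = edges.mean(axis=0)                           # time-domain mean
    freq_feat = np.abs(np.fft.rfft(edges, axis=0))[1:].max(axis=0)  # max FFT amp
    feats = np.stack([mean_feat.ravel(), freq_feat.ravel()], axis=1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    return labels.reshape(mean_feat.shape)
```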
Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment
Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang
2013-01-01
Gaze tracking is crucial for studying drivers' attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. This paper presents a new automatic calibration method, based on a single camera, for determining head orientation; it utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones of the windshield as calibration points. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method can achieve the same accuracy as a manual calibration method without the driver's cooperation. The mean error of the estimated eye gazes was less than 5° in day and night driving. PMID:24639620
Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor
ERIC Educational Resources Information Center
Rus, Vasile; Lintean, Mihai; Azevedo, Roger
2009-01-01
This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during the learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…
Breaking the Cost Barrier in Automatic Classification.
ERIC Educational Resources Information Center
Doyle, L. B.
A low-cost automatic classification method is reported that uses computer time in proportion to N log N, where N is the number of information items and the base is a parameter. Some barriers besides cost are treated briefly in the opening section, including types of intellectual resistance to the idea of doing classification by content-word…
Automatic Text Analysis Based on Transition Phenomena of Word Occurrences
ERIC Educational Resources Information Center
Pao, Miranda Lee
1978-01-01
Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
Automatic Generation of Tests from Domain and Multimedia Ontologies
ERIC Educational Resources Information Center
Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos
2011-01-01
The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…
USDA-ARS?s Scientific Manuscript database
Thresholding is an important step in the segmentation of image features, and the existing methods are not all effective when the image histogram exhibits a unimodal pattern, which is common in defect detection of fruit. This study was aimed at developing a general automatic thresholding methodology ...
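The manuscript is truncated here, but a standard choice for unimodal histograms is the triangle method; the sketch below is a generic implementation of that technique, not necessarily the thresholding rule developed in the study.

```python
import numpy as np

def triangle_threshold(hist):
    """Triangle method: the threshold is the bin farthest (in normal
    distance) from the chord joining the histogram peak and the far tail.
    Works on unimodal histograms where Otsu-style methods struggle."""
    peak = int(np.argmax(hist))
    nz = np.nonzero(hist)[0]
    # take the tail on the longer side of the peak
    tail = nz[-1] if (nz[-1] - peak) > (peak - nz[0]) else nz[0]
    lo, hi = sorted((peak, tail))
    x = np.arange(lo, hi + 1)
    y = hist[lo:hi + 1].astype(float)
    x0, y0 = peak, float(hist[peak])
    x1, y1 = tail, float(hist[tail])
    # unnormalized point-to-line distance for each histogram bin
    dist = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    return int(x[np.argmax(dist)])
```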
A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.
Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid
2018-04-05
The aim of this study was to develop a new method for an automatic detection of reference points in 3D cephalometry to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on the anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for the automatic 3D cephalometric identification was able to detect 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be implemented in the future to assess the method validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.
Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Yu, Q.; Helmholz, P.; Belton, D.; West, G.
2014-04-01
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
Automatic three-dimensional measurement of large-scale structure based on vision metrology.
Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng
2014-01-01
All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. A three-stage strategy starting with view clustering is then proposed to achieve automatic network orientation. For matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path among all possible ones, based on a novel voting strategy. Experiments on a fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
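A plausible first pass for detecting circular retroreflective targets is OpenCV's built-in blob detector; the sketch below is an illustration under that assumption, not the authors' detection algorithm, and the parameter values and file name are placeholders.

```python
import cv2

# Configure a blob detector for bright, near-circular disc targets.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255          # retroreflective discs image as bright spots
params.filterByCircularity = True
params.minCircularity = 0.8     # keep near-circular blobs only
params.filterByArea = True
params.minArea, params.maxArea = 20, 5000  # placeholder size bounds (pixels)

detector = cv2.SimpleBlobDetector_create(params)
img = cv2.imread("target_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
keypoints = detector.detect(img)  # centers of candidate disc targets
```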
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve derived from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithm complexity than the conventional C-V model. PMID:24503563
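The index computation feeding the difference image is straightforward to sketch. The following is a minimal illustration of NBR/NDVI differencing between pre- and post-fire bands; the band-dictionary interface and the small epsilon guard are our assumptions.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared bands."""
    return (nir - swir) / (nir + swir + 1e-9)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def difference_images(pre, post):
    """dNBR/dNDVI differencing; positive dNBR suggests burned pixels.
    `pre` and `post` are dicts holding 'nir', 'red', 'swir' float arrays."""
    dnbr = nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
    dndvi = ndvi(pre["nir"], pre["red"]) - ndvi(post["nir"], post["red"])
    return dnbr, dndvi
```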
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs from contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented, (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have potential utility in clinical use.
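To make the 3D region-growing sub-step concrete, here is a generic 6-connected region grower; the simple intensity-tolerance test is a common textbook criterion and not necessarily the one used in the paper, which additionally applies 4D curvature analysis.

```python
from collections import deque
import numpy as np

def region_grow_3d(vol, seed, tol):
    """Grow a 6-connected region from voxel `seed` (z, y, x) while
    intensities stay within `tol` of the seed value."""
    grown = np.zeros(vol.shape, dtype=bool)
    ref = vol[seed]
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not grown[n] and abs(vol[n] - ref) <= tol:
                grown[n] = True
                queue.append(n)
    return grown
```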
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models are generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
Automatic identification of artifacts in electrodermal activity data.
Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind
2015-01-01
Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
Automatic extraction of tree crowns from aerial imagery in urban environment
NASA Astrophysics Data System (ADS)
Liu, Jiahang; Li, Deren; Qin, Xunwen; Yang, Jianfeng
2006-10-01
Traditionally, field-based investigation has been the main method of surveying greenbelt in urban environments, but it is costly and has a low updating frequency. In high-resolution imagery, the structure and texture of tree canopy show great statistical similarity despite large differences in canopy configuration, and the surface structure and texture of tree crowns differ markedly from those of other land-cover types. In this paper, we present an automatic method to detect tree crowns in high-resolution images of urban environments without any a priori knowledge. Our method captures the distinctive structure and texture of the tree crown surface, using the variance and mean of a defined image window to coarsely position candidate canopy blocks, and then analyzes their inner structure and texture to refine these candidates. The possible spans of all feature parameters used in our method are generated automatically from a small number of samples, and holes and their distribution are introduced as an important characteristic in the refinement step. The isotropy of candidate image blocks and of the holes' distribution is also integrated into our method. After introducing the theory of our method, aerial imagery (with a resolution of about 0.3 m) was used for testing, and the results indicate that our method is an effective approach to automatically detecting tree crowns in urban environments.
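The coarse positioning step on window mean and variance can be sketched with moving-average filters; the numeric ranges below are placeholders standing in for the spans the paper learns automatically from samples.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def crown_candidates(img, win=15, mean_rng=(80, 160), var_rng=(100, 900)):
    """Flag pixels whose local mean and variance fall inside ranges
    derived from tree-crown samples (ranges here are placeholders).
    Local variance is computed as E[x^2] - E[x]^2 over the window."""
    x = img.astype(float)
    mean = uniform_filter(x, win)
    var = uniform_filter(x ** 2, win) - mean ** 2
    return (mean > mean_rng[0]) & (mean < mean_rng[1]) & \
           (var > var_rng[0]) & (var < var_rng[1])
```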
Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin
2012-10-05
We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set, and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.
Jing, Xueping; Zheng, Xiujuan; Song, Shaoli; Liu, Kai
2017-12-01
Glomerular filtration rate (GFR), which can be estimated by the Gates method with dynamic kidney single photon emission computed tomography (SPECT) imaging, is a key indicator of renal function. In this paper, an automatic computed tomography (CT)-assisted detection method for the kidney region of interest (ROI) is proposed to achieve objective and accurate GFR calculation. In this method, the CT coronal projection image and the enhanced SPECT synthetic image are first generated and registered together. Then, the kidney ROIs are delineated using a modified level set algorithm, and the background ROIs are obtained based on the kidney ROIs. Finally, the value of GFR is calculated via the Gates method. The GFR values estimated by the proposed method were consistent with the clinical reports. This automatic method can improve the accuracy and stability of kidney ROI detection for GFR calculation, especially when kidney function has been severely damaged.
Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography
Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji
2013-01-01
OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thickness. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes, and compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The relative error of the number of segments method was significantly greater than those of semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could measure lobar volumes more precisely than the conventional number of segments method. Because semi-automatic and automatic computer-aided diagnosis are complementary, in clinical use it would be most practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis fails. PMID:23526418
Irregular and adaptive sampling for automatic geophysic measure systems
NASA Astrophysics Data System (ADS)
Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo
2000-07-01
In this paper a sampling method based on an irregular and adaptive strategy is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Starting from the hypothesis that an exploratory vehicle is equipped with a payload able to acquire measurements of interesting quantities, the method is able to detect objects of interest from measured points and to realize an adaptive sampling, while describing the uninteresting background only coarsely.
Efficient content-based low-altitude images correlated network and strips reconstruction
NASA Astrophysics Data System (ADS)
He, Haiqing; You, Qi; Chen, Xiaoyong
2017-01-01
The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such manual intervention is undesirable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach for strip reconstruction without manual intervention or external information. Feature descriptors in local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF statistic to generate a new representation for each low-altitude image. An image-correlated network is then reconstructed by similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
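The vocabulary-tree/TF-IDF representation can be approximated with a flat visual vocabulary; this sketch quantizes SIFT descriptors with K-means and compares images by cosine similarity of their TF-IDF vectors, a simplification of the hierarchical tree described in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

def bow_similarity(desc_per_image, n_words=500):
    """desc_per_image: list of (n_i, 128) SIFT descriptor arrays,
    one per image. Returns the image-pair similarity matrix."""
    vocab = KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(desc_per_image))
    counts = np.zeros((len(desc_per_image), n_words))
    for i, d in enumerate(desc_per_image):
        words, freq = np.unique(vocab.predict(d), return_counts=True)
        counts[i, words] = freq                     # visual-word histogram
    tfidf = TfidfTransformer().fit_transform(counts)  # TF-IDF weighting
    return cosine_similarity(tfidf)
```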
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
On a program manifold's stability of one contour automatic control systems
NASA Astrophysics Data System (ADS)
Zumatov, S. S.
2017-12-01
A methodology for the stability analysis of one-contour automatic control systems with feedback in the presence of nonlinearities is expounded. The methodology is based on the use of the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of one-contour automatic control systems is investigated, and sufficient conditions for the absolute stability of program manifolds of such systems are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of program manifolds of systems controlling the course of a plane in autopilot mode are obtained by means of Lyapunov's second method.
Automatic identification of abstract online groups
Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W
2014-04-15
Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
A rule-based automatic sleep staging method.
Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian
2012-03-30
In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering temporal contextual information was applied to ensure continuity. The overall agreement and kappa coefficient of the proposed method, applied to all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by R&K rules, reached 86.68% and 0.79, respectively. This method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
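For flavor, here is a toy rule cascade in the spirit of a hierarchical decision tree for staging; the feature names, thresholds, and rules are illustrative inventions, not the paper's fourteen rules.

```python
def stage_epoch(f):
    """f: dict of normalized features for one 30-s epoch.
    Illustrative cascade only; real rules and cut-offs differ."""
    if f["emg_power"] > 0.7 and f["alpha_ratio"] < 0.3:
        return "Wake"      # high muscle tone, little alpha
    if f["rem_eog_density"] > 0.5 and f["emg_power"] < 0.2:
        return "REM"       # rapid eye movements with muscle atonia
    if f["delta_ratio"] > 0.5:
        return "SWS"       # slow-wave sleep dominated by delta
    if f["spindle_rate"] > 0.3:
        return "Stage2"    # spindles/K-complexes present
    return "Stage1"
```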
Blood detection in wireless capsule endoscope images based on salient superpixels.
Iakovidis, Dimitris K; Chatzis, Dimitris; Chrysanthopoulos, Panos; Koulaouzidis, Anastasios
2015-08-01
Wireless capsule endoscopy (WCE) enables screening of the gastrointestinal (GI) tract with a miniature, optical endoscope packed within a small swallowable capsule, wirelessly transmitting color images. In this paper we propose a novel method for automatic blood detection in contemporary WCE images. Blood is an alarming indication for the presence of pathologies requiring further treatment. The proposed method is based on a new definition of superpixel saliency. The saliency of superpixels is assessed upon their color, enabling the identification of image regions that are likely to contain blood. The blood patterns are recognized by their color features using a supervised learning machine. Experiments performed on a public dataset using automatically selected first-order statistical features from various color components indicate that the proposed method outperforms state-of-the-art methods.
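A generic version of color-based superpixel saliency can be sketched with SLIC; the saliency definition below (distance of each superpixel's mean Lab color from the global mean) and the quantile cut are our assumptions, not the paper's exact saliency definition or its supervised classifier.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def salient_superpixels(rgb, n_segments=400, top_frac=0.05):
    """Return a boolean mask of the most color-salient superpixels,
    a rough stand-in for blood-candidate regions."""
    lab = rgb2lab(rgb)
    labels = slic(rgb, n_segments=n_segments, compactness=10)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    # saliency: color rarity relative to the whole image
    saliency = np.linalg.norm(means - means.mean(axis=0), axis=1)
    cut = np.quantile(saliency, 1 - top_frac)
    return np.isin(labels, ids[saliency >= cut])
```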
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate necessary initial parameters of FC segmentation. Further, the result of the FC segmentation is utilized to initialize a following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as the limitation of the method. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features fully automatic selection with no manual processing. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing predictive performance comparable to GA and SPA.
Study on application of adaptive fuzzy control and neural network in the automatic leveling system
NASA Astrophysics Data System (ADS)
Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng
2015-04-01
This paper discusses the application of adaptive fuzzy control and the neural network BP algorithm in a large-plate automatic leveling control system. The purpose is to develop a measurement platform that levels quickly, so that precision measurement work can start promptly and the efficiency of precision measurement is improved. The paper focuses on the analysis of the automatic leveling system based on a fuzzy controller, combining the fuzzy controller with a BP neural network and using the BP algorithm to improve the empirical rules, thereby constructing an adaptive fuzzy control system. Meanwhile, the learning rate of the BP algorithm is adaptively adjusted to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.
ERIC Educational Resources Information Center
Jones, Daniel; Alexa, Melina
As part of the development of a completely sub-symbolic machine translation system, a method for automatically identifying German compounds was developed. Given a parallel bilingual corpus, German compounds are identified along with their English word groupings by statistical processing alone. The underlying principles and the design process are…
Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary
NASA Astrophysics Data System (ADS)
Liu, Wuying; Wang, Lin
2018-03-01
Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary, and presents a novel unsupervised construction method based on natural annotation in the raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill a bilingual domain dictionary after retrieving the simplified Chinese words in an external Chinese domain dictionary. The experimental results show that our method can automatically build multiple Korean-Chinese domain dictionaries efficiently.
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures: entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients derived at different decomposition levels, to identify textures where the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques such as those based on the co-occurrence matrix or on energy calculation, yields a lower decomposition level, avoiding excessive degradation of the image and allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms are proposed depending on the type of texture.
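The entropy-based level selection can be illustrated with a discrete wavelet decomposition; whether the optimal band minimizes or maximizes the entropy criterion is specific to the paper, so the selection rule below is only an illustrative stand-in.

```python
import numpy as np
import pywt

def shannon_entropy(a):
    """Shannon entropy of the normalized absolute coefficient distribution."""
    p = np.abs(a).ravel()
    p = p / (p.sum() + 1e-12)
    return -np.sum(p * np.log2(p + 1e-12))

def select_level(img, wavelet="db2", max_level=4):
    """Score each decomposition level by the joint entropy of its detail
    subimages. wavedec2 returns [cA_n, (cH_n, cV_n, cD_n), ...,
    (cH_1, cV_1, cD_1)], so coeffs[1] holds the coarsest details."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=max_level)
    scores = {max_level - i: sum(shannon_entropy(d) for d in details)
              for i, details in enumerate(coeffs[1:])}
    return min(scores, key=scores.get)  # illustrative: lowest-entropy level
```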
NASA Astrophysics Data System (ADS)
Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.
2017-10-01
The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images acquired by the NIR sensor, and was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, with the keypoints serving as ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only correct matches but also faulty ones. Previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived; this step identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
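The FAST-SIFT matching and RANSAC filtering pipeline maps directly onto standard OpenCV calls. This sketch estimates a projective transform (one of the three models tested); the FAST threshold and the RANSAC reprojection tolerance are placeholder values.

```python
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=25)  # placeholder threshold
sift = cv2.SIFT_create()

def georef_transform(master, slave):
    """Estimate a projective transform mapping `slave` onto `master`
    (both 8-bit grayscale images)."""
    kp1 = fast.detect(master, None)
    kp2 = fast.detect(slave, None)
    kp1, des1 = sift.compute(master, kp1)  # SIFT descriptors at FAST corners
    kp2, des2 = sift.compute(slave, kp2)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des2, des1)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC drops fall-out matches while fitting the transformation
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inliers
```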
Haptic exploratory behavior during object discrimination: a novel automatic annotation method.
Jansen, Sander E M; Bergmann Tiest, Wouter M; Kappers, Astrid M L
2015-01-01
In order to acquire information concerning the geometry and material of handheld objects, people tend to execute stereotypical hand movement patterns called haptic Exploratory Procedures (EPs). Manual annotation of haptic exploration trials with these EPs is a laborious task that is affected by subjectivity, attentional lapses, and viewing angle limitations. In this paper we propose an automatic EP annotation method based on position and orientation data from motion tracking sensors placed on both hands and inside a stimulus. A set of kinematic variables is computed from these data and compared to sets of predefined criteria for each of four EPs. Whenever all criteria for a specific EP are met, it is assumed that that particular hand movement pattern was performed. This method is applied to data from an experiment where blindfolded participants haptically discriminated between objects differing in hardness, roughness, volume, and weight. In order to validate the method, its output is compared to manual annotation based on video recordings of the same trials. Although mean pairwise agreement is less between human-automatic pairs than between human-human pairs (55.7% vs 74.5%), the proposed method performs much better than random annotation (2.4%). Furthermore, each EP is linked to a specific object property for which it is optimal (e.g., Lateral Motion for roughness). We found that the percentage of trials where the expected EP was found does not differ between manual and automatic annotation. For now, this method cannot yet completely replace a manual annotation procedure. However, it could be used as a starting point that can be supplemented by manual annotation.
Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng
2011-12-01
Glomerular filtration rate (GFR) is a commonly accepted standard estimation of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis that requires manually drawing a region of interest (ROI) over each kidney, after which the GFR value is computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value, and hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. We therefore developed a fully automatic renal ROI estimation system, based on the temporal changes in intensity counts, an intensity-pair distribution contrast enhancement method, adaptive thresholding, and morphological operations, that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were used. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours, and these 54 kidneys were included in the area error and boundary error analyses. There was high correlation between the two physicians' manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%; the boundary error is 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may help nuclear medicine physicians in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, W; Wu, Q; Yuan, L
Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by applying it to the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. 14 clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans were compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between plan quality and the beam separation score. Results: On average, we observed a more than 15% reduction in conformity index and homogeneity index for the PTV and in V{sub 40} and V{sub 60} for the heart, with an 8% and 3% increase in V{sub 5} and V{sub 20} for the lungs, respectively. The variation curve of the composite index as a function of the angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning is obtained. With this value, the model can produce statistically the best achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.
Automatic focusing system of BSST in Antarctic
NASA Astrophysics Data System (ADS)
Tang, Peng-Yi; Liu, Jia-Jing; Zhang, Guang-yu; Wang, Jian
2015-10-01
Automatic focusing (AF) technology plays an important role in modern astronomical telescopes. Based on the focusing requirements of BSST (Bright Star Survey Telescope) in Antarctica, an AF system is set up. In this design, OpenCV functions are used to find stars, and a selectable metric based on area, HFD (half flux diameter) or FWHM is used to quantify the focus quality. A curve-fitting method is used to find the focus position as the camera moves. This design is suitable for unattended small telescopes.
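The curve-fitting step is commonly a parabola fit of the focus metric against focuser position; a minimal sketch under that assumption follows, with example numbers that are purely illustrative.

```python
import numpy as np

def best_focus(positions, fwhm):
    """Fit a parabola to FWHM vs. focuser position; the vertex is the
    estimated best-focus position (valid only near the minimum, where
    the focus curve is approximately quadratic)."""
    a, b, c = np.polyfit(positions, fwhm, 2)
    return -b / (2.0 * a)

# illustrative usage with made-up focuser positions and star FWHMs (pixels)
print(best_focus([10, 20, 30, 40, 50], [4.1, 3.0, 2.4, 2.7, 3.8]))
```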
Automatic item generation implemented for measuring artistic judgment aptitude.
Bezruczko, Nikolaus
2014-01-01
Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.
Automatic choroid cells segmentation and counting in fluorescence microscopic image
NASA Astrophysics Data System (ADS)
Fei, Jianjun; Zhu, Weifang; Shi, Fei; Xiang, Dehui; Lin, Xiao; Yang, Lei; Chen, Xinjian
2016-03-01
In this paper, we propose a method to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopic images, based on shape classification, bottleneck detection and an accelerated Dijkstra algorithm. The proposed method includes four main steps. First, a thresholding filter and morphological operations are applied to reduce the noise. Second, a shape classifier is used to decide whether a connected component needs to be segmented; in this step, an AdaBoost classifier is applied with a set of shape features. Third, the bottleneck positions are found based on the contours of the connected components. Finally, cell segmentation and counting are completed based on the accelerated Dijkstra algorithm, using the gradient information between the bottleneck positions. The results show the feasibility and efficiency of the proposed method.
Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test
NASA Astrophysics Data System (ADS)
Christophides, Damianos; Davies, Alex; Fleckney, Mark
2016-12-01
Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests, and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a period ranging from 14 to 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors, and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning of MLC errors that resulted in clinical downtime.
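A minimal sketch of the statistical test is below; it assumes a normal baseline error distribution, whereas the paper characterizes each linac's empirical baseline, and the critical p-value is a placeholder.

```python
import numpy as np
from scipy import stats

def mlc_flag(leaf_errors, baseline_mu, baseline_sigma, p_crit=1e-3):
    """leaf_errors: per-leaf-pair position errors (mm) from one picket
    fence image. Flags the test when the one-sample KS p-value against
    the (assumed normal) baseline error distribution drops below p_crit."""
    stat, p = stats.kstest(leaf_errors, "norm",
                           args=(baseline_mu, baseline_sigma))
    return p < p_crit, float(np.median(leaf_errors)), p
```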
NASA Astrophysics Data System (ADS)
Wang, J.; Feng, B.
2016-12-01
Impervious surface area (ISA) has long been studied as an important input to moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection, and the high spatial and spectral heterogeneity of urban environments makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection; the second semi-automatic method is rather new and has previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected EM candidates from eight landcover types in the original image. The first semi-automatic method chose EM candidates by thresholding the pixel purity index (PPI) map. The second semi-automatic method used the triangular shape of the HI scatter plot in the n-dimensional visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle's vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. Accuracy reports for the classification results were generated. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in this study, which proves the value of V-I-S EM selection not only for moderate-spatial-resolution satellite images but also for increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and can provide ISA maps for hydrological analysis.
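The spectral angle mapper used for classification is compact enough to show in full. This is a generic SAM implementation, with the endmember matrix assumed to come from one of the three selection strategies described above.

```python
import numpy as np

def spectral_angle_map(cube, endmembers):
    """cube: (H, W, B) reflectance; endmembers: (K, B) spectral library.
    Returns the per-pixel index of the endmember with the smallest
    spectral angle (arccos of the normalized dot product)."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    num = flat @ endmembers.T
    denom = np.linalg.norm(flat, axis=1, keepdims=True) * \
            np.linalg.norm(endmembers, axis=1)
    angles = np.arccos(np.clip(num / (denom + 1e-12), -1.0, 1.0))
    return angles.argmin(axis=1).reshape(cube.shape[:2])
```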
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-10-18
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, and dangerous, and the quality and quantity of the data are sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates in a horizontal direction, based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
NASA Astrophysics Data System (ADS)
Wu, T. Y.; Lin, S. F.
2013-10-01
Automatic extraction of suspected lesions is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. Suspected lesion tissues in low-contrast regions can be localized with feature-based segmentation (FBS) using local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced; the derived wavelet coefficients are used to compute the local features for FBS and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information also provides the initial conditions for the fine delineation computation. Consequently, suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results also exceeds 93%.
NASA Astrophysics Data System (ADS)
Sun, Feng-Rong; Wang, Xiao-Jing; Wu, Qiang; Yao, Gui-Hua; Zhang, Yun
2013-01-01
Left ventricular (LV) torsion is a sensitive and global index of LV systolic and diastolic function, but measuring it noninvasively is challenging. Two-dimensional echocardiography and a block-matching-based speckle tracking method were used to measure LV torsion. The main advantages of the proposed method over previous ones are as follows: (1) The method is automatic, except for manually selecting some endocardium points on the end-diastolic frame in the initialization step. (2) The diamond search strategy is applied, with a spatial smoothness constraint introduced into the sum-of-absolute-differences matching criterion, and the reference frame during the search is determined adaptively. (3) The method is capable of removing abnormal measurement data automatically. The proposed method was validated against Doppler tissue imaging, and some preliminary clinical experimental studies are presented to illustrate its clinical value.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing a correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes in different known radiation environments. These optimal values were normalized to the background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method, and based on this function, an automatic delineation procedure was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function depended linearly on the SBR and inversely and negatively on the volume. During validation of the proposed method, the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
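The iterative scheme can be sketched as a fixed-point iteration on the threshold; the adjustment function f(volume, SBR), the voxel volume, and the stopping tolerance below are placeholders for the paper's fitted values.

```python
import numpy as np

def iterative_threshold(pet, background, f, t0=0.5, n_iter=20, voxel_ml=1.0):
    """Iterate T <- f(V(T), SBR) * background until the threshold settles.
    `f` stands for the fitted two-variable adjustment function (volume,
    SBR), normalized to background; `voxel_ml` is the voxel volume in ml.
    Assumes t0 * max yields a non-empty initial lesion mask."""
    t = t0 * pet.max()
    for _ in range(n_iter):
        mask = pet >= t
        volume = mask.sum() * voxel_ml
        sbr = pet[mask].mean() / background
        t_new = f(volume, sbr) * background
        if abs(t_new - t) < 1e-3:  # placeholder convergence tolerance
            break
        t = t_new
    return t, mask
```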
NASA Astrophysics Data System (ADS)
Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.
2014-03-01
Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variation in image quality, which makes manual analysis unreasonably time-consuming. Effective techniques for automatic image analysis are therefore urgently needed, of which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract the long, thin protrusions of spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth, i.e. manual labelling by experts on RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied to the automatic image analysis of multi-channel image screening data.
Chatter detection in milling process based on VMD and energy entropy
NASA Astrophysics Data System (ADS)
Liu, Changfu; Zhu, Lida; Ni, Chenbing
2018-05-01
This paper presents a novel approach to detecting milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed for feature extraction from non-stationary signals, but parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when the raw signal is decomposed. To address the problem of selecting K and α, an automatic kurtosis-based selection method for the VMD parameters is proposed in this paper. When chatter occurs in the milling process, energy is absorbed into the chatter frequency bands; to detect these bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing chatter frequencies was simulated, and three groups of experiments representing three cutting conditions were conducted. To verify the effectiveness of the method presented in this paper, chatter feature extraction was successfully applied to both the simulated and the experimental signals. The simulation and experimental results show that the proposed method can effectively detect chatter.
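Energy entropy over the VMD modes is simple to compute once the decomposition is available; the sketch below assumes the modes are given (e.g., from a VMD implementation) and does not cover the kurtosis-based parameter selection.

```python
import numpy as np

def energy_entropy(modes):
    """modes: (K, N) array of VMD modes of one vibration signal.
    The entropy of the per-mode energy distribution changes when
    chatter concentrates energy into particular frequency bands,
    which is the cue the detector exploits."""
    energies = np.sum(modes ** 2, axis=1)   # energy of each mode
    p = energies / energies.sum()           # energy distribution
    return -np.sum(p * np.log(p + 1e-12))
```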
NASA Astrophysics Data System (ADS)
Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu
2015-03-01
In this paper, we propose a method based on the Freeman chain code to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform are applied to reduce the noise. Second, the boundary information is used to generate the Freeman chain codes. Third, the concave points are found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting are completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopy cell images, with an average true positive rate (TPR) of 98.13% and an average false positive rate (FPR) of 4.47%. These preliminary results show the feasibility and efficiency of the proposed method.
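The Freeman chain code and its first difference, which flags high-curvature (concave-point) candidates, can be sketched directly; the direction convention below is one common choice, and the contour is assumed to be an ordered 8-connected boundary.

```python
import numpy as np

# Freeman 8-direction convention in image coordinates (y grows downward):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(contour):
    """contour: (N, 2) array of ordered 8-connected boundary points (x, y).
    For a closed contour, append the first point to close the loop."""
    diffs = np.diff(contour, axis=0)
    return [DIRS[(int(dx), int(dy))] for dx, dy in diffs]

def code_difference(codes):
    """First difference of the chain code (mod 8); large values mark
    sharp direction changes, i.e. concave-point candidates."""
    c = np.asarray(codes)
    return (np.roll(c, -1) - c) % 8
```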
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion-linked parameters. Complete automation was accomplished by first registering the misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation, in which the automatically obtained (AO) contours were compared with expert-drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert-drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduces the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters when AO contours were compared with expert-drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
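Of the reported validation measures, the Dice index is easy to reproduce. A minimal sketch for binary myocardium masks (the mask contents are placeholders):

    import numpy as np

    def dice_index(mask_a, mask_b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())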
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity, and specificity as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%), and an AUC of 0.82. This performance is similar to that of the semi-automatic QPS software, which gives a sensitivity of 71% and a specificity of 77%. Conclusion: Based on fully automatic data processing procedures, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
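The ROC analysis described above can be reproduced with scikit-learn. A hedged sketch in which the heterogeneity scores and PCI-based labels are hypothetical per-patient arrays:

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    heterogeneity = np.load("texture_index.npy")  # hypothetical texture scores
    ischemia = np.load("pci_labels.npy")          # 1 = stenosis > 70%, else 0

    auc = roc_auc_score(ischemia, heterogeneity)
    fpr, tpr, thresholds = roc_curve(ischemia, heterogeneity)
    best = np.argmax(tpr - fpr)  # Youden's J statistic picks an operating point
    print(f"AUC = {auc:.2f}, sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")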
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics
NASA Astrophysics Data System (ADS)
Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu
2007-11-01
In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with attribute grammars by learning scene semantics. The framework combines learning scene semantics through trajectory analysis with constructing an attribute-grammar-based event representation. The scene and event information is learned automatically, and abnormal behaviors that disobey scene semantics or event grammar rules are detected. This method provides an approach to understanding video scenes; furthermore, with this prior knowledge, the accuracy of abnormal event detection is increased.
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the field of symbolic data analysis. One such type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method capable of both automatic feature selection and pruning of unused prototypes, and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes, and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates on both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
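The abstract does not give the exact parametrization of the generalized distance, but a plausible form, written from the description above with feature weights learned by the swarm, is:

    import numpy as np

    def weighted_interval_sq_distance(x, y, w):
        """Weighted squared Euclidean distance between interval-valued samples.

        x, y: (n_features, 2) arrays of [lower, upper] bounds per feature.
        w:    (n_features,) non-negative feature weights.
        This is an assumed form; the paper's exact definition may differ.
        """
        x, y, w = (np.asarray(a, float) for a in (x, y, w))
        per_feature = (x[:, 0] - y[:, 0]) ** 2 + (x[:, 1] - y[:, 1]) ** 2
        return float(np.sum(w * per_feature))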
Automatic Feature Extraction from Planetary Images
NASA Technical Reports Server (NTRS)
Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.
2010-01-01
With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired, and many more will become available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not yet been addressed. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough transform. The method has many applications, among them image registration, and can be applied to arbitrary planetary images.
Automatic classification of canine PRG neuronal discharge patterns using K-means clustering.
Zuperku, Edward J; Prkic, Ivana; Stucke, Astrid G; Miller, Justin R; Hopp, Francis A; Stuth, Eckehard A
2015-02-01
Respiratory-related neurons in the parabrachial-Kölliker-Fuse (PB-KF) region of the pons play a key role in the control of breathing. The activities of these pontine respiratory group (PRG) neurons exhibit a variety of inspiratory (I), expiratory (E), phase-spanning, and non-respiratory-related (NRM) discharge patterns. Because of this variety, it can be difficult to classify them into distinct subgroups according to their discharge contours. This report presents a method that automatically classifies neurons according to their discharge patterns and derives an average subgroup contour for each class. It is based on the K-means clustering technique and is implemented via SigmaPlot user-defined transform scripts. The discharge patterns of 135 canine PRG neurons were classified into seven distinct subgroups. Additional methods for choosing the optimal number of clusters are described. Analysis of the results suggests that the K-means clustering method offers a robust, objective means of both automatically categorizing neuron patterns and establishing the underlying archetypical contours of subtypes based on the discharge patterns of groups of neurons. Published by Elsevier B.V.
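The clustering step generalizes readily beyond SigmaPlot. A sketch using scikit-learn instead of the paper's SigmaPlot scripts, where patterns is a hypothetical (n_neurons, n_bins) array of average discharge contours and the silhouette score is one common criterion for picking the number of clusters:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    patterns = np.load("prg_discharge_contours.npy")  # hypothetical input

    scores = {}
    for k in range(2, 11):  # candidate numbers of clusters
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(patterns)
        scores[k] = silhouette_score(patterns, labels)
    best_k = max(scores, key=scores.get)

    labels = KMeans(n_clusters=best_k, n_init=10,
                    random_state=0).fit_predict(patterns)
    # Average contour of each subgroup, analogous to the archetypal contours.
    archetypes = np.stack([patterns[labels == c].mean(axis=0)
                           for c in range(best_k)])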
Ventura, Valérie; Todorova, Sonia
2015-05-01
Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use; but this approach ignores the kinematic information provided by the individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on expert manual and automatic spike sorting.
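A sketch of the letter's central idea, regressing kinematics on per-electrode sample averages of unsorted waveform features; the input arrays, the one-bin lag, and the ridge regularizer are illustrative assumptions:

    import numpy as np
    from sklearn.linear_model import Ridge

    features = np.load("waveform_feature_means.npy")  # (n_bins, n_elec * n_feats)
    velocity = np.load("hand_velocity.npy")           # (n_bins, 2) hand velocity

    decoder = Ridge(alpha=1.0).fit(features[:-1], velocity[1:])  # one-bin lag
    predicted = decoder.predict(features[:-1])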
Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R
2017-06-01
To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
Methods for automatically analyzing humpback song units.
Rickwood, Peter; Taylor, Andrew
2008-03-01
This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolating target signals from background noise, extracting features from target signals, and performing unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal-processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis not only of humpbacks but possibly also in other bioacoustic settings where automated analysis is desirable.
ERIC Educational Resources Information Center
Melton, Jessica S.
Objectives of this project were to develop and test a method for automatically processing the text of abstracts for a document retrieval system. The test corpus consisted of 768 abstracts from the metallurgical section of Chemical Abstracts (CA). The system, based on a subject indexing rationale, had two components: (1) a stored dictionary of words…
Reliable clarity automatic-evaluation method for optical remote sensing images
NASA Astrophysics Data System (ADS)
Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen
2015-10-01
Image clarity, which reflects the degree of sharpness at the edges of objects in images, is an important quality-evaluation index for optical remote sensing images. Much work has been done on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image. The edge acutance method is an effective approach to clarity estimation, but it requires the edges to be picked out manually. Because of these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically find object edges in images, and the calculation of edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
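As a point of comparison for the gradient function methods mentioned above, a standard gradient-based sharpness score (Tenengrad) takes only a few lines; this is a generic baseline, not the authors' improved edge acutance algorithm:

    import numpy as np
    from scipy import ndimage

    def tenengrad(image):
        """Mean squared Sobel gradient magnitude; higher means sharper edges."""
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        return float(np.mean(gx ** 2 + gy ** 2))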
Automatic evaluation of the Valsalva sinuses from cine-MRI
NASA Astrophysics Data System (ADS)
Blanchard, Cédric; Sliwa, Tadeusz; Lalande, Alain; Mohan, Pauliah; Bouchot, Olivier; Voisin, Yvon
2011-03-01
MRI appears to be particularly attractive for the study of the sinuses of Valsalva (SV); however, there is no global consensus on suitable measurements. In this paper, we propose a new method, based on mathematical morphology and combining a numerical geodesic reconstruction with an area estimation, to automatically evaluate the SV from cine-MRI in a cross-sectional orientation. It consists of extracting the shape of the SV, detecting relevant points (commissures, cusps, and the centre of the SV), measuring relevant distances, and classifying the valve as bicuspid or tricuspid by a metric evaluation of the SV. Our method was tested on 23 patient examinations, and radii calculations were compared with manual measurements. The classification of the valve as tricuspid or bicuspid was correct in all cases. Moreover, there is excellent correlation and concordance between manual and automatic measurements for images at the diastolic phase (r = 0.97; y = x - 0.02; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.3 mm) and at the systolic phase (r = 0.96; y = 0.97x + 0.80; p = NS; mean of differences = -0.1 mm; standard deviation of differences = 2.4 mm). The cross-sectional orientation of the image acquisition plane, combined with our automatic method, provides a reliable morphometric evaluation of the SV, based on the automatic location of the centre of the SV and the commissure and cusp positions. Measurements of distances between relevant points allow a precise evaluation of the SV.
Automatic detection and visualisation of MEG ripple oscillations in epilepsy.
van Klink, Nicole; van Rosmalen, Frank; Nenonen, Jukka; Burnos, Sergey; Helle, Liisa; Taulu, Samu; Furlong, Paul Lawrence; Zijlmans, Maeike; Hillebrand, Arjan
2017-01-01
High frequency oscillations (HFOs, 80-500 Hz) in invasive EEG are a biomarker for the epileptic focus. Ripples (80-250 Hz) have also been identified in non-invasive MEG, yet their detection is impeded by noise, their low occurrence rates, and the workload of visual analysis. We propose a method that identifies ripples in MEG through noise reduction, beamforming, and automatic detection with minimal user effort. We analysed 15 min of presurgical resting-state interictal MEG data from 25 patients with epilepsy. The MEG signal-to-noise ratio was improved by using a cross-validation signal space separation method and by calculating ~2400 beamformer-based virtual sensors in the grey matter. Ripples in these sensors were automatically detected by an algorithm optimized for MEG, and a small subset of the identified ripples was visually checked. Ripple locations were compared with MEG spike dipole locations and with the resection area when available. Running the automatic detection algorithm resulted in an average of 905 ripples per patient, of which on average 148 were visually reviewed. Reviewing took approximately 5 min per patient and identified ripples in 16 out of 25 patients. In 14 patients the ripple locations showed good or moderate concordance with the MEG spikes. For six out of eight patients who had surgery, the ripple locations showed concordance with the resection area: 4/5 with good outcome and 2/3 with poor outcome. Automatic ripple detection in beamformer-based virtual sensors is a feasible non-invasive tool for the identification of ripples in MEG. Our method requires minimal user effort and is easily applicable in a clinical setting.
Terminologies for text-mining; an experiment in the lipoprotein metabolism domain
Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael
2008-01-01
Background The engineering of ontologies, especially with a view to text-mining use, is still a new research field. There is not yet a well-defined theory and technology for ontology construction; many of the ontology design steps remain manual and are based on personal experience and intuition. However, there have been a few efforts on automatic construction of ontologies in the form of extracted lists of terms and relations between them. Results We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text-mining. We compare the manually created ontology terms with the terminology automatically derived by four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms; for the top 1000 terms, the best method still generates 51% relevant terms. In a corpus of 3066 documents, 53% of LMO terms are contained and 38% can be generated with one of the methods. Conclusions Given their high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed in a semi-automatic way, taking ATR results as input and following the guidelines we described. Availability The TFIDF term recognition is available as a Web Service. PMID:18460175
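A minimal TF-IDF term-ranking sketch in the spirit of the TFIDF ATR method mentioned above, using scikit-learn; the two-document corpus and the cut-off of 50 terms are illustrative assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "low density lipoprotein receptor binding",
        "low density lipoprotein cholesterol transport",
    ]  # hypothetical domain corpus; real input would be thousands of abstracts
    vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
    X = vec.fit_transform(docs)

    # Rank candidate terms by their maximum TF-IDF weight across the corpus.
    scores = X.max(axis=0).toarray().ravel()
    terms = vec.get_feature_names_out()
    top_terms = [terms[i] for i in scores.argsort()[::-1][:50]]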
Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.
Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y
2017-05-01
Identifying liver regions in abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. These models, together with a location constraint estimated iteratively, are then integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on border marching is used to increase segmentation accuracy. Experiments were conducted on a clinical data set we created as well as on the MICCAI 2007 Grand Challenge liver data. The results show that the proposed intensity and appearance models and the location constraint are effective for liver recognition, and that undersegmented vessels can be compensated by the border-marching-based method. The segmentation performance in terms of VOE, RVD, ASD, RMSD, and MSD, as well as the average running time, achieved by our method on the SLIVER07 public database is 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5 mm, 2.0 ± 1.2 mm, 21.2 ± 9.3 mm, and 4.7 minutes, respectively, which is superior to existing methods. The proposed method does not require a time-consuming training process or statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase-aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion; however, it corrects only spherical/elliptical aberration and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations, and ideally, automatic segmentation makes real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly on DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique, using a convolutional neural network (CNN), with Zernike polynomial fitting (ZPF). The CNN performs automatic background-region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
Frisby, Joshua; Smith, Vernon; Traub, Stephen; Patel, Vimla L
2017-01-01
Hospital Emergency Departments (EDs) frequently experience crowding. One of the factors that contributes to this crowding is the "door to doctor time", the time from a patient's registration to when the patient is first seen by a physician. This is also one of the Meaningful Use (MU) performance measures that emergency departments report to the Centers for Medicare and Medicaid Services (CMS). Current documentation methods for this measure are inaccurate because of the imprecision of manual data collection. We describe a method for automatically (in real time) and more accurately documenting the door-to-physician time. Using sensor-based technology, the distance between the physician and the computer is calculated by single-board computers installed in patient rooms, which log each time a Bluetooth signal is seen from a device that the physicians carry. This distance is compared automatically with the accepted room radius to determine whether the physician is present in the room at the logged time, providing greater precision. The logged times, accurate to the second, were compared with physicians' handwritten times, showing the automatic recordings to be more precise. This real-time automatic method frees the physician from the extra cognitive load of manually recording data. The method is generic and can be used for performance evaluation in settings other than the ED, and for purposes other than measuring physician time. Copyright © 2016 Elsevier Inc. All rights reserved.
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging of small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree are important steps in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming, since a typical dataset is usually on the order of several gigabytes in size; automated and semi-automated tools for micro-CT airway analysis are therefore desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate it by comparison to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual agreement between manually and automatically segmented trees; the average true positive volume fraction compared with manual analysis is 91.61%. The overall runtime of the automatic method is on the order of 30 minutes per volume, compared with several hours to a few days for manual analysis.
Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images
Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.
2010-01-01
High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, and segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as by combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening images can be achieved by the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
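The nuclei-extraction step can be illustrated with a marker-controlled watershed from scikit-image, broadly analogous to the modified watershed on the DNA channel; the file name and parameters are placeholders:

    import numpy as np
    from scipy import ndimage
    from skimage import io, filters, feature, measure, segmentation

    dna = io.imread("dna_channel.png", as_gray=True)  # hypothetical DNA channel
    binary = dna > filters.threshold_otsu(dna)
    distance = ndimage.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=10,
                                   labels=measure.label(binary))
    markers = np.zeros(dna.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    nuclei = segmentation.watershed(-distance, markers, mask=binary)  # labels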
NASA Astrophysics Data System (ADS)
Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei
2017-01-01
Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours of the atlas images is used to guide segmentation of the new image. In setting up the atlas-based library, we included not only the coordinates of contour points but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate-spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with physician delineations. A novel atlas-based segmentation technique with inclusion of neighborhood image features, through the introduction of a narrow shell surrounding the target objects, was thus established. Application of the technique to 30 liver cases suggested that it can reliably segment the liver from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing the automatic segmentation results with manual delineations. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method achieves efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
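The TPS propagation step can be sketched with SciPy (1.7+), whose RBFInterpolator supports a thin-plate-spline kernel; the matched landmark pairs are assumed to come from the SURF-style matching described above, and all file names are placeholders:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    src = np.load("matched_points_library.npy")  # (n, 3) library-case points
    dst = np.load("matched_points_new.npy")      # (n, 3) corresponding points

    tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
    library_contour = np.load("library_liver_contour.npy")  # (m, 3) contour
    propagated = tps(library_contour)  # initial liver contour in the new image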
Analysis of Technique to Extract Data from the Web for Improved Performance
NASA Astrophysics Data System (ADS)
Gupta, Neena; Singh, Manish
2010-11-01
The World Wide Web has rapidly guided the world into an amazing electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.
Automatic and Direct Identification of Blink Components from Scalp EEG
Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun
2013-01-01
Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. A central problem in EEG signal processing is how to identify eye-blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlations with frontal EEG channels than all other sources, due to both its location and its significant amplitude, we propose in this paper a method based on a correlation index and on the feature of power distribution to detect eye-blink components automatically. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and to understand the implications of the correlation. The proposed method does not need a template or thresholds selected in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye-blink components with high accuracy on entire datasets from 15 subjects. PMID:23959240
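A simplified sketch of the correlation idea using FastICA from scikit-learn; the channel indices and the direct correlation computation (rather than reading it from the mixing matrix, as the paper proves possible) are illustrative choices:

    import numpy as np
    from sklearn.decomposition import FastICA

    eeg = np.load("eeg.npy")         # hypothetical (n_samples, n_channels) data
    frontal_idx = [0, 1]             # hypothetical indices of frontal channels

    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)

    corr = np.corrcoef(np.hstack([eeg, sources]).T)
    chan_src = corr[:eeg.shape[1], eeg.shape[1]:]     # channel-source block
    blink_score = np.abs(chan_src[frontal_idx]).sum(axis=0)
    blink_component = int(np.argmax(blink_score))     # candidate blink component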
Automatic detection of Parkinson's disease in running speech spoken in three different languages.
Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E
2016-01-01
The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD), considering recordings in different languages (Spanish, German, and Czech). A method for characterizing the speech signals, based on automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of the speech of people with PD versus healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed, confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work represents a step forward in the development of computer-aided tools for the automatic assessment of dysarthric speech in multiple languages.
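The voiced/unvoiced split that drives the method can be approximated with frame energy and zero-crossing rate; a rough stand-in for the paper's segmentation step, with thresholds that would need tuning per recording:

    import numpy as np

    def voiced_unvoiced(signal, frame_len=512, hop=256, zcr_thr=0.15, e_thr=1e-4):
        labels = []
        for i in range(0, len(signal) - frame_len, hop):
            frame = signal[i:i + frame_len]
            energy = np.mean(frame ** 2)
            zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
            if energy < e_thr:
                labels.append("silence")
            elif zcr > zcr_thr:
                labels.append("unvoiced")  # noise-like, high zero-crossing rate
            else:
                labels.append("voiced")    # periodic, low zero-crossing rate
        return labels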
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, owing to possible failures in the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the assembly quality of multi-element mass products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness during assembly is that the inspection is destructive: component parts cannot be returned to the process stream after sampling control, which reduces the actual productivity of the assembly equipment. Therefore, statistical procedures for continuous sampling control of multi-element product completeness during assembly on automatic rotor lines require sampling plans that ensure a minimum control-sample size. Comparison of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that the ACSP-1 provides lower limit values for the average output defect level; the average sample size for the ACSP-1 plan is also smaller than for the CSP-1 plan. Thus, applying the proposed plans and methods for continuous selective control to assembly quality management of multi-element products on automatic rotor lines will allow sampling control procedures to be automated and the required quality level of the assembled products to be ensured while minimizing sample size.
Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar
2009-10-01
Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic, randomized, atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic assessment of the students by the application, and communication of the individual assessments to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and the student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic with respect to both test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from the computer-based, mobile-based, and atlas-assisted standpoints.
NASA Astrophysics Data System (ADS)
Ometto, Giovanni; Calivá, Francesco; Al-Diri, Bashir; Bek, Toke; Hunter, Andrew
2016-03-01
Automatic, quick, and reliable identification of retinal landmarks in fundus photography is key for measurements used in research and in the diagnosis, screening, and treatment of common diseases affecting the eyes. This study presents a fast method for detecting the centre of mass of the vascular arcades, the optic nerve head (ONH), and the fovea, which are used to define five clinically relevant areas in screening programmes for diabetic retinopathy (DR). Thirty-eight fundus photographs showing 7203 DR lesions were analysed to find the landmarks manually, by two retina experts, and automatically, by the proposed method. The automatic identification of the ONH and fovea was performed using template matching based on normalised cross-correlation. The centre of mass of the arcades was obtained by fitting an ellipse to sampled coordinates of the main vessels; the coordinates were obtained by processing the image with Hessian filtering, followed by shape analysis, and finally sampling the results. The regions obtained manually and automatically were used to count the retinal lesions falling within each, and so to evaluate the method. 92.7% of the lesions fell within the same regions based on the landmarks selected by the two experts; 91.7% and 89.0% were counted in the same areas when comparing the method with the first and second expert, respectively. The inter-repeatability of the proposed method is comparable to that of the experts, while its 100% intra-repeatability makes the algorithm a valuable tool for tasks such as real-time analyses, large datasets, and studies of intra-patient variability.
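Template matching by normalised cross-correlation is directly available in OpenCV; a sketch in which the fundus and template images are placeholders:

    import cv2

    fundus = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("onh_template.png", cv2.IMREAD_GRAYSCALE)

    response = cv2.matchTemplate(fundus, template, cv2.TM_CCORR_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(response)
    onh_centre = (top_left[0] + template.shape[1] // 2,
                  top_left[1] + template.shape[0] // 2)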
NASA Astrophysics Data System (ADS)
Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.
2017-08-01
Purpose: Different lower tear meniscus parameters can be assessed clinically in dry eye diagnosis. The aim of this study was to propose and analyse the variability of a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open-source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was recorded by a digital camera attached to a slit lamp. A short light beam (3x5 mm) with moderate illumination was placed on the central portion of the meniscus (6 o'clock). Images were extracted from each video by a masked observer. Using open-source Java-based software (NIH ImageJ), a further observer measured the TMCA in the illuminated area, in masked and randomized order, by two methods: (1) a manual method, in which the TMCA was measured by hand on the images; (2) a semi-automatic method, in which the TMCA images were converted to 8-bit binary images, the holes inside the shape were filled, and the area of the isolated shape was measured. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistical difference between the results of the two techniques (p = 0.102). The Pearson correlation between techniques showed a significant, near-perfect positive correlation (r = 0.99; p < 0.001). Conclusions: This study presents a useful tool for objectively measuring the frontal central area of the meniscus in photographs with free open-source software.
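The semi-automatic ImageJ steps translate almost one-to-one into Python; an equivalent sketch (threshold choice and file name are illustrative):

    import numpy as np
    from scipy import ndimage
    from skimage import io, filters, measure

    img = io.imread("meniscus_frame.png", as_gray=True)
    binary = img > filters.threshold_otsu(img)   # 8-bit-style binarization
    filled = ndimage.binary_fill_holes(binary)   # fill holes inside the shape
    labels = measure.label(filled)
    areas = [region.area for region in measure.regionprops(labels)]
    tmca_pixels = max(areas)                     # area of the isolated shape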
Automatic peak selection by a Benjamini-Hochberg-based algorithm.
Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin
2013-01-01
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
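The B-H step itself is only a few lines. A minimal sketch of the step-up procedure, applied to candidate peaks already converted to p-values; the significance level is illustrative:

    import numpy as np

    def bh_select(p_values, alpha=0.05):
        """Return how many of the sorted candidates to keep under B-H."""
        p = np.sort(np.asarray(p_values))
        m = len(p)
        below = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
        return int(below[-1] + 1) if below.size else 0  # largest k passing the test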
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error can degrade the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method: commercial optical software cannot optimize this clocking, while existing automatic figure-error balancing methods introduce approximation errors and require a complex, time-consuming optimization model to be built. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. The method is based on precise ray tracing, rather than approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function, and a simulated annealing algorithm seeks the optimal combination of rotation angles of the optical elements. The method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
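The global search can be sketched with SciPy's dual_annealing. The cost function below is a toy stand-in for the paper's ray-traced composite wavefront RMS: each element contributes an astigmatism-like phasor whose orientation rotates with the clocking angle, and the contributions add vectorially:

    import numpy as np
    from scipy.optimize import dual_annealing

    def composite_rms(angles):
        # Hypothetical per-element aberration magnitudes; a real implementation
        # would ray-trace the system with each element rotated by angles[i].
        mags = np.array([0.9, 0.6, 0.4, 0.3])
        phasors = mags * np.exp(2j * np.asarray(angles))  # astigmatism ~ 2*theta
        return np.abs(phasors.sum())

    bounds = [(0.0, 2 * np.pi)] * 4
    result = dual_annealing(composite_rms, bounds=bounds, seed=0)
    best_angles = result.x  # clocking angles minimizing the composite RMS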
Automatic vibration mode selection and excitation; combining modal filtering with autoresonance
NASA Astrophysics Data System (ADS)
Davis, Solomon; Bucher, Izhak
2018-02-01
Autoresonance is a well-known nonlinear feedback method for automatically exciting a system at its natural frequency. Though highly effective at exciting single-degree-of-freedom systems, in its simplest form it lacks a mechanism for choosing the mode of excitation when more than one is present: a single mode will be excited automatically, but this mode cannot be chosen or changed. In this paper, a new method for automatically exciting a general second-order system at any desired natural frequency using Autoresonance is proposed. The article begins by deriving a concise expression for the frequency of the limit cycle induced by an Autoresonance feedback loop enclosing the system. The expression is based on modal decomposition and provides valuable insight into the behavior of a system controlled in this way. With this expression, a method for selecting and exciting a desired mode follows naturally by combining Autoresonance with modal filtering: by taking suitable linear combinations of the sensor signals, orthogonality can be used to "filter out" all the unwanted modes. The desired mode's natural frequency is then automatically reflected in the limit cycle. In experiments, the technique has proven extremely robust, even when the amplitude of the desired mode is significantly smaller than the others and the modal filters are quite inaccurate.
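The modal-filtering half of the scheme reduces to a linear combination of sensor signals. A minimal sketch, assuming an identified mode-shape matrix Phi (n_sensors x n_modes) is available:

    import numpy as np

    Phi = np.load("mode_shapes.npy")   # hypothetical identified mode shapes
    weights = np.linalg.pinv(Phi)      # rows act as modal filters, by orthogonality

    def modal_coordinate(sensor_samples, mode):
        """Combination of sensor signals that passes only the chosen mode."""
        return weights[mode] @ sensor_samples  # feed this to the feedback loop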
Measuring signal-to-noise ratio automatically
NASA Technical Reports Server (NTRS)
Bergman, L. A.; Johnston, A. R.
1980-01-01
Automated method of measuring signal-to-noise ratio in digital communication channels is more precise and 100 times faster than previous methods used. Method based on bit-error-rate (BER) measurement can be used with cable, microwave radio, or optical links.
Automatic corpus callosum segmentation for standardized MR brain scanning
NASA Astrophysics Data System (ADS)
Xu, Qing; Chen, Hong; Zhang, Li; Novak, Carol L.
2007-03-01
Magnetic Resonance (MR) brain scanning is often planned manually with the goal of aligning the imaging plane with key anatomic landmarks. The planning is time-consuming and subject to inter- and intra-operator variability. Automatic and standardized planning of brain scans is highly useful for clinical applications, and for maximum utility should work on patients of all ages. In this study, we propose a method for fully automatic planning that utilizes the landmarks from two orthogonal images to define the geometry of the third scanning plane. The corpus callosum (CC) is segmented in sagittal images by an active shape model (ASM), and the result is further improved by weighting the boundary movement with confidence scores and incorporating region-based refinement. Based on the extracted contour of the CC, several important landmarks are located and then combined with landmarks from the coronal or transverse plane to define the geometry of the third plane. Our automatic method is tested on 54 MR images from 24 patients and 3 healthy volunteers, with ages ranging from 4 months to 70 years old. The average accuracy with respect to two manually labeled points on the CC is 3.54 mm and 4.19 mm, and the estimated orientation differs by an average of 2.48 degrees from that of the line connecting them, demonstrating that our method is sufficiently accurate for clinical use.
A robust automatic phase correction method for signal dense spectra
NASA Astrophysics Data System (ADS)
Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang
2013-09-01
A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining 'coarse tuning' with 'fine tuning' is introduced to correct various spectra accurately. In the 'coarse tuning' procedure, a new robust baseline recognition method is proposed for determining the positions of the tail ends of the peaks, and the preliminary phased spectra are then obtained by minimizing an objective function based on the height difference of these tail ends. After the 'coarse tuning', the peaks in the preliminarily corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification, a custom negative penalty function used in the 'fine tuning' step is constructed so that negative points belonging to genuine negative peaks and distorted peaks are excluded from the penalty. Finally, the finely phased spectra are obtained by minimizing this custom negative penalty function. The method proves very robust: it tolerates low signal-to-noise ratios and large baseline distortions, and it is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with over-crowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
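A generic sketch of automatic phasing in this spirit is given below: zero- and first-order phase parameters are found by minimizing a simple negativity penalty on the real part of the spectrum. This surrogate penalty is not the authors' custom function, and the spectrum is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def phase(spectrum, phi0, phi1):
    """Apply zero-order (phi0) and first-order (phi1) phase correction."""
    ramp = np.arange(len(spectrum)) / len(spectrum)
    return spectrum * np.exp(-1j * (phi0 + phi1 * ramp))

def negativity_penalty(params, spectrum):
    """Penalize negative points in the real part: a common generic
    surrogate for the custom penalty described in the paper."""
    real = phase(spectrum, *params).real
    return np.sum(np.minimum(real, 0.0) ** 2)

def lorentzian(x, x0, w):
    # Real part: absorption lineshape; imaginary part: dispersion.
    return 1.0 / (1.0 + 1j * (x - x0) / w)

# Synthetic 1D spectrum with two peaks, deliberately mis-phased.
x = np.linspace(-1, 1, 4096)
spec = lorentzian(x, 0.3, 0.01) + lorentzian(x, -0.4, 0.02)
mis_phased = phase(spec, -0.8, 0.5)

result = minimize(negativity_penalty, x0=[0.0, 0.0], args=(mis_phased,),
                  method="Nelder-Mead")
phi0, phi1 = result.x
# Values near phi0 = 0.8, phi1 = -0.5 undo the applied mis-phasing.
print(f"recovered phases: phi0={phi0:.2f}, phi1={phi1:.2f}")
```

A pure negativity penalty fails on spectra with genuine negative peaks, which is precisely the failure mode the paper's peak classification and custom penalty are designed to avoid.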
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
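For readers unfamiliar with the core algorithm, a minimal Seeded Region Growing sketch follows, assuming a single operator-placed seed and a simple tolerance around the running region mean; seed placement, 3D extension, and the study's manual refinements are not reproduced, and the image is synthetic.

```python
import numpy as np
from collections import deque

def seeded_region_growing(image, seed, tolerance):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity stays within `tolerance` of the running region mean."""
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - total / count) <= tolerance):
                mask[nr, nc] = True
                total += float(image[nr, nc])
                count += 1
                queue.append((nr, nc))
    return mask

# Toy 2D "slice": bright disc (brain) on a dark background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
slice_ = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2, 180.0, 20.0)
slice_ += rng.normal(0, 5, slice_.shape)
mask = seeded_region_growing(slice_, seed=(64, 64), tolerance=40)
print(f"segmented area (pixels): {mask.sum()}")
```

Summing the mask over all slices and multiplying by the voxel volume yields the brain volume used for the growth charts.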
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
Hardware implementation of an automatic detection and classification of objects that can represent an obstacle for an autonomous mobile robot using stereo vision algorithms is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. This method is divided in two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis; the second is the classification step, based on visual primitives and an SVM classifier. The proposed method is executed on a GPU in order to reduce processing time. This is performed with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and MATLAB on a Windows 10 PC.
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Using the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
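The envelope computation can be sketched as follows: the signal is lightly denoised with a DWT (PyWavelets assumed available) and its Shannon energy envelope is computed and smoothed. The heart-sound signal here is synthetic, and the paper's three morphological features are not derived.

```python
import numpy as np
import pywt

def shannon_envelope(signal, frame=64):
    """Shannon energy envelope: E = -x^2 * log(x^2), smoothed by framing."""
    x = signal / (np.max(np.abs(signal)) + 1e-12)     # normalize to [-1, 1]
    energy = -x ** 2 * np.log(x ** 2 + 1e-12)
    kernel = np.ones(frame) / frame                   # moving-average smoothing
    return np.convolve(energy, kernel, mode="same")

# Synthetic heart-sound-like signal: two bursts (S1, S2) plus a soft murmur.
rng = np.random.default_rng(0)
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
s1 = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.10) / 0.020) ** 2)
s2 = np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.45) / 0.015) ** 2)
murmur = 0.15 * rng.standard_normal(len(t)) * ((t > 0.15) & (t < 0.40))
hs = s1 + s2 + murmur + 0.02 * rng.standard_normal(len(t))

# DWT-based denoising: drop the finest-scale detail coefficients.
coeffs = pywt.wavedec(hs, "db6", level=5)
coeffs[-1] = np.zeros_like(coeffs[-1])
denoised = pywt.waverec(coeffs, "db6")[: len(hs)]

envelope = shannon_envelope(denoised)
print(f"envelope maximum (near S1/S2 bursts): {envelope.max():.3f}")
```

Morphological quantities of this envelope between S1 and S2 (plateau height, width, slope, and so on) are the kind of murmur descriptors the abstract refers to.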
Höink, Anna Janina; Schülke, Christoph; Koch, Raphael; Löhnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris
2017-11-01
Purpose To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Materials and Methods Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. Results The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference for established parameters was not statistically significant (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease is not found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to their greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions. Key points · Liver lesion measurements according to EASL and mRECIST are more precise when performed semi-automatically. · The higher reproducibility may facilitate a more reliable classification of therapy response. · Measurements according to RECIST and WHO offer equivalent precision semi-automatically and manually. Citation Format · Höink AJ, Schülke C, Koch R et al. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT. Fortschr Röntgenstr 2017; 189: 1067 - 1075. © Georg Thieme Verlag KG Stuttgart · New York.
Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images
Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga
2015-01-01
Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure of recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2], but these methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods, such as [3] and [4], require user input prior to the automatic segmentation and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement compared with other published examples. PMID:25769273
Intelligent E-Learning Systems: Automatic Construction of Ontologies
NASA Astrophysics Data System (ADS)
Peso, Jesús del; de Arriaga, Fernando
2008-05-01
During the last years a new generation of Intelligent E-Learning Systems (ILS) has emerged with enhanced functionality due mainly to influences from Distributed Artificial Intelligence, the use of cognitive modelling, the extensive use of the Internet, and new educational ideas such as student-centered education and Knowledge Management. The automatic construction of ontologies provides a means of automatically updating the knowledge bases of the respective ILS, and of increasing their interoperability and communication when they share the same ontology. The paper presents a new approach, able to produce ontologies from a small number of documents such as those obtained from the Internet, without the assistance of large corpora, by using simple syntactic rules and some semantic information. The method is independent of the natural language used. The use of a multi-agent system increases the flexibility and capability of the method. Although the method can easily be improved, the results obtained so far are promising.
Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain
NASA Astrophysics Data System (ADS)
Krauß, Thomas; Fischer, Peter
2016-08-01
In this paper we present a new method for fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. News coverage shows that the majority of occurring natural hazards are flood events, and many flood prediction systems have already been developed. However, most existing systems for deriving areas endangered by flooding are based only on horizontal and vertical distances to existing rivers and lakes, and typically do not take into account dangers arising directly from heavy rain events. In a study we conducted together with a German insurance company, a new approach for detecting areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for classification of digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and evaluate the results using the available insurance data.
Automatic sentence extraction for the detection of scientific paper relations
NASA Astrophysics Data System (ADS)
Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.
2018-03-01
The relations between scientific papers are very useful for researchers to see the interconnections between scientific papers quickly. By observing inter-article relationships, researchers can identify, among others, the weaknesses of existing research, performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem remains in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the resulting system is then compared with that of the manual extraction system. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.
Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Fontana, Juan M; Sazonov, Edward
2012-09-01
The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food types. Data collected from experiments of 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, the AHC achieved an accuracy >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integral application for objective Monitoring of Ingestive Behavior in free living conditions.
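A minimal sketch of the clustering step is shown below using scikit-learn's AffinityPropagation, whose key property for this application is that it infers the number of clusters (food items) itself. The per-swallow feature vectors here are hypothetical placeholders, not the study's actual chewing/swallowing features.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical per-swallow feature vectors (e.g., preceding chew count and
# chewing-bout duration) for one meal; three food items are simulated.
rng = np.random.default_rng(0)
food_a = rng.normal([10.0, 2.0], 0.5, size=(20, 2))
food_b = rng.normal([25.0, 5.0], 0.5, size=(15, 2))
food_c = rng.normal([40.0, 3.0], 0.5, size=(18, 2))
features = np.vstack([food_a, food_b, food_c])

# Affinity Propagation chooses the number of exemplars by message passing,
# which is exactly why it suits "how many foods?" with no prior count.
ap = AffinityPropagation(random_state=0).fit(features)
n_segments = len(ap.cluster_centers_indices_)
print(f"estimated number of food items in the meal: {n_segments}")
print("segment label per swallow:", ap.labels_[:10], "...")
```

The supervised AHC variant in the study trades this automatic count for higher accuracy by using labeled training meals.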
Automatic low-order aberration correction based on geometrical optics for slab lasers.
Yu, Xin; Dong, Lizhi; Lai, Boheng; Yang, Ping; Liu, Yong; Kong, Qingfeng; Yang, Kangjian; Tang, Guomao; Xu, Bing
2017-02-20
In this paper, we present a method based on geometrical optics to simultaneously correct low-order aberrations and reshape the beams of slab lasers. A coaxial optical system with three lenses is adopted. The positions of the three lenses are directly calculated from the beam parameters detected by wavefront sensors. The initial size of the input beams is 1.8 mm × 11 mm, and peak-to-valley (PV) values of the wavefront range up to several tens of microns. After automatic correction, the beam dimensions reach nearly 22 mm × 22 mm as expected, and PV values of the wavefront are less than 2 μm. The effectiveness and precision of this method are verified experimentally.
NASA Astrophysics Data System (ADS)
Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip
2018-06-01
Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
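As an illustration of the underlying idea (not the paper's improved variant), the sketch below applies plain mean shift filtering with OpenCV to flatten grain interiors and then takes edges of the piecewise-flat result as boundary candidates; the input image is synthetic.

```python
import cv2
import numpy as np

# Synthetic "metallographic" image: blocky grains with blurred boundaries.
rng = np.random.default_rng(1)
grains = rng.integers(80, 220, size=(16, 16), dtype=np.uint8)
img = cv2.resize(grains, (256, 256), interpolation=cv2.INTER_NEAREST)
img = cv2.GaussianBlur(img, (9, 9), 3)               # blur the boundaries
img_bgr = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

# Mean shift filtering flattens intensities within each grain region,
# restoring sharp transitions at grain boundaries
# (sp: spatial window radius, sr: intensity/range window radius).
shifted = cv2.pyrMeanShiftFiltering(img_bgr, sp=15, sr=20)

# Boundary candidates = edges of the piecewise-flat result.
edges = cv2.Canny(cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY), 30, 90)
print(f"boundary pixels found: {int((edges > 0).sum())}")
```

Mean grain size then follows from standard intercept or planimetric counts on the closed boundary network.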
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
Analysis of facial expressions in parkinson's disease through video-based automatic methods.
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
2017-04-01
The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic, video-based method for studying facial expressions in PD patients. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could benefit from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wasserthal, Christian; Engel, Karin; Rink, Karsten; Brechmann, André
We propose an automatic procedure for the correct segmentation of grey and white matter in MR data sets of the human brain. Our method exploits general anatomical knowledge for the initial segmentation and for the subsequent refinement of the estimation of the cortical grey matter. Our results are comparable to manual segmentations.
An automatic method for retrieving and indexing catalogues of biomedical courses.
Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose
2008-11-06
Although there is extensive information about Biomedical Informatics education and courses on different websites, this information is usually not exhaustive and is difficult to keep up to date. We propose a new methodology based on information retrieval techniques for automatically extracting, indexing and retrieving information about educational offers. A web application has been developed to make such information available in an inventory of courses and educational offers.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area from microCT images, image segmentation by threshold selection is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Thirty-one extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume (p=0.93) or root canal surface area (p=0.79). Although both visual and automatic segmentation can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
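The abstract does not specify the algorithm behind the manufacturer's "Automatic Threshold Tool"; Otsu's method is a common stand-in for operator-free threshold selection, sketched here on a synthetic slice with an assumed voxel size.

```python
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic microCT-like slice: dark root canal inside brighter dentin.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]
slice_ = np.where((yy - 128) ** 2 + (xx - 128) ** 2 < 25 ** 2, 40.0, 160.0)
slice_ += rng.normal(0, 8, slice_.shape)

# Automatic threshold selection (Otsu) removes the operator's visual
# judgement from the segmentation step entirely.
t = threshold_otsu(slice_)
canal = slice_ < t                      # canal voxels are the dark phase
voxel_volume_mm3 = 0.01 ** 3            # assumed isotropic 10 um voxels
print(f"threshold = {t:.1f}, "
      f"canal volume = {canal.sum() * voxel_volume_mm3:.4f} mm^3 per slice")
```

Because the threshold is a deterministic function of the histogram, repeated runs reproduce identical volumes, which is the reproducibility advantage reported above.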
Integration of tools for binding archetypes to SNOMED CT
Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Åhlfeldt, Hans; Rector, Alan
2008-01-01
Background The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Methods Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. Results An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Conclusion Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail. PMID:19007444
Research on the Construction of Remote Sensing Automatic Interpretation Symbol Big Data
NASA Astrophysics Data System (ADS)
Gao, Y.; Liu, R.; Liu, J.; Cheng, T.
2018-04-01
Remote sensing automatic interpretation symbol (RSAIS) data are an inexpensive and fast means of providing precise in-situ information for image interpretation and accuracy assessment. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-based storage method for massive data. Additionally, it introduced an offline and online data update mode and a dynamic data evaluation mechanism, with the aim of creating an efficient approach to RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013-2015 based on the National Geographic Conditions Monitoring Project of China, and it has been updated annually since 2016. The RSAIS big data has proven to be a good resource for large-scale image interpretation and field validation. It is also notable that it has the potential to support automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.
Methods for automatic detection of artifacts in microelectrode recordings.
Bakštein, Eduard; Sieger, Tomáš; Wild, Jiří; Novák, Daniel; Schneider, Jakub; Vostatek, Pavel; Urgošík, Dušan; Jech, Robert
2017-10-01
Extracellular microelectrode recording (MER) is a prominent technique for studies of extracellular single-unit neuronal activity. In order to achieve robust results in more complex analysis pipelines, it is necessary to have high-quality input data with a low amount of artifacts. We show that noise (mainly electromagnetic interference and motion artifacts) may affect more than 25% of the recording length in a clinical MER database. We present several methods for automatic detection of noise in MER signals, based on (i) unsupervised detection of stationary segments, (ii) large peaks in the power spectral density, and (iii) a classifier based on multiple time- and frequency-domain features. We evaluate the proposed methods on a manually annotated database of 5735 ten-second MER signals from 58 Parkinson's disease patients. The existing methods for artifact detection in single-channel MER that have been rigorously tested are based on unsupervised change-point detection. We show on an extensive real MER database that the presented techniques are better suited to the task of artifact identification and achieve much better results. The best-performing classifiers (bagging and decision tree) achieved artifact classification accuracy of up to 89% on an unseen test set and outperformed the unsupervised techniques by 5-10%. This was close to the level of agreement among raters using manual annotation (93.5%). We conclude that the proposed methods are suitable for automatic MER denoising and may help in the efficient elimination of undesirable signal artifacts. Copyright © 2017 Elsevier B.V. All rights reserved.
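A toy version of approach (iii) is sketched below: simple time- and frequency-domain features feed a bagged decision-tree classifier, the best-performing family reported above. The feature set and the injected 50 Hz interference are illustrative assumptions, not the paper's features.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def mer_features(segment, fs=24000):
    """Simple time- and frequency-domain features of a MER segment
    (an illustrative subset, not the paper's full feature list)."""
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    low_band = spectrum[freqs < 500].sum() / spectrum.sum()  # mains/motion band
    return [segment.std(), np.abs(np.diff(segment)).mean(), low_band]

rng = np.random.default_rng(0)
segments, labels = [], []
for _ in range(400):
    sig = rng.normal(0, 1, 24000)                  # 1 s of background activity
    if rng.random() < 0.5:                         # inject a 50 Hz artifact
        sig += 3 * np.sin(2 * np.pi * 50 * np.arange(24000) / 24000)
        labels.append(1)
    else:
        labels.append(0)
    segments.append(mer_features(sig))

X_train, X_test, y_train, y_test = train_test_split(
    np.array(segments), np.array(labels), test_size=0.25, random_state=0)
clf = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                        n_estimators=50, random_state=0).fit(X_train, y_train)
print(f"artifact classification accuracy: {clf.score(X_test, y_test):.2f}")
```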
NASA Astrophysics Data System (ADS)
Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2015-01-01
Pelvic magnetic resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing the measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MR image using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are previously non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme on the training database.
Semi Automatic Ontology Instantiation in the domain of Risk Management
NASA Astrophysics Data System (ADS)
Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine
One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text in the domain of Risk Management. This method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) the ontology instantiation process. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a Generic Domain Ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.
NASA Astrophysics Data System (ADS)
Masson, Josiane; Soille, Pierre; Mueller, Rick
2004-10-01
In the context of the Common Agricultural Policy (CAP) there is a strong interest of the European Commission in counting and individually locating fruit trees. An automatic counting algorithm developed by the JRC (OLICOUNT) was used in the past for olive trees only, on 1 m black and white orthophotos, but with limitations in the case of young trees or irregular groves. This study investigates the improvement of fruit tree identification using VHR images on a large set of data in three test sites: one in Crete (Greece), one in the south-east of France with a majority of olive trees and associated fruit trees, and the last one in Florida on citrus trees. OLICOUNT was compared with two other automatic tree counting applications, one using the CRISP software on citrus trees and the other completely automatic, based on regional minima (morphological image analysis). Additional investigation was undertaken to refine the methods. This paper describes the automatic methods and presents the results derived from the tests.
Automatic non-proliferative diabetic retinopathy screening system based on color fundus image.
Xiao, Zhitao; Zhang, Xinpeng; Geng, Lei; Zhang, Fang; Wu, Jun; Tong, Jun; Ogunbona, Philip O; Shan, Chunyan
2017-10-26
Non-proliferative diabetic retinopathy is the early stage of diabetic retinopathy. Automatic detection of non-proliferative diabetic retinopathy is significant for clinical diagnosis, early screening and monitoring of disease progression. This paper introduces the design and implementation of an automatic system for screening non-proliferative diabetic retinopathy based on color fundus images. First, the fundus structures, including blood vessels, optic disc and macula, are extracted and located, respectively. In particular, a new optic disc localization method using parabolic fitting is proposed based on the physiological structure characteristics of the optic disc and blood vessels. Then, early lesions, such as microaneurysms, hemorrhages and hard exudates, are detected based on their respective characteristics. An equivalent optical model simulating the human eye is designed based on the anatomical structure of the retina. Main structures and early lesions are reconstructed in 3D space for better visualization. Finally, the severity of each image is evaluated based on the international criteria for diabetic retinopathy. The system has been tested on public databases and images from hospitals. Experimental results demonstrate that the proposed system achieves high accuracy in detecting the main structures and early lesions, and the severity classification for non-proliferative diabetic retinopathy is likewise accurate. Our system can assist ophthalmologists in clinical diagnosis, automatic screening and monitoring of disease progression.
NASA Astrophysics Data System (ADS)
Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2017-02-01
Medical image registration establishes a correspondence between images of biological structures and it is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is however important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and they are used to compute a smoothing thin-plate splines transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.
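The initialization step can be sketched with SciPy's smoothing thin-plate-spline interpolator: given homologous landmarks in atlas and patient space, it yields a transformation that maps atlas coordinates into the patient volume. The landmark coordinates below are synthetic, and the learning-based landmark detection itself is not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical homologous landmarks: atlas space -> patient space (3D, mm).
rng = np.random.default_rng(0)
atlas_pts = rng.uniform(0, 100, size=(12, 3))
true_shift = np.array([4.0, -2.5, 1.0])
patient_pts = atlas_pts + true_shift + rng.normal(0, 0.3, atlas_pts.shape)

# Smoothing thin-plate-spline warp (vector-valued: one spline per output
# coordinate); smoothing > 0 tolerates small landmark localization errors.
warp = RBFInterpolator(atlas_pts, patient_pts,
                       kernel="thin_plate_spline", smoothing=1.0)

# Map atlas voxel coordinates into the patient volume; the warped atlas
# image then initializes the intensity-based non-rigid registration.
query = np.array([[50.0, 50.0, 50.0]])
print("mapped point:", warp(query)[0])
```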
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
Mimicry and automatic imitation are not correlated
van Den Bossche, Sofie; Cracco, Emiel; Bardi, Lara; Rigoni, Davide; Brass, Marcel
2017-01-01
It is widely known that individuals have a tendency to imitate each other. However, different psychological disciplines assess imitation in different manners. While social psychologists assess mimicry by means of action observation, cognitive psychologists assess automatic imitation with reaction time based measures on a trial-by-trial basis. Although these methods differ in crucial methodological aspects, both phenomena are assumed to rely on similar underlying mechanisms. This raises the fundamental question whether mimicry and automatic imitation are actually correlated. In the present research we assessed both phenomena and did not find a meaningful correlation. Moreover, personality traits such as empathy, autism traits, and traits related to self- versus other-focus did not correlate with mimicry or automatic imitation either. Theoretical implications are discussed. PMID:28877197
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product designs in order to make them simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designers in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally, these three tasks are performed without an interactive process or user intervention, so that product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
Multifractal-based nuclei segmentation in fish images.
Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan
2017-09-01
A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. Testing results show that the new method has advantages compared to previously reported methods.
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method comprises three stages: classification of images based on a maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images by connected-component and morphological operators to extract pixels of the desired objects while removing undesirable pixels of each class; and, finally, line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
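The final stage lends itself to a compact sketch: a basic RANSAC line fit over candidate road pixels, repeatedly sampling two points and counting inliers by perpendicular distance. The pixel data are synthetic, and the classification and morphological stages are assumed to have already run.

```python
import numpy as np

def ransac_line(points, n_iter=500, inlier_tol=2.0, rng=None):
    """Fit one dominant line to 2D road-candidate pixels with RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the line through p1, p2.
        dist = np.abs(d[0] * (points[:, 1] - p1[1])
                      - d[1] * (points[:, 0] - p1[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic road-class pixels: one linear road plus scattered clutter.
rng = np.random.default_rng(0)
t = rng.uniform(0, 200, 150)
road = np.column_stack([t, 0.5 * t + 30 + rng.normal(0, 1, 150)])
clutter = rng.uniform(0, 200, size=(80, 2))
points = np.vstack([road, clutter])
inliers = ransac_line(points)
print(f"road pixels kept as inliers: {inliers.sum()} of {len(points)}")
```

Running the fit repeatedly on the residual points, removing each accepted line's inliers, recovers a network of centerline segments rather than a single road.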
Counting the number of Feynman graphs in QCD
NASA Astrophysics Data System (ADS)
Kaneko, T.
2018-05-01
Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system for perturbative calculations. A method of counting the number of Feynman graphs weighted by symmetry factors was established based on zero-dimensional field theory and has been used in scalar theories and QED. In this article the method is generalized to more complicated models by direct calculation of the generating functions on a computer algebra system. The method is applied to QCD with and without counterterms, where many higher-order contributions are calculated automatically.
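The zero-dimensional idea can be made concrete in a few lines. For scalar φ⁴ theory, the coefficient of gⁿ in the zero-dimensional partition function equals the sum over vacuum graphs weighted by inverse symmetry factors, computable from Wick pairing counts alone; this sketch covers only that scalar vacuum case, not the QCD generating functions of the paper.

```python
from fractions import Fraction
from math import factorial

def double_factorial(n):
    """(2k-1)!! counts the pairings (Wick contractions) of 2k fields."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def phi4_vacuum_weight(n):
    """Coefficient of g^n in the zero-dimensional phi^4 partition function:
    Z(g) = sum_n (g^n / (n! * 24^n)) * <phi^(4n)>, with <phi^(2k)> = (2k-1)!!.
    The value equals the sum over order-n vacuum graphs of 1/(symmetry factor)."""
    return Fraction(double_factorial(4 * n - 1),
                    factorial(n) * 24 ** n)

for n in range(1, 5):
    print(f"order g^{n}: symmetry-weighted graph count = {phi4_vacuum_weight(n)}")
# Order g^1 gives 1/8: the single "figure-eight" vacuum diagram has
# symmetry factor 8, matching the textbook result.
```

Comparing such weighted counts against the sum of symmetry factors emitted by a graph generator is exactly the cross-check the abstract describes.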
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Automatic learning-based beam angle selection for thoracic IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca
Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ-at-risk sparing, and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload, while sustaining plan quality.
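The learning component can be sketched as follows: a random forest regressor maps per-(plan, angle) anatomical features to a beam score, and candidate gantry angles for a new patient are ranked by predicted score. The feature set is hypothetical, and the interbeam-dependency optimization described above is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: anatomical features per (plan, angle) pair
# (e.g., target depth along the beam, lung path length, heart overlap)
# with a score reflecting how similar beams appear in approved plans.
n_samples, n_features = 2000, 6
X = rng.uniform(0, 1, size=(n_samples, n_features))
scores = 1.0 - X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.05, n_samples)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, scores)

# New patient: score every candidate gantry angle from its anatomical
# features and keep the top-ranked angles as the automatic proposal.
candidate_angles = np.arange(0, 360, 10)
new_features = rng.uniform(0, 1, size=(len(candidate_angles), n_features))
angle_scores = model.predict(new_features)
best = candidate_angles[np.argsort(angle_scores)[::-1][:5]]
print("proposed beam angles (deg):", sorted(best.tolist()))
```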
NASA Astrophysics Data System (ADS)
Osmanoglu, B.; Ozkan, C.; Sunar, F.
2013-10-01
After air strikes on July 14 and 15, 2006 the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available, and wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind and the dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, an Artificial Neural Network Multilayer Perceptron (ANN MLP) is used, since it is more flexible and efficient than a conventional maximum likelihood classifier for multisource and multi-temporal data. The Levenberg-Marquardt (LM) algorithm is chosen for ANN MLP learning. Training and test data for supervised classification are composed from the textural information derived from the SAR images. This approach is semi-automatic because tuning the parameters of the classifier and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
Research on time synchronization scheme of MES systems in manufacturing enterprise
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the spread of information systems and automated production in manufacturing enterprises, data interaction between business systems is becoming more and more frequent, and the accuracy requirements for time are correspondingly increasing. However, NTP network time synchronization methods lack redundancy and monitoring mechanisms: when a failure occurs, it can only be remedied after the event, which greatly affects production data and system interactions. Based on this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and fails over via scripts.
Automatic comic page image understanding based on edge segment analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai
2013-12-01
Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasquier, David; Lacornerie, Thomas; Vermandel, Maximilien
Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic because of pelvic organ shape variability. We evaluate a three-dimensional (3D), deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs-at-risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. Prostates without seminal vesicles were delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value = 100). Results: For the CTV, the Vr, Vo, and Vc were 1.13 (±0.1 SD), 0.78 (±0.05 SD), and 94.75 (±3.3 SD), respectively. For the rectum, the Vr, Vo, and Vc were 0.97 (±0.1 SD), 0.78 (±0.06 SD), and 86.52 (±5 SD), respectively. For the bladder, the Vr, Vo, and Vc were 0.95 (±0.03 SD), 0.88 (±0.03 SD), and 91.29 (±3.1 SD), respectively. Conclusions: Our results show that the organ-model method is robust and results in reproducible prostate segmentation with minor interactive corrections. For automatic bladder and rectum delineation, magnetic resonance imaging soft-tissue contrast enables the use of region-growing methods.
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
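The eigenvalue computation at the heart of the automatic scaling is ordinary power iteration on a self-adjoint, positive-definite operator, sketched below against a random test matrix standing in for the actual scaling operator of the reconstruction.

```python
import numpy as np

def max_eigenvalue(matvec, dim, n_iter=100, tol=1e-10, rng=None):
    """Power iteration for the maximal eigenvalue of a self-adjoint,
    positive-definite operator given only as a matrix-vector product."""
    rng = rng or np.random.default_rng(0)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = matvec(v)
        lam_new = float(v @ w)          # Rayleigh quotient (||v|| = 1)
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            return lam_new
        lam = lam_new
    return lam

# Illustration: A = M^T M + eps*I is self-adjoint and positive definite,
# standing in for the scaling operator of the model-based reconstruction.
rng = np.random.default_rng(0)
M = rng.normal(size=(200, 200))
A = M.T @ M + 1e-3 * np.eye(200)
lam = max_eigenvalue(lambda x: A @ x, dim=200)
print(f"power iteration: {lam:.4f}, "
      f"numpy eigvalsh: {np.linalg.eigvalsh(A)[-1]:.4f}")
```

Because only matrix-vector products are needed, the same loop applies when the operator is available solely through forward and adjoint model evaluations, as in the iterative reconstruction described above.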
Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu
2016-12-01
Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to eliminate the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even on low-resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. In order to achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation so as to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high-resolution CT image group the average error is less than 1 mm. For the low-resolution image group the results are also accurate, with an average error of less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Feasibility of automatic evaluation of clinical rules in general practice.
Opondo, Dedan; Visscher, Stefan; Eslami, Saied; Medlock, Stephanie; Verheij, Robert; Korevaar, Joke C; Abu-Hanna, Ameen
2017-04-01
To assess the extent to which clinical rules (CRs) can be implemented for automatic evaluation of quality of care in general practice. We assessed 81 clinical rules (CRs), adapted from a subset of the Assessing Care of Vulnerable Elders (ACOVE) clinical rules, against the Dutch College of General Practitioners (NHG) data model. Each CR was analyzed using the Logical Elements Rule Method (LERM), a stepwise method for assessing and formalizing clinical rules for decision support. Clinical rules that satisfied the criteria outlined in the LERM method were judged to be implementable for automatic evaluation in general practice. Thirty-three out of 81 (40.7%) Dutch-translated ACOVE clinical rules can be automatically evaluated in electronic medical record systems: 7 out of 7 CRs (100%) in the domain of diabetes, 9/17 (52.9%) in medication use, 5/10 (50%) in depression care, 3/6 (50%) in nutrition care, 6/13 (46.1%) in dementia care, 1/6 (16.6%) in end-of-life care, 2/13 (15.3%) in continuity of care, and 0/9 (0%) in fall-related care. Lack of documentation of care activities between primary and secondary health facilities and ambiguous formulation of clinical rules were the main reasons for the inability to automate the clinical rules. Approximately two-fifths of the primary care Dutch ACOVE-based clinical rules can be automatically evaluated. Clear definition of clinical rules, improved GP database design and electronic linkage of primary and secondary healthcare facilities can improve the prospects of automatic assessment of quality of care. These findings are relevant especially because the Netherlands has very high automation of primary care. Copyright © 2017 Elsevier B.V. All rights reserved.
The analysis of selected orientation methods of architectural objects' scans
NASA Astrophysics Data System (ADS)
Markiewicz, Jakub S.; Kajdewicz, Irmina; Zawieska, Dorota
2015-05-01
Terrestrial laser scanning (TLS) is commonly used in different areas, inter alia in modelling architectural objects. One of the most important parts of TLS data processing is scan registration, which significantly affects the accuracy of the generated high-resolution photogrammetric documentation. This process is time consuming, especially for a large number of scans, and is mostly based on automatic detection and semi-automatic measurement of control points placed on the object. In the case of complicated historical buildings, it is sometimes forbidden to place survey targets on the object, or it may be difficult to distribute the targets in an optimal way. Such problems encourage the search for new methods of scan registration that eliminate the step of placing survey targets on the object. In this paper the results of a target-based registration method are presented. The survey targets placed on the walls of the historical chambers of the Museum of King Jan III's Palace at Wilanów and on the walls of the ruins of the Bishops Castle in Iłża were used for scan orientation. Several variants of orientation were performed, taking into account different placements and numbers of survey marks. Afterwards, raster images were generated from the scans, and the SIFT and SURF image-processing algorithms were used to automatically search for corresponding natural points. The use of automatically identified points for TLS data orientation was analysed. The results of both TLS data registration methods were summarized and presented in numerical and graphical form.
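For readers unfamiliar with the feature-matching step, the following hedged sketch shows SIFT tie-point matching between two scan-derived rasters with OpenCV; it assumes an OpenCV build that includes SIFT (e.g. opencv-contrib-python), and the file names are placeholders, not the datasets used in the paper.

```python
# Hedged sketch: matching natural tie points between two scan-derived
# raster images with SIFT, as an alternative to placed survey targets.
import cv2

img1 = cv2.imread("scan_station_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan_station_2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points for scan orientation")
```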
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morshed, Nader; Lawrence Berkeley National Laboratory, Berkeley, CA 94720; Echols, Nathaniel, E-mail: nechols@lbl.gov
2015-05-01
A method to automatically identify possible elemental ions in X-ray crystal structures has been extended to use support vector machine (SVM) classifiers trained on selected structures in the PDB, with significantly improved sensitivity over manually encoded heuristics. In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.
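The classification step can be illustrated with a small scikit-learn sketch; the four site descriptors and the synthetic training data below are stand-ins for the curated PDB feature sets actually used, so this shows the technique rather than the authors' pipeline.

```python
# Illustrative SVM classification of atom sites as water vs. specific ions.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # 4 hypothetical site descriptors
y = rng.choice(["HOH", "ZN", "CA"], 200)  # water vs. two ion classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict(X[:5]))
```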
Digital focusing of OCT images based on scalar diffraction theory and information entropy.
Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K
2012-11-01
This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from the out-of-focus plane to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. Using the proposed approach, we demonstrate that lateral resolution close to that at the focal plane can be recovered from imaging planes outside the depth-of-field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.
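An intensity-histogram entropy score of the kind such a criterion is built on can be sketched as follows; this is a generic illustration (the paper's exact definition may differ), and the refocused plane is simulated with random data.

```python
# Generic histogram-entropy image-definition score (illustrative only).
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the intensity histogram, used as a focus score."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# In the full method, candidate propagation distances would be scanned and
# the one whose refocused image optimizes this score would be kept.
plane = np.random.rand(64, 64)  # stand-in for a refocused en face plane
print(image_entropy(plane))
```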
Mane, Vijay Mahadeo; Jadhav, D V
2017-05-24
Diabetic retinopathy (DR) is the most common diabetic eye disease. Doctors use various test methods to detect DR, but the limited availability of test methods and the requirement for domain experts pose a challenge for automatic DR detection. To meet this challenge, a variety of algorithms have been developed in the literature. In this paper, we propose a system consisting of a novel sparking process and a holoentropy-based decision tree for automatic classification of DR images, to further improve effectiveness. The sparking process algorithm is developed for automatic segmentation of blood vessels through the estimation of an optimal threshold. The holoentropy-enabled decision tree is newly developed for automatic classification of retinal images into normal or abnormal, using hybrid features which preserve disease-level patterns better than signal-level features alone. The effectiveness of the proposed system is analyzed using the standard fundus image databases DIARETDB0 and DIARETDB1 in terms of sensitivity, specificity and accuracy. The proposed system yields sensitivity, specificity and accuracy values of 96.72%, 97.01% and 96.45%, respectively. The experimental results reveal that the proposed technique outperforms existing algorithms.
Method for 3D profilometry measurement based on contouring moire fringe
NASA Astrophysics Data System (ADS)
Shi, Zhiwei; Lin, Juhua
2007-12-01
3D shape measurement has recently been one of the most active branches of optical research. A method for 3D profilometry measurement combining the Moire projection method and phase-shifting technology, under SCM (Single Chip Microcomputer) control, is presented in this paper. Automatic measurement of 3D surface profiles can be carried out by applying this method with high speed and high precision.
Galaxy morphology - An unsupervised machine learning approach
NASA Astrophysics Data System (ADS)
Schutter, A.; Shamir, L.
2015-09-01
Structural properties carry valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types, and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated, and the method can be used to analyze user-defined datasets of galaxy images.
A new idea for visualization of lesions distribution in mammogram based on CPD registration method.
Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan
2017-07-20
Mammography is currently the most effective technique for detecting breast cancer. Lesion distribution can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study breast lesion distribution conveniently, and we developed an automatic tool based on this idea which visualizes the lesion distribution in a standard mammogram. First, a lesion database is established; then, breast contours are extracted and different women's mammograms are matched to a standard mammogram; finally, the lesion distribution is shown in the standard mammogram, together with distribution statistics. The crucial step in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift method to match different women's mammograms. We tested our automatic tool on four mass datasets comprising 641 images. The distribution results shown by the tool were consistent with results counted manually from the corresponding reports and mammograms. We also discuss the registration error, which was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool provides lesion distribution results consistent with radiologists' findings simply and conveniently.
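The average-distance registration error reported above reduces to a few lines of NumPy once corresponding contour points are paired; the point arrays below are placeholders.

```python
# Sketch of an average-distance registration error between paired contour
# points after mapping one mammogram onto the standard one (placeholders).
import numpy as np

def mean_registration_error(moved_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    """Mean Euclidean distance between paired points (N x 2 arrays), in mm."""
    return float(np.linalg.norm(moved_pts - ref_pts, axis=1).mean())

moved = np.array([[10.0, 12.1], [20.3, 25.0]])
ref = np.array([[10.5, 12.0], [19.8, 24.2]])
print(f"mean error: {mean_registration_error(moved, ref):.2f} mm")
```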
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
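A hedged sketch of the probability-map step: multiclass linear discriminant analysis on per-voxel feature vectors assembled from the different MR weightings. The channel count, class labels and data are synthetic assumptions.

```python
# Illustrative per-voxel LDA yielding a liver-tissue probability map.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 3))  # 3 MR channels per voxel (assumption)
y_train = rng.integers(0, 3, 500)    # hypothetical labels: liver/other/background

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

volume_voxels = rng.normal(size=(1000, 3))
prob_map = lda.predict_proba(volume_voxels)[:, 0]  # P(class 0, "liver") per voxel
print(prob_map[:5])
```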
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust but showed high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
Determination of the human spine curve based on laser triangulation.
Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija
2015-02-05
The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. A new, noninvasive, optical 3D method for human torso evaluation in medical practice is therefore introduced. Twenty-four patients with a confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds over 700 mm of longitudinal translation along the back. The single-point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves: the first determined by manual marking (manual curve), and the second by detecting surface curvature extremes (automatic curve). The manual and automatic curves were compared via the root mean square deviation (RMSD) for each patient. The intra-operator study assessed 20 successive measurements of the same person, and the inter-operator study assessed measurements from 8 operators. The results for the 24 patients showed that the typical RMSD between the manual and automatic curve was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator assessment shows that the method is invariant to the operator of the computer program. The main novelty of the paper is a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve for patients with developed scoliosis, validated against palpation of the spinous processes, with no harmful ionizing radiation involved.
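The RMSD comparison between the manual and automatic curves is straightforward to compute; a minimal sketch with placeholder curve samples:

```python
# Root-mean-square deviation between two curves sampled at matched points
# along the back, evaluated per anatomical plane (placeholder values).
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

manual_frontal = np.array([1.0, 2.5, 3.1, 2.2])
auto_frontal = np.array([1.4, 2.0, 3.5, 2.0])
print(f"frontal-plane RMSD: {rmsd(manual_frontal, auto_frontal):.2f} mm")
```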
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
NASA Astrophysics Data System (ADS)
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
A novel automatic segmentation workflow of axial breast DCE-MRI
NASA Astrophysics Data System (ADS)
Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra
2018-04-01
In this paper we propose a novel, fully automatic breast tissue segmentation process which is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of the breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm; as a processing step, Otsu thresholding and a neighbours algorithm are applied, and the area external to the breast is removed to obtain an approximate breast region. The second preprocessing step is the delineation of the chest wall, which is treated as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points, and the middle upper point placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm, as sketched below. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. The mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that the proposed process is competitive and faster than existing methods: the segmentation of 120 slices with our method is achieved in 20.57 ± 5.2 s.
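A minimal Dijkstra sketch for such a lowest-cost path search over a per-pixel cost image; the 4-connectivity and the random cost image are simplifying assumptions.

```python
# Dijkstra's algorithm on a pixel grid: find the lowest-cost path between
# two key points of a cost image (illustrative, 4-connected grid).
import heapq
import numpy as np

def dijkstra(cost: np.ndarray, start, goal):
    h, w = cost.shape
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), np.inf):
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:          # walk back from goal to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

cost = np.random.rand(32, 32)
print(len(dijkstra(cost, (0, 0), (31, 31))), "pixels on the lowest-cost path")
```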
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
Song, Dandan; Li, Ning; Liao, Lejian
2015-01-01
By generating enormous amounts of data at lower cost and in shorter time, whole-exome sequencing technologies provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since thousands of genomic variants can be identified in each exome, it is challenging to filter pathogenic variants in protein-coding regions while keeping the number of missed true variants low. We therefore designed an automatic and efficient pipeline for finding disease variants in Mendelian disorders, exploiting a combination of variant-filtering steps to analyze family-based exome sequencing data. Recent studies on the Freeman-Sheldon disease are revisited, showing that the proposed method outperforms other existing candidate-gene identification methods.
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
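As a hedged illustration of the separation technique (not the authors' audio pipeline), scikit-learn's FastICA can unmix two synthetic sources standing in for the alternating notes of a trill:

```python
# FastICA separation of two mixed synthetic sources; the sinusoidal/square
# "notes" and the mixing matrix are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 440 * t)            # "note" 1
s2 = np.sign(np.sin(2 * np.pi * 494 * t))   # "note" 2 (non-Gaussian)
S = np.c_[s1, s2]
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]])  # two mixed observations

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)
print(recovered.shape)  # (4000, 2): the estimated independent sources
```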
NASA Astrophysics Data System (ADS)
Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo
2017-05-01
Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease such as cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for visualization of the cardiac chambers and the coronaries. In clinical practice not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which has a lower radiation dose and does not require the administration of contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would nevertheless be relevant, as it enables population-based studies with only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the segmentation overlap between the automatic method and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes using the CTCS scans correlate excellently with the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
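The Dice similarity coefficient used above is simple to compute from binary masks; a minimal sketch:

```python
# Dice similarity coefficient between an automatic and a reference
# binary segmentation mask (placeholder masks).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[12:42, 12:42] = True
print(f"Dice = {dice(auto, ref):.3f}")
```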
Automated prediction of protein function and detection of functional sites from structure.
Pazos, Florencio; Sternberg, Michael J E
2004-10-12
Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.
Kim, Young Jae; Kim, Kwang Gi
2018-01-01
Existing drusen measurement is difficult to use in the clinic because it requires much time and effort for visual inspection. To resolve this problem, we propose an automatic drusen detection method to aid the clinical diagnosis of age-related macular degeneration. First, we extracted the green channel of the fundus image and extracted the ROI of the macular area based on the optic disk. Next, we detected candidate drusen using the difference image of a median filter within the ROI. We also segmented vessels and removed them from the image. Finally, we detected the drusen with Renyi's entropy thresholding algorithm. We performed comparisons and statistical analysis between the manual and automatic detection results for 30 cases in order to verify validity. The average sensitivity was 93.37% (80.95%~100%) and the average DSC was 0.73 (0.3~0.98). In addition, the ICC was 0.984 (CI: 0.967~0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect automatic drusen detection to help clinicians improve diagnostic performance in the detection of drusen on fundus images.
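A hedged sketch of Rényi-entropy thresholding of the kind named above: pick the gray level maximizing the summed Rényi entropies of the background and foreground distributions (order alpha; the authors' exact variant may differ).

```python
# Rényi-entropy thresholding sketch: H_a(p) = ln(sum p_i^a) / (1 - a).
import numpy as np

def renyi_threshold(img: np.ndarray, alpha: float = 2.0) -> int:
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue  # skip degenerate splits
        hb = np.log(np.sum((p[:t] / pb) ** alpha)) / (1 - alpha)
        hf = np.log(np.sum((p[t:] / pf) ** alpha)) / (1 - alpha)
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t

img = np.random.randint(0, 256, (128, 128))
print("selected threshold:", renyi_threshold(img))
```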
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray-scale transformation and morphological operations. The method has been tested on 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. It has low computational complexity but gives competitive or better results than those of the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
NASA Astrophysics Data System (ADS)
Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto
2012-02-01
Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity of both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method allows automatic alignment of the data sets with accuracy equal to that of medical experts, while requiring significantly less time and exhibiting less variability.
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, which attempted to synergistically combine ASM and GC, was presented at this symposium last year for object delineation in 2D images. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational times on clinical images.
Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.
Spinczyk, Dominik; Krasoń, Agata
2018-05-29
Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria; one important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of advancement were used. The single-atlas segmentation method was used as the simplest shape-based method; it derives the segmentation from a single atlas using deformable free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM) were used, based on mean body shape models. As the most advanced and main method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, in the 10 analyzed cases the Dice coefficient was above 55% for seven test images, of which three were over 70%, placing the method in second place. The best results were obtained for the method based on the generalized statistical distribution of the deformation field, with a Dice coefficient of 88.5%. CONCLUSIONS: This Dice coefficient of 88.5% can be explained by the use of general-purpose shape modeling methods with a large variance of the shape of the modeled object (the liver) and by the limited size of our training data set of 10 cases. The results obtained with the presented fully automatic method are comparable with dedicated methods for liver segmentation. In addition, the deformation features of the model can be modeled mathematically using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller learning set.
Validity of radiographic assessment of the knee joint space using automatic image analysis.
Komatsu, Daigo; Hasegawa, Yukiharu; Kojima, Toshihisa; Seki, Taisuke; Ikeuchi, Kazuma; Takegami, Yasuhiko; Amano, Takafumi; Higuchi, Yoshitoshi; Kasai, Takehiro; Ishiguro, Naoki
2016-09-01
The present study investigated whether there were differences between automatic and manual measurements of the minimum joint space width (mJSW) on knee radiographs. Knee radiographs of 324 participants in a systematic health screening were analyzed using the following three methods: manual measurement of film-based radiographs (Manual), manual measurement of digitized radiographs (Digital), and automatic measurement of digitized radiographs (Auto). The mean mJSWs on the medial and lateral sides of the knees were determined using each method, and measurement reliability was evaluated using intra-class correlation coefficients. Measurement errors were compared between normal knees and knees with radiographic osteoarthritis. All three methods demonstrated good reliability, although the reliability was slightly lower with the Manual method than with the other methods. On the medial and lateral sides of the knees, the mJSWs were the largest in the Manual method and the smallest in the Auto method. The measurement errors of each method were significantly larger for normal knees than for radiographic osteoarthritis knees. The mJSW measurements are more accurate and reliable with the Auto method than with the Manual or Digital method, especially for normal knees. Therefore, the Auto method is ideal for the assessment of the knee joint space.
Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János
2018-01-01
Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during a memory task. In the future, this could form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was provoked by asking the patients to recall the content of two short black-and-white films (one direct, one delayed) and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process, that is, using the ASR-based features in combination with machine learning, was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI for the community. PMID:29165085
[Application of automatic photography in Schistosoma japonicum miracidium hatching experiments].
Ming-Li, Zhou; Ai-Ling, Cai; Xue-Feng, Wang
2016-05-20
To explore the value of automatic photography in the observation of results of Schistosoma japonicum miracidium hatching experiments. Fresh S. japonicum eggs were added to cow feces, and the samples were divided into a low-infestation experimental group and a high-infestation group (40 samples each). In addition, there was a negative control group of 40 samples of cow feces without S. japonicum eggs. Conventional nylon-bag S. japonicum miracidium hatching experiments were performed. The process was observed with a flashlight and magnifying glass combined with automatic video (the automatic photography method) and, at the same time, with the naked-eye observation method, and the results were compared. In the low-infestation group, the miracidium positive detection rates were 57.5% and 85.0% by the naked-eye observation method and the automatic photography method, respectively (χ2 = 11.723, P < 0.05). In the high-infestation group, the positive detection rates were 97.5% and 100%, respectively (χ2 = 1.253, P > 0.05). Across the two infested groups, the average positive detection rates were 77.5% and 92.5%, respectively (χ2 = 6.894, P < 0.05). Automatic photography can effectively improve the positive detection rate in S. japonicum miracidium hatching experiments.
Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA
Djebali, Sarah; Delaplace, Franck; Crollius, Hugues Roest
2006-01-01
Background Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism. Results We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts. Conclusion We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement. PMID:16925841
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. In FER, a novel action unit (AU) space-based FER method is proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms are also discussed.
Automatic QRS complex detection using two-level convolutional neural network.
Xiang, Yande; Lin, Zhitao; Meng, Jianyi
2018-01-29
The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in transform domains. In addition, fixed features and parameters are not suitable for detecting the various kinds of QRS complexes that arise under different circumstances. In this study, an accurate QRS complex detection method based on a two-level 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique containing only a difference operation in the temporal domain is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. Performance is also evaluated under different signal-to-noise ratio (SNR) values. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
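A hedged PyTorch sketch of a two-level 1-D CNN feeding an MLP, with the difference-operation preprocessing; the layer sizes and the 64-sample window are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative two-level 1-D CNN + MLP for per-window QRS classification.
import torch
import torch.nn as nn

class QRSNet(nn.Module):
    def __init__(self, window: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),   # "object-level"
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),  # "part-level"
            nn.MaxPool1d(2),
        )
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * (window // 4), 32), nn.ReLU(),
            nn.Linear(32, 2),  # QRS vs. non-QRS
        )

    def forward(self, x):  # x: (batch, 1, window)
        return self.mlp(self.features(x))

ecg = torch.randn(4, 1, 65)
diff = ecg[:, :, 1:] - ecg[:, :, :-1]  # temporal difference preprocessing
print(QRSNet()(diff).shape)            # torch.Size([4, 2])
```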
Woie, Leik; Måløy, Frode; Eftestøl, Trygve; Engan, Kjersti; Edvardsen, Thor; Kvaløy, Jan Terje; Ørn, Stein
2014-02-01
Current methods for the estimation of infarct size by late-enhancement cardiac magnetic resonance imaging are based upon 2D analysis that first determines the size of the infarction in each slice and then adds the infarct sizes from each slice to generate a volume. We present a novel, automatic 3D method that estimates infarct size by a simultaneous analysis of all pixels from all slices. In a population of 54 patients with ischemic scars, the infarct size estimated by the automatic 3D method was compared with four established 2D methods. The new 3D method defined scar as the sum of all pixels from the complete myocardium with signal intensity (SI) ≥35% of max SI, the border zone as SI 35-50% of max SI, and the core as SI ≥50% of max SI. The 3D method yielded a smaller infarct size (-2.8 ± 2.3%) and core size (-3.0 ± 1.7%) than the 2D method most similar to ours. There was no difference in the size of the border zone (0.2 ± 1.4%). The 3D method demonstrated stronger correlations between scar size and left ventricular (LV) remodelling parameters (LV ejection fraction: r = -0.71, p < 0.0005; LV end-diastolic index: r = 0.54, p < 0.0005; LV end-systolic index: r = 0.59, p < 0.0005) compared with conventional 2D methods. Infarct size estimation by our novel automatic 3D method needs no manual demarcation of the scar, is less time-consuming, and correlates more strongly with remodelling parameters than existing methods.
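The 3D thresholding rule above, applied to the pooled myocardial pixels of all slices at once, can be sketched in NumPy; the arrays are placeholders.

```python
# Simultaneous 3D thresholding of all slices into scar, core and border
# zone, following the 35%/50%-of-max-SI rule (placeholder data and mask).
import numpy as np

si = np.random.rand(3, 128, 128) * 100           # signal intensities, all slices
myocardium = np.random.rand(3, 128, 128) > 0.5   # hypothetical myocardial mask

peak = si[myocardium].max()
scar = myocardium & (si >= 0.35 * peak)          # total scar: SI >= 35% of max
core = myocardium & (si >= 0.50 * peak)          # core: SI >= 50% of max
border = scar & ~core                            # border zone: 35-50% of max
print(scar.sum(), core.sum(), border.sum())
```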
Rapid automatic keyword extraction for information retrieval and analysis
Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]
2012-03-06
Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
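A compact sketch of RAKE-style scoring consistent with the description above: split on stop words to get candidate phrases, score words by co-occurrence degree and frequency, and sum word scores per phrase. The stop-word list is a toy stand-in.

```python
# RAKE-style keyword extraction sketch: phrase candidates from stop-word
# splits, word score = degree / frequency, phrase score = sum of word scores.
import re
from collections import defaultdict

STOP = {"a", "an", "the", "of", "for", "and", "is", "in", "to", "on"}

def rake(text: str):
    words = re.findall(r"[a-z0-9]+", text.lower())
    phrases, cur = [], []
    for w in words:
        if w in STOP:
            if cur:
                phrases.append(cur)
                cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    freq, cooc = defaultdict(int), defaultdict(int)
    for ph in phrases:
        for w in ph:
            freq[w] += 1
            cooc[w] += len(ph) - 1          # co-occurrences within the phrase
    score = {w: (cooc[w] + freq[w]) / freq[w] for w in freq}  # degree / frequency
    return sorted(((sum(score[w] for w in ph), " ".join(ph)) for ph in phrases),
                  reverse=True)

print(rake("Rapid automatic keyword extraction for information retrieval")[:3])
```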
NASA Astrophysics Data System (ADS)
Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi
In this paper 3-D models of combined fixture elements are designed, classified by their functions, and saved in the computer as libraries of supporting elements, jointing elements, basic elements, localization elements, clamping elements, and adjusting elements. Then automatic assembly of a 3-D combined checking fixture for an auto-body part is presented based on modularization theory. In the virtual auto-body assembly space, locating-constraint mapping and assembly rule-based reasoning are used to calculate the positions of the modular elements according to the localization and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system to the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on the secondary development of UG. It is proven in practice that the method in this paper is feasible and highly efficient.
NASA Astrophysics Data System (ADS)
Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
In recent years, much research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation, and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs both in detail and holistically. This integrated eyeball verification procedure can be difficult to formulate as objective criteria, even with a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency coefficient or the Kling-Gupta efficiency as a benchmark during the calibration; consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, is evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without knowing which calibration method produced it. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and can help in the evaluation of the automatic calibration method.
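The two objective criteria named above can be computed directly from observed and simulated runoff series; a minimal sketch with synthetic values (KGE in its 2009 form):

```python
# Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE, 2009)
# for an observed vs. simulated runoff series (synthetic placeholder data).
import numpy as np

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]      # linear correlation
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])
sim = np.array([1.2, 1.8, 3.6, 3.3, 2.4])
print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```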
NASA Astrophysics Data System (ADS)
Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su
2010-02-01
This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI image volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is found, the slices of the volume potentially containing tumor are identified according to their symmetry, and an initial tumor boundary is determined, by watershed and morphological algorithms, in the slice in which the tumor is largest. Second, the level set method is applied to the initial boundary, driving the curve to evolve and stop at the appropriate tumor boundary. Last, the tumor boundary is projected slice by slice onto its adjacent slices as initial boundaries through the volume, yielding the whole tumor. The experimental results are compared with hand tracing by an expert and show relatively good agreement.
Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning
NASA Astrophysics Data System (ADS)
Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun
2016-04-01
This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods, based on traditional pattern recognition, have been applied in a number of cultural heritage mapping projects in Norway for the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. Some false detections appeared in the terrain outside the sediment deposits; these could be explained by other pit-like landscape features, such as parts of river courses, spaces between boulders, and modern terrain modifications. However, these were easy to spot during visual interpretation, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and grave mounds are less distinct in shape than pitfall traps. Still, applying automatic mound detection to an entire municipality led to the discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350-200 years ago. The majority of charcoal kilns were indirectly detected as pits on the circumference, a central mound, or both. However, kilns with a flat interior and a shallow ditch along the circumference were often missed by the automatic detection method. The success of automatic detection seems to depend on two factors: (1) the density of ALS ground hits on the cultural heritage structures being sought, and (2) the extent to which these structures stand out from natural terrain structures. The first factor may, to some extent, be improved by using a higher number of ALS pulses per square meter. The second factor is difficult to change, and also highlights another challenge: how to make a general automatic method that is applicable in all types of terrain within a country. The mixed experience with traditional pattern recognition for semi-automatic mapping of cultural heritage led us to consider deep learning as an alternative approach. The main principle is that a general feature detector is trained on a large image database and then tailored to a specific task using a modest number of images of true and false examples of the features being sought. Results of using deep learning are compared with previous results using traditional pattern recognition.
Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, G. M.
2002-01-01
We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but is directly linked to the OREO detector and moves the grid automatically to minimize the error.
Automatic prediction of protein domains from sequence information using a hybrid learning system.
Nagarajan, Niranjan; Yona, Golan
2004-06-12
We describe a novel method for detecting the domain structure of a protein from sequence information alone. The method is based on analyzing multiple sequence alignments that are derived from a database search. Multiple measures are defined to quantify the domain information content of each position along the sequence and are combined into a single predictor using a neural network. The output is further smoothed and post-processed using a probabilistic model to predict the most likely transition positions between domains. The method was assessed using the domain definitions in SCOP and CATH for proteins of known structure and was compared with several other existing methods. Our method performs well both in terms of accuracy and sensitivity. It improves significantly over the best methods available, even some of the semi-manual ones, while being fully automatic. Our method can also be used to suggest and verify domain partitions based on structural data. A few examples of predicted domain definitions and alternative partitions, as suggested by our method, are also discussed. An online domain-prediction server is available at http://biozon.org/tools/domains/
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). There are many popular LRACMs, each with strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content, and its application to brain tumor segmentation, is presented. A framework is proposed to select one of three LRACMs, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC). Twelve visual features are extracted to properly select the method that may process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than any of the three LRACMs separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
Track circuit diagnosis for railway lines equipped with an automatic block signalling system
NASA Astrophysics Data System (ADS)
Spunei, E.; Piroi, I.; Muscai, C.; Răduca, E.; Piroi, F.
2018-01-01
This work presents a diagnosis method for detecting track circuit failures on a railway traffic line equipped with an Automatic Block Signalling installation. The diagnosis method uses the installation's electrical schemas, from which a series of diagnosis charts have been created. The diagnosis charts were then used to develop a software package, CDCBla, which substantially reduces the diagnosis time and human error during failure remedies. The proposed method can also be used as a training package for maintenance staff. Since the diagnosis method does not need signal or measurement inputs, using it requires no additional IT knowledge, and it can be deployed on a mobile computing device (tablet, smartphone).
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires a large number of training samples. Motivated by this, a novel clustering method is proposed in this paper. The novel method is able to find the correct number of clusters automatically, and its effectiveness is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types from small samples, while the diagnosis results remain relatively accurate even for massive samples.
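The abstract leaves the clustering algorithm itself unspecified; as a minimal stand-in (not the paper's method), the Python sketch below picks the cluster count automatically with a silhouette-score sweep over k-means, applied to synthetic fault-feature vectors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_n_clusters(features, k_range=range(2, 10)):
    """Return the cluster count that maximizes the silhouette score."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        scores[k] = silhouette_score(features, labels)
    return max(scores, key=scores.get)

# toy stand-in for bearing fault features: three simulated fault conditions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(50, 4)) for c in (0.0, 1.0, 2.0)])
print(estimate_n_clusters(X))  # expected: 3
```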
An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems
NASA Astrophysics Data System (ADS)
Davey, A.
1983-08-01
A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10⁶.
Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L
2015-01-01
Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects who were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
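As a rough illustration of the DBSCAN-based idea, the sketch below clusters (time, energy) points of a synthetic vibration trace, so dense clusters mark candidate swallow periods and DBSCAN's noise label marks quiet time. The energy feature and all thresholds here are assumptions, not the paper's preprocessing.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def swallow_activity(signal, fs, eps=0.05, min_samples=25):
    """Cluster (time, energy) points with DBSCAN; dense clusters mark
    candidate swallows, the noise label (-1) marks rest periods."""
    win = max(int(0.05 * fs), 1)
    energy = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    pts = np.column_stack([
        np.arange(len(signal)) / len(signal),      # normalized time
        energy / (energy.max() + 1e-12),           # normalized energy
    ])
    active = pts[:, 1] > 0.1                       # ignore near-silent samples
    labels = np.full(len(signal), -1)
    labels[active] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts[active])
    return labels                                  # -1 = no swallow, k >= 0 = event k

# toy vibration trace: two bursts of activity in background noise
rng = np.random.default_rng(1)
sig = rng.normal(0, 0.05, 4000)
sig[800:1000] += rng.normal(0, 1.0, 200)
sig[2500:2700] += rng.normal(0, 1.0, 200)
print(np.unique(swallow_activity(sig, fs=1000)))   # two event labels expected
```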
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
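RAPTOR's MRF correspondence step does not fit a short sketch, but its companion ingredient, robust projection-model estimation, can be illustrated. The sketch below assumes a simple linear projection p = A·x + t from 3D marker positions to 2D observations and fits it with a Huber loss, so a mistracked fiducial acts as an outlier instead of corrupting the fit; the model and data are illustrative only, not RAPTOR's actual projection model.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_projection(X3d, p2d):
    """Robustly fit p = A @ x + t (A: 2x3, t: 2) with a Huber loss."""
    def residuals(params):
        A, t = params[:6].reshape(2, 3), params[6:]
        return ((X3d @ A.T + t) - p2d).ravel()
    sol = least_squares(residuals, np.zeros(8), loss="huber", f_scale=1.0)
    return sol.x[:6].reshape(2, 3), sol.x[6:]

# synthetic markers: a known projection plus one gross tracking outlier
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
A_true, t_true = rng.normal(size=(2, 3)), np.array([5.0, -3.0])
p = X @ A_true.T + t_true + rng.normal(0, 0.01, size=(30, 2))
p[0] += 40.0                                   # mistracked fiducial
A_fit, t_fit = fit_projection(X, p)
print(np.allclose(A_fit, A_true, atol=0.05), np.allclose(t_fit, t_true, atol=0.05))
```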
NASA Astrophysics Data System (ADS)
Yu, Xin; Dong, Lizhi; Lai, Boheng; Yang, Ping; Wang, Shuai; Wang, Xun; Liu, Yong; Tang, Guomao; Xu, Bing
2017-11-01
In order to solve the problem of large low-order aberrations in solid-state zigzag slab lasers, an automatic compensator is developed in this paper. In this compensator, three lenses are mounted on a motorized rail, whose positions can be obtained using a ray-tracing method based on the beam parameters detected by a wave-front sensor. The initial peak-to-valley (PV) values of the wave-front range up to several tens of microns. Both simulated and experimental results show that the PV values of the wave-front can be reduced to around 1.6 μm with the proposed automatic compensator.
CERES: A new cerebellum lobule segmentation method.
Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V
2017-02-15
The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling, and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation, and treatment site. Results: Five days' worth of clinically qualified portal images was gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were correctly detected, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g., verification of daily patient setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
An automatic graph-based approach for artery/vein classification in retinal images.
Dashtbozorg, Behdad; Mendonça, Ana Maria; Campilho, Aurélio
2014-03-01
The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.
Plexiform neurofibroma tissue classification
NASA Astrophysics Data System (ADS)
Weizman, L.; Hoch, L.; Ben Sira, L.; Joskowicz, L.; Pratt, L.; Constantini, S.; Ben Bashat, D.
2011-03-01
Plexiform Neurofibroma (PN) is a major complication of NeuroFibromatosis-1 (NF1), a common genetic disease that involves the nervous system. PNs are peripheral nerve sheath tumors extending along the length of the nerve in various parts of the body. Treatment decisions are based on tumor volume assessment using MRI, which is currently time consuming and error prone, with limited semi-automatic segmentation support. We present in this paper a new method for the segmentation and tumor mass quantification of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically detects the PN lesions in the entire image based on the tumor connectivity. Experimental results on seven datasets yield a mean volume overlap difference of 25% as compared to manual segmentation by an expert radiologist, with a mean computation and interaction time of 12 minutes vs. over an hour for manual annotation. Since the user interaction in the segmentation process is minimal, our method has the potential to successfully become part of the clinical workflow.
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
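The paper's pedestal-surface registration is not detailed in the abstract; as a generic building block, the sketch below uses the closed-form Kabsch/SVD solution to compute a rigid patient-to-image transform from corresponding fiducial centers. The marker coordinates are synthetic and the setup hypothetical.

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# four synthetic fiducial centers seen in patient space vs. image space
rng = np.random.default_rng(3)
patient = rng.normal(size=(4, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
image = patient @ R_true.T + np.array([10.0, -2.0, 5.0])
R, t = rigid_register(patient, image)
tre = np.linalg.norm(patient @ R.T + t - image, axis=1).mean()
print(f"mean target registration error: {tre:.2e}")
```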
Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice
2017-02-24
Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
Automatic Spatio-Temporal Flow Velocity Measurement in Small Rivers Using Thermal Image Sequences
NASA Astrophysics Data System (ADS)
Lin, D.; Eltner, A.; Sardemann, H.; Maas, H.-G.
2018-05-01
An automatic spatio-temporal flow velocity measurement approach, using an uncooled thermal camera, is proposed in this paper. The basic principle of the method is to track visible thermal features at the water surface in thermal camera image sequences. Radiometric and geometric calibrations are firstly implemented to remove vignetting effects in thermal imagery and to get the interior orientation parameters of the camera. An object-based unsupervised classification approach is then applied to detect the interest regions for data referencing and thermal feature tracking. Subsequently, GCPs are extracted to orient the river image sequences and local hot points are identified as tracking features. Afterwards, accurate dense tracking outputs are obtained using pyramidal Lucas-Kanade method. To validate the accuracy potential of the method, measurements obtained from thermal feature tracking are compared with reference measurements taken by a propeller gauge. Results show a great potential of automatic flow velocity measurement in small rivers using imagery from a thermal camera.
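A minimal sketch of the tracking step, assuming OpenCV, two uint8 grayscale frames, and a known ground sampling distance from the geometric calibration (the calibration and classification stages are omitted):

```python
import cv2
import numpy as np

def surface_velocities(prev_gray, next_gray, gsd_m, dt_s):
    """Track thermal surface features between two uint8 frames with pyramidal
    Lucas-Kanade and convert pixel displacements to speeds in m/s."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=5)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    disp_px = np.linalg.norm((nxt[ok] - pts[ok]).reshape(-1, 2), axis=1)
    return disp_px * gsd_m / dt_s

# synthetic pair: a textured frame shifted 3 px to the right between frames
rng = np.random.default_rng(4)
f0 = (rng.random((240, 320)) * 255).astype(np.uint8)
f0 = cv2.GaussianBlur(f0, (7, 7), 0)
f1 = np.roll(f0, 3, axis=1)
v = surface_velocities(f0, f1, gsd_m=0.01, dt_s=0.04)   # 1 cm/px, 25 fps
print(f"median speed: {np.median(v):.2f} m/s")          # ~ 0.75 m/s
```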
Integration of tools for binding archetypes to SNOMED CT.
Sundvall, Erik; Qamar, Rahil; Nyström, Mikael; Forss, Mattias; Petersson, Håkan; Karlsson, Daniel; Ahlfeldt, Hans; Rector, Alan
2008-10-27
The Archetype formalism and the associated Archetype Definition Language have been proposed as an ISO standard for specifying models of components of electronic healthcare records as a means of achieving interoperability between clinical systems. This paper presents an archetype editor with support for manual or semi-automatic creation of bindings between archetypes and terminology systems. Lexical and semantic methods are applied in order to obtain automatic mapping suggestions. Information visualisation methods are also used to assist the user in exploration and selection of mappings. An integrated tool for archetype authoring, semi-automatic SNOMED CT terminology binding assistance and terminology visualization was created and released as open source. Finding the right terms to bind is a difficult task but the effort to achieve terminology bindings may be reduced with the help of the described approach. The methods and tools presented are general, but here only bindings between SNOMED CT and archetypes based on the openEHR reference model are presented in detail.
A preliminary study of DTI Fingerprinting on stroke analysis.
Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo
2014-01-01
DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, quantitative measurement of the physiological variation of ischemic stroke subtypes is not available. An automatic, quantitative method for DTI analysis would enhance the DTI application in clinics. In this study, we proposed a DTI Fingerprinting technology to quantitatively analyze white matter tissue, which was applied to stroke classification. The TBSS (Tract Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, lesion ROIs (Regions of Interest) were manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. The comparison indicates that DTI Fingerprinting could identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of the DTI data. Further development should be carried out to improve DTI Fingerprinting technology for clinical use.
The Automatic Recognition of the Abnormal Sky-subtraction Spectra Based on Hadoop
NASA Astrophysics Data System (ADS)
An, An; Pan, Jingchang
2017-10-01
Skylines superimpose on the target spectrum as a main source of noise. If the spectrum still contains a large number of high-strength skylight residuals after sky-subtraction processing, follow-up analysis of the target spectrum is hindered. At the same time, LAMOST can observe a large quantity of spectroscopic data every night, so an efficient platform is needed to recognize the large numbers of abnormal sky-subtraction spectra quickly. Hadoop, as a distributed parallel data computing platform, can deal with large amounts of data effectively. In this paper, we first conduct continuum normalization and then present a simple and effective method to automatically recognize abnormal sky-subtraction spectra on the Hadoop platform. Experiments show that the Hadoop platform can implement the recognition with greater speed and efficiency, and that the simple method can effectively recognize abnormal sky-subtraction spectra and find the abnormal skyline positions of different residual strengths; it can be applied to the automatic detection of abnormal sky-subtraction in large numbers of spectra.
Characterizing artifacts in RR stress test time series.
Astudillo-Salinas, Fabian; Palacio-Baus, Kenneth; Solano-Quinde, Lizandro; Medina, Ruben; Wong, Sara
2016-08-01
Electrocardiographic stress test records contain many artifacts. In this paper we explore a simple method to characterize the amount of artifacts present in unprocessed RR stress test time series. Four time series classes were defined: Very good lead, Good lead, Low quality lead and Useless lead. Sixty-five eight-lead ECG stress test records were analyzed. Firstly, RR-time series were annotated by two experts. The automatic methodology is based on dividing the RR-time series into non-overlapping windows. Each window is marked as noisy whenever it exceeds an established standard deviation threshold (SDT). Series are classified according to the percentage of windows that exceed a given value, based upon the first manual annotation. Different SDTs were explored. Results show that an SDT close to 20% (as a percentage of the mean) provides the best results. The coincidence between the annotators' classifications is 70.77%, whereas the coincidence between the second annotator and the best-matching automatic method is larger than 63%. Leads classified as Very good leads and Good leads could be combined to improve automatic heartbeat labeling.
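A minimal sketch of the windowed-SD rule, assuming illustrative grade cut points (the abstract reports the 20% SDT but not the class boundaries, so the cuts below are hypothetical):

```python
import numpy as np

GRADES = ["Very good lead", "Good lead", "Low quality lead", "Useless lead"]

def grade_rr_series(rr, win=30, sd_pct=20.0, cuts=(5.0, 20.0, 50.0)):
    """Mark each non-overlapping window noisy if its SD exceeds sd_pct% of the
    series mean, then grade the series by the fraction of noisy windows.
    The cut points in `cuts` are illustrative, not from the paper."""
    rr = np.asarray(rr, dtype=float)
    thr = sd_pct / 100.0 * rr.mean()
    n = len(rr) // win
    noisy = sum(rr[i * win:(i + 1) * win].std() > thr for i in range(n))
    pct_noisy = 100.0 * noisy / max(n, 1)
    return GRADES[int(sum(pct_noisy > c for c in cuts))], pct_noisy

rng = np.random.default_rng(5)
rr = rng.normal(800, 30, 600)          # ms; a fairly clean series
rr[300:360] += rng.normal(0, 400, 60)  # a burst of artifacts
print(grade_rr_series(rr))             # -> ('Good lead', 10.0)
```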
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Multi-class SVM model for fMRI-based classification and grading of liver fibrosis
NASA Astrophysics Data System (ADS)
Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.
2010-03-01
We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a binary-based one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded a 98.2% separation accuracy between healthy and low grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than the existing image-based methods, which can only discriminate between healthy and high grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
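A toy sketch of the one-against-all SVM grading with leave-one-out evaluation; the feature matrix is a synthetic stand-in for the hemodynamic features, and the grade coding is hypothetical.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# toy stand-in for per-slice hemodynamic features and fibrosis grades (0-3)
rng = np.random.default_rng(6)
grades = rng.integers(0, 4, size=57)
X = rng.normal(size=(57, 12)) + grades[:, None]    # grade-dependent shift

clf = OneVsRestClassifier(SVC(kernel="rbf"))       # binary one-against-all SVMs
acc = cross_val_score(clf, X, grades, cv=LeaveOneOut()).mean()
print(f"leave-one-out grading accuracy: {acc:.2f}")
```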
Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.
Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi
2018-06-01
A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image to a probabilistic map that indicates where virus particles exist in the image. Our proposed approach automatically and simultaneously learns both discriminative features and a classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting virus particles was demonstrated. Since our method is based on supervised learning that requires both the input images and their corresponding annotations, it is basically used for detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.
RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.
Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor
2013-07-01
This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
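The flavor of SPARQL access to an RDF-ized microarray store can be sketched with rdflib; the namespace and properties below are hypothetical, not RDFBuilder's actual MAGE-OM vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/mage/")   # hypothetical namespace
g = Graph()
g.add((EX.exp1, RDF.type, EX.Experiment))
g.add((EX.exp1, EX.organism, Literal("Homo sapiens")))
g.add((EX.exp2, RDF.type, EX.Experiment))
g.add((EX.exp2, EX.organism, Literal("Mus musculus")))

# retrieve experiments for one organism, the kind of query a SPARQL
# endpoint over transformed MAGE-ML data would serve
q = """
PREFIX ex: <http://example.org/mage/>
SELECT ?e WHERE { ?e a ex:Experiment ; ex:organism "Homo sapiens" . }
"""
for row in g.query(q):
    print(row.e)   # -> http://example.org/mage/exp1
```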
Digital focusing of OCT images based on scalar diffraction theory and information entropy
Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K.
2012-01-01
This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method. PMID:23162717
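A compact sketch of the two ingredients, scalar-diffraction (angular-spectrum) propagation and an entropy focus criterion; the exact entropy definition and the optical sampling below are assumptions, not the paper's parameters.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular-spectrum propagation of a complex field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    kz = 2.0 * np.pi / wavelength * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def entropy(intensity, bins=64):
    """Shannon entropy of the intensity histogram; sharper image -> lower entropy."""
    p, _ = np.histogram(intensity, bins=bins)
    p = p[p > 0].astype(float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def autofocus(field, wavelength, dx, z_candidates):
    """Digital refocusing: pick the distance that minimizes image entropy."""
    scores = [entropy(np.abs(propagate(field, wavelength, dx, z)) ** 2)
              for z in z_candidates]
    return z_candidates[int(np.argmin(scores))]

# defocus a point scatterer by 200 um, then recover the focal distance
wl, dx = 1.3e-6, 5e-6
u0 = np.zeros((128, 128), dtype=complex); u0[64, 64] = 1.0
blurred = propagate(u0, wl, dx, 200e-6)
zs = np.linspace(-400e-6, 0.0, 41)
print(autofocus(blurred, wl, dx, zs))   # ~ -200e-6
```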
Automatic pickup of arrival time of channel wave based on multi-channel constraints
NASA Astrophysics Data System (ADS)
Wang, Bao-Li
2018-03-01
Accurately detecting the arrival time of a channel wave in a coal seam is very important for in-seam seismic data processing. The arrival time greatly affects the accuracy of the channel wave inversion and the computed tomography (CT) result. However, because the signal-to-noise ratio of in-seam seismic data is reduced by the long wavelength and strong frequency dispersion, accurately timing the arrival of channel waves is extremely difficult. For this purpose, we propose a method that automatically picks up the arrival time of channel waves based on multi-channel constraints. We first estimate the Jaccard similarity coefficient of two ray paths, then apply it as a weight coefficient for stacking the multichannel dispersion spectra. The reasonableness and effectiveness of the proposed method is verified in an actual data application. Most importantly, the method increases the degree of automation and the pickup precision of the channel-wave arrival time.
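A minimal sketch of the weighting idea, assuming ray paths are represented as sets of traversed grid cells (an assumption; the computation of the dispersion spectra themselves is omitted):

```python
import numpy as np

def jaccard(path_a, path_b):
    """Jaccard similarity of two ray paths given as collections of grid cells."""
    a, b = set(path_a), set(path_b)
    return len(a & b) / len(a | b)

def stack_dispersion(spectra, paths, ref):
    """Weighted stack of dispersion spectra: each neighboring trace contributes
    in proportion to how much its ray path overlaps the reference path."""
    w = np.array([jaccard(paths[ref], p) for p in paths])
    return np.tensordot(w, np.asarray(spectra, dtype=float), axes=1) / w.sum()

# three ray paths over a cell grid and their (toy) dispersion spectra
paths = [[(0, 0), (0, 1), (0, 2)],
         [(0, 0), (0, 1), (1, 2)],
         [(3, 0), (3, 1), (3, 2)]]
spectra = np.random.default_rng(7).random((3, 50, 40))   # (trace, freq, velocity)
stacked = stack_dispersion(spectra, paths, ref=0)
print(stacked.shape)   # (50, 40)
```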
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise, and high variation in colony features across experiments demand a user-friendly, adaptive, and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
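AutoCellSeg itself is MATLAB; the Python sketch below only illustrates the core recipe of multi-thresholding plus a distance-transform-seeded watershed (the tool's feedback-based plausibility checks and interactive feature selection are omitted).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.draw import disk
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_multiotsu
from skimage.segmentation import watershed

def segment_colonies(image):
    """Foreground via multi-Otsu thresholding, then a distance-transform-seeded
    watershed to split touching colonies."""
    t = threshold_multiotsu(image, classes=3)
    mask = image > t[-1]                              # brightest class = colonies
    dist = ndi.distance_transform_edt(mask)
    seeds = peak_local_max(dist, min_distance=10, labels=mask)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    return watershed(-dist, markers, mask=mask)

# synthetic plate: two overlapping bright colonies on a dim background
img = np.full((100, 100), 0.1)
for center in [(45, 40), (55, 60)]:
    rr, cc = disk(center, 15)
    img[rr, cc] = 0.9
img = gaussian(img, 2) + np.random.default_rng(8).normal(0, 0.02, img.shape)
labels = segment_colonies(img)
print(labels.max())   # expected: 2 colonies
```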
NASA Astrophysics Data System (ADS)
Feng, Min-nan; Wang, Yu-cong; Wang, Hao; Liu, Guo-quan; Xue, Wei-hua
2017-03-01
Using a total of 297 segmented sections, we reconstructed the three-dimensional (3D) structure of pure iron and obtained the largest dataset reported to date, comprising 16,254 complete 3D grains. The mean values of the equivalent sphere radius and face number of pure iron were observed to be consistent with those of Monte Carlo simulated grains, phase-field simulated grains, Ti-alloy grains, and Ni-based superalloy grains. In this work, by finding a balance between automatic methods and manual refinement, we developed an interactive segmentation method to segment serial sections accurately in the reconstruction of the 3D microstructure; this approach saves time and substantially reduces errors. The segmentation process comprises four operations: image preprocessing, breakpoint detection based on mathematical morphology analysis, optimized automatic connection of the breakpoints, and manual refinement by artificial evaluation.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao
2018-02-01
Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
Improving Cluster Analysis with Automatic Variable Selection Based on Trees
2014-12-01
Abbreviations: CART, classification and regression trees; Daisy, DISsimilAritY; PAM, partitioning around medoids; PMA, penalized multivariate analysis; SPC, sparse principal components; UPGMA, unweighted pair-group average method. The UPGMA method measures dissimilarities between all objects in two clusters and takes the average value.
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
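The reported agreement statistics can be reproduced mechanically; the sketch below computes Cohen's kappa at the 4-digit and broader 1-digit SOC levels on toy, hypothetical codes.

```python
from sklearn.metrics import cohen_kappa_score

# toy 4-digit SOC codes: OSCAR-assigned vs. expert manual coding (hypothetical)
oscar  = ["2314", "5241", "2314", "9233", "3537", "2314"]
expert = ["2314", "5249", "2315", "9233", "3537", "2314"]

print(cohen_kappa_score(oscar, expert))              # 4-digit agreement
print(cohen_kappa_score([c[0] for c in oscar],
                        [c[0] for c in expert]))     # broader 1-digit level
```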
Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar
2018-06-25
The objective of the work described in this paper is development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on signal-derived Empirical Mode Decomposition (EMD)-based dictionary approach is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed as the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using support vector machine classifier and five-fold cross-validation method. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, the approach is suitable for scenarios where the dictionary once formed and trained can be used for automatic seizure detection of newly recorded data, making the approach suitable for long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.
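A toy sketch of the projection-coefficient features: the dictionary atoms below are stand-in oscillations rather than learned IMFs (the empirical mode decomposition and dictionary learning steps are omitted), and the coefficients feed an SVM as in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def projection_features(signals, dictionary):
    """Features = least-squares coefficients of each signal against the
    dictionary atoms (rows of `dictionary`), i.e., projection coefficients."""
    coeffs, *_ = np.linalg.lstsq(dictionary.T, signals.T, rcond=None)
    return coeffs.T

# stand-in dictionary: a few oscillatory atoms (in the paper these would be
# intrinsic mode functions obtained by EMD and then pruned by learning)
t = np.linspace(0, 1, 256)
D = np.vstack([np.sin(2 * np.pi * f * t) for f in (3, 9, 27, 81)])

rng = np.random.default_rng(9)
seizure = 3 * D[2] + rng.normal(0, 0.3, (40, 256))   # strong mid-band content
normal = 3 * D[0] + rng.normal(0, 0.3, (40, 256))    # low-frequency content
X = projection_features(np.vstack([seizure, normal]), D)
y = np.array([1] * 40 + [0] * 40)
clf = SVC().fit(X[::2], y[::2])                      # train on even rows
print(clf.score(X[1::2], y[1::2]))                   # test on odd rows
```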
NASA Astrophysics Data System (ADS)
Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu
2018-03-01
In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises mainly used manual inspection methods to determine the quality of assembly, and cases of misjudgment have occurred. For this reason, research on the standard was carried out, and the automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable for an agglutinative language, specifically Korean. Firstly, we present the construction conditions for composing compound words as indexing terms. We also present the decomposition rules applicable to consecutive nouns to extract the full contents of a text. Finally, we propose a measure to estimate the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on information-theoretic notions. By applying this method, our system has raised the precision rate of compound words from 72% to 87%.
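A minimal sketch of the mutual-information score on a toy English corpus (the construction and decomposition rules for Korean compounds are not reproduced):

```python
import math
from collections import Counter

def pmi(pair, bigrams, unigrams):
    """Pointwise mutual information log2(p(x,y) / (p(x) p(y))) of a word pair;
    high values suggest the pair associates strongly into a compound term."""
    n_bi, n_uni = sum(bigrams.values()), sum(unigrams.values())
    p_xy = bigrams[pair] / n_bi
    p_x, p_y = unigrams[pair[0]] / n_uni, unigrams[pair[1]] / n_uni
    return math.log2(p_xy / (p_x * p_y))

tokens = ("information retrieval system information retrieval model "
          "data system model data").split()
unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
print(pmi(("information", "retrieval"), bigrams, unigrams))  # strongly associated
print(pmi(("system", "information"), bigrams, unigrams))     # weakly associated
```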
Automatic control system of high precision welding of workpieces in mechanical engineering
NASA Astrophysics Data System (ADS)
Kuznetsov, I. N.; Zvezdin, V. V.; Israfilov, I. H.; Portnov, S. M.
2014-12-01
In this paper, based on the conducted patent research, a system for laser welding control with different weld geometries and part shapes is developed. A method of monitoring the position of the laser spot relative to the curved weld is worked out; it is based on tracking the edges of the welded parts using low-power laser radiation reflected from the surface of the parts. This allows the laser focus to be positioned automatically relative to the juncture of the welded parts.
User-guided segmentation for volumetric retinal optical coherence tomography images
Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.
2014-01-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases is 89.6%±15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
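A minimal sketch of the tri-exponential fit with Levenberg-Marquardt via scipy's curve_fit; the model parameterization, sampling, and goodness-of-fit ranking below are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    """An assumed tri-exponential AIF form: sum of three decaying exponentials."""
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

t = np.linspace(0.05, 5.0, 120)                     # minutes (toy sampling)
truth = (5.0, 4.0, 2.0, 0.8, 1.0, 0.05)
y = tri_exp(t, *truth) + np.random.default_rng(10).normal(0, 0.02, t.size)

p0 = (1.0, 1.0, 1.0, 0.1, 1.0, 0.01)                # rough initial guess
popt, _ = curve_fit(tri_exp, t, y, p0=p0, method="lm", maxfev=20000)
rss = float(np.sum((tri_exp(t, *popt) - y) ** 2))   # goodness of fit
print(popt.round(2), f"RSS={rss:.4f}")              # best-fitting candidate wins
```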
WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Lian, J; Chen, R
Purpose: We aim to develop a fully automatic tool for accurate contouring of major male pelvic organs in CT images for radiotherapy without any manual initialization, yet still achieving superior performance over existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model towards the organ boundary based on the local image appearance around the location of this vertex. The predicted 3D displacements are then used to drive the 3D shape model towards the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guideline for the organ boundary, a precise contouring result can be achieved by deforming the 3D shape model locally to fit boundaries in the organ likelihood map. Results: We applied our method to 29 previously-treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2%±3.74% for the prostate, 94.9%±1.62% for the bladder, and 84.7%±1.97% for the rectum, respectively. Conclusion: This work demonstrated the feasibility of a novel machine-learning based approach for accurate and automatic contouring of major male pelvic organs. It shows the potential to replace the time-consuming and inconsistent manual contouring in the clinic. Also, compared with existing works, our method is more accurate and also efficient, since it does not require any manual intervention, such as manual landmark placement. Moreover, our method obtained contouring results very similar to those of the clinical experts. This project is partially supported by a grant from NCI (1R01CA140413).
An image-based approach for automatic detecting five true-leaves stage of cotton
NASA Astrophysics Data System (ADS)
Li, Yanan; Cao, Zhiguo; Wu, Xi; Yu, Zhenghong; Wang, Yu; Bai, Xiaodong
2013-10-01
Cotton, as one of the four major economic crops, is of great significance to the development of the national economy. Monitoring cotton growth status by automatic image-based detection makes sense due to its low cost, low labor requirements, and capability for continuous observation. However, little research has been done on improving close observation of the different growth stages of field crops using digital cameras. Therefore, we developed algorithms to detect growth information and predict the starting dates of growth stages of cotton automatically. In this paper, we introduce an approach for automatically detecting the five-true-leaves stage, which is a critical growth stage of cotton. Because of the drawbacks caused by illumination and complex backgrounds, global coverage cannot be used as the sole criterion of judgment. Consequently, we propose a new method to determine the five-true-leaves stage by detecting the number of nodes between the main stem and the side stems, based on the agricultural meteorological observation specification. The error between the starting date predicted by the proposed algorithm and artificial observations is restricted to no more than one day.
An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés
2015-10-29
The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on the direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis depends on a prior accurate identification of the glottal gap, which is the most challenging step for further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest that is adapted along time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed. Thanks to the ROI implementation, our technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when an inappropriate closure of the vocal folds exists. The novelties of the proposed algorithm rely on the use of temporal information to identify an adaptive ROI and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs by identification of the glottal main axis is developed.
ERIC Educational Resources Information Center
Chounta, Irene-Angelica; Avouris, Nikolaos
2016-01-01
This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…
Névéol, Aurélie; Pereira, Suzanne; Kerdelhué, Gaetan; Dahamna, Badisse; Joubert, Michel; Darmoni, Stéfan J
2007-01-01
The growing number of resources to be indexed in the catalogue of online health resources in French (CISMeF) calls for curating strategies involving automatic indexing tools while maintaining the catalogue's high indexing quality standards. Our aim was to develop a simple automatic tool that retrieves MeSH descriptors from document titles. In parallel to research on advanced indexing methods, a bag-of-words tool was developed for timely inclusion in CISMeF's maintenance system. An evaluation was carried out on a corpus of 99 documents. The indexing sets retrieved by the automatic tool were compared to manual indexing based on the title and on the full text of resources. 58% of the major main headings were retrieved by the bag-of-words algorithm, and the precision of main heading retrieval was 69%. Bag-of-words indexing has effectively been used on selected resources to be included in CISMeF since August 2006. Meanwhile, ongoing work aims at improving the current version of the tool.
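A toy sketch of bag-of-words title indexing against a hypothetical miniature MeSH lexicon (English terms for readability; CISMeF works with the French MeSH):

```python
import re

# hypothetical miniature descriptor lexicon: MeSH heading -> entry terms
MESH = {
    "Asthma": {"asthma", "asthmatic"},
    "Therapeutics": {"therapy", "therapeutics", "treatment"},
    "Child": {"child", "children"},
}

def index_title(title):
    """Bag-of-words indexing: return headings whose entry terms occur in the title."""
    words = set(re.findall(r"[a-zà-ÿ]+", title.lower()))
    return sorted(h for h, terms in MESH.items() if words & terms)

print(index_title("Treatment of asthma in children"))
# -> ['Asthma', 'Child', 'Therapeutics']
```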
Towards an automatic lab-on-valve-ion mobility spectrometric system for detection of cocaine abuse.
Cocovi-Solberg, David J; Esteve-Turrillas, Francesc A; Armenta, Sergio; de la Guardia, Miguel; Miró, Manuel
2017-08-25
A lab-on-valve miniaturized system integrating on-line disposable micro-solid phase extraction has been interfaced with ion mobility spectrometry for the accurate and sensitive determination of cocaine and ecgonine methyl ester in oral fluids. The method is based on the automatic loading of 500 μL of oral fluid along with the retention of target analytes and matrix clean-up by mixed-mode cationic/reversed-phase solid phase beads, followed by elution with 100 μL of 2-propanol containing ammonia (3% v/v), which is injected online into the IMS. The sorptive particles are automatically discarded after every individual assay inasmuch as the sorptive capacity of the sorbent material is proven to deteriorate dramatically with reuse. The method provided limits of detection of 0.3 and 0.14 μg L−1 for cocaine and ecgonine methyl ester, respectively, with relative standard deviation values from 8 to 14% and a total analysis time per sample of 7.5 min. Method trueness was evaluated by analyzing oral fluid samples spiked with cocaine at different concentration levels (1, 5, and 25 μg L−1), affording relative recoveries within the range of 85±24%. Fifteen saliva samples were collected from volunteers and analysed following the proposed automatic procedure, showing a 40% cocaine occurrence with concentrations ranging from 1.3 to 97 μg L−1. Field saliva samples were also analysed by reference methods based on lateral flow immunoassay and gas chromatography-mass spectrometry. The application of this procedure to the control of oral fluids of cocaine consumers represents a step forward towards the development of a point-of-care cocaine abuse sensing system. Copyright © 2017 Elsevier B.V. All rights reserved.
A Review on Automatic Mammographic Density and Parenchymal Segmentation
He, Wenda; Juette, Arne; Denton, Erika R. E.; Oliver, Arnau
2015-01-01
Breast cancer is the most frequently diagnosed cancer in women. However, the exact cause(s) of breast cancer still remains unknown. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective way to tackle breast cancer. There are more than 70 common genetic susceptibility factors included in the current non-image-based risk prediction models (e.g., the Gail and the Tyrer-Cuzick models). Image-based risk factors, such as mammographic densities and parenchymal patterns, have been established as biomarkers but have not been fully incorporated in the risk prediction models used for risk stratification in screening and/or measuring responsiveness to preventive approaches. Within computer aided mammography, automatic mammographic tissue segmentation methods have been developed for estimation of breast tissue composition to facilitate mammographic risk assessment. This paper presents a comprehensive review of automatic mammographic tissue segmentation methodologies developed over the past two decades and the evidence for risk assessment/density classification using segmentation. The aim of this review is to analyse how engineering advances have progressed and the impact automatic mammographic tissue segmentation has in a clinical environment, as well as to understand the current research gaps with respect to the incorporation of image-based risk factors in non-image-based risk prediction models. PMID:26171249
Automated coronary artery calcification detection on low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
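For reference, a sketch of standard per-slice Agatston scoring (lesion area times a peak-HU density weight); note the paper uses a 180 HU candidate threshold for low-dose un-gated scans, while the classic weighting below assumes the usual 130 HU bands. The helper and toy data are illustrative only.

```python
import numpy as np
from scipy import ndimage as ndi

def agatston_slice(hu, pixel_area_mm2, hu_thresh=130.0, min_area_mm2=1.0):
    """Agatston score for one axial slice: each connected lesion above the HU
    threshold contributes area (mm^2) times a peak-HU density weight (1-4)."""
    def weight(peak):
        return 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    labels, n = ndi.label(hu >= hu_thresh)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area >= min_area_mm2:                  # ignore single-voxel noise
            score += area * weight(hu[lesion].max())
    return score

# toy slice: one 350-HU lesion on a 40-HU soft-tissue background
hu = np.full((64, 64), 40.0)
hu[30:34, 30:34] = 350.0
print(agatston_slice(hu, pixel_area_mm2=0.49))    # 16 px * 0.49 mm^2 * weight 3
```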
Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach
NASA Astrophysics Data System (ADS)
Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil
2016-04-01
In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03 ± 0.6 mm). The AV plane was detected with an accuracy of -0.6 ± 1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean ± 1.96 SD): 0.4 ± 5.3 ml, 2.1 ± 12.6 ml, and 1.5 ± 7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
Brain connectomics imaging in schizophrenia study
NASA Astrophysics Data System (ADS)
Tseng, Wen-Yih Isaac; Chen, Yu-Jen; Hsu, Yung-Chin
2017-04-01
Schizophrenia is a debilitating mental disorder whose biological underpinning is still unclear. Increasing evidence in neuroscience has indicated that schizophrenia arises from abnormal connections within or between networks, and it has hence been called a dysconnectivity syndrome. Recently, we established an automatic method to analyze the integrity of white matter tracts over the whole brain based on diffusion MRI data, named tract-based automatic analysis (TBAA), and used this method to study white matter connections in patients with schizophrenia. We found that alteration of tract integrity is hereditary and inherent; it is found in siblings and in patients in the early phase of the disease. Moreover, patients with good treatment outcomes and those with poor outcomes show distinctly different patterns of alterations, suggesting that these two groups of patients might be distinguishable based on the difference in tract alteration. In summary, the altered tracts revealed by TBAA might become potential biomarkers or trait markers for schizophrenia.
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction on FPGA are proposed. The main principles are as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed within one pixel period by these computing units.
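For reference, a compact software version of the HOG front end sketched above (centred gradients, unsigned orientations, per-cell magnitude-weighted histograms) can be written in a few lines of NumPy; the 8-pixel cells and 9 orientation bins are conventional choices, not parameters taken from the paper.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Minimal HOG front end: gradients plus per-cell orientation histograms."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]       # centred horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]       # centred vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation, [0, 180)

    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    bin_idx = (ang / (180 / bins)).astype(int) % bins
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # accumulate gradient magnitude into the cell's orientation bins
            np.add.at(hists[i, j], bin_idx[sl].ravel(), mag[sl].ravel())
    return hists
```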
Different Manhattan project: automatic statistical model generation
NASA Astrophysics Data System (ADS)
Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore
2002-03-01
We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
NASA Astrophysics Data System (ADS)
Lasaponara, Rosa; Masini, Nicola
2018-06-01
The identification and quantification of disturbance at archaeological sites has generally been approached by visual inspection of optical aerial or satellite pictures. In this paper, we briefly summarize the state of the art of traditional satellite-based approaches to looting identification and propose a new automatic archaeological looting feature extraction approach (ALFEA). It is based on three steps: enhancement using spatial autocorrelation, unsupervised classification, and segmentation. ALFEA has been applied to Google Earth images of two test areas, selected in desert environs in Syria (Dura Europos) and in Peru (Cahuachi-Nasca). The reliability of ALFEA was assessed through field surveys in Peru and visual inspection for the Syrian case study. Results from the evaluation procedure showed satisfactory performance for both analysed test cases, with a success rate higher than 90%.
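A minimal sketch of that three-step pipeline on a single grayscale band might look as follows; the 3x3 Moran-style enhancement window, the number of clusters, and the assumption that looting pits fall into the cluster with the strongest enhanced response are all illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def alfea_like(gray, k=3):
    """Sketch of the three ALFEA stages on a grayscale satellite image."""
    z = (gray - gray.mean()) / gray.std()
    # 1) enhancement: a local spatial-autocorrelation map (centre z-score
    #    times the mean z-score of its 3x3 neighbourhood, Moran-style)
    enhanced = z * ndimage.uniform_filter(z, size=3)
    # 2) unsupervised classification of the enhanced values with k-means
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(enhanced.reshape(-1, 1))
    label_img = labels.reshape(gray.shape)
    # 3) segmentation: connected components of the assumed "pit-like" cluster
    flat = enhanced.ravel()
    target_cluster = int(np.argmax([flat[labels == c].mean() for c in range(k)]))
    segments, n = ndimage.label(label_img == target_cluster)
    return segments, n
```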
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, AS; Piper, J; Curry, K
2015-06-15
Purpose: Prostate MRI plays an important role in diagnosis, biopsy guidance, and therapy planning for prostate cancer. Prostate MRI contours can be used to aid in image fusion for ultrasound biopsy guidance and delivery of radiation. Our goal in this study is to evaluate an automatic atlas-based segmentation method for generating prostate and peripheral zone (PZ) contours on MRI. Methods: T2-weighted MRIs were acquired on 3T-Discovery MR750 System (GE, Milwaukee). The Volumes of Interest (VOIs): prostate and PZ were outlined by an expert radiation oncologist and used to create an atlas library for atlas-based segmentation. The atlas-segmentation accuracy was evaluated using a leave-one-out analysis. The method involved automatically finding the atlas subject that best matched the test subject followed by a normalized intensity-based free-form deformable registration of the atlas subject to the test subject. The prostate and PZ contours were transformed to the test subject using the same deformation. For each test subject the three best matches were used and the final contour was combined using Majority Vote. The atlas-segmentation process was fully automatic. Dice similarity coefficients (DSC) and mean Hausdorff values were used for comparison. Results: VOIs contours were available for 28 subjects. For the prostate, the atlas-based segmentation method resulted in an average DSC of 0.88 ± 0.08 and a mean Hausdorff distance of 1.1 ± 0.9 mm. The number of patients (#) in DSC ranges are as follows: 0.60-0.69(1), 0.70-0.79(2), 0.80-0.89(13), >0.89(11). For the PZ, the average DSC was 0.72 ± 0.17 and average Hausdorff of 0.9 ± 0.9 mm. The number of patients (#) in DSC ranges are as follows: <0.60(4), 0.60-0.69(6), 0.70-0.79(7), 0.80-0.89(9), >0.89(1). Conclusion: The MRI atlas-based segmentation method achieved good results for both the whole prostate and PZ compared to expert defined VOIs. The technique is fast, fully automatic, and has the potential to provide significant time savings for prostate VOI definition. AS Nelson and J Piper are partial owners of MIM Software, Inc. AS Nelson, J Piper, K Curry, and A Swallen are current employees at MIM Software, Inc.
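The two evaluation metrics quoted above are easy to reproduce; below is a small sketch using SciPy, where the Dice coefficient operates on boolean masks and the symmetric Hausdorff distance on surface point sets in millimetres. Note that the paper reports a mean Hausdorff value, whereas directed_hausdorff returns the maximum surface deviation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(points_a, points_b):
    """Symmetric (max) Hausdorff distance between two (n, 3) point sets,
    e.g. contour voxel coordinates in mm."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```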
2016-07-08
Systems Using Automata Theory and Barrier Certificates. We developed a sound but incomplete method for the computational verification of specifications... method merges ideas from automata-based model checking with those from control theory, including so-called barrier certificates and optimization-based... "Automata theory meets barrier certificates: Temporal logic verification of nonlinear systems," IEEE Transactions on Automatic Control, 2015. [J2] R
Differences between automatically detected and steady-state fractional flow reserve.
Härle, Tobias; Meyer, Sven; Vahldiek, Felix; Elsässer, Albrecht
2016-02-01
Measurement of fractional flow reserve (FFR) has become a standard diagnostic tool in the catheterization laboratory. FFR evaluation studies were based on pressure recordings during steady-state maximum hyperemia. Commercially available computer systems detect the lowest Pd/Pa ratio automatically, which might not always be measured during steady-state hyperemia. We sought to compare the automatically detected FFR and the true steady-state FFR. Pressure measurement traces of 105 coronary lesions from 77 patients with intermediate coronary lesions or multivessel disease were reviewed. In all patients, hyperemia had been achieved by intravenous adenosine administration at a dosage of 140 µg/kg/min. In 42 lesions (40%) the automatically detected FFR was lower than the true steady-state FFR. Mean bias was 0.009 (standard deviation 0.015, limits of agreement -0.02, 0.037). In 4 lesions (3.8%) the two methods led to different treatment recommendations; in all 4 cases the instantaneous wave-free ratio confirmed steady-state FFR. Automatically detected FFR was slightly lower than steady-state FFR in more than one third of cases. Consequently, interpretation of automatically detected FFR values closely below the cutoff value requires special attention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasahara, M; Arimura, H; Hirose, T
Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure can cause misalignment in patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used in this study. Our proposed framework starts from the generation of probabilistic atlases of bone and prostate from the 24 planning CT images and the prostate contours made during treatment planning. Next, gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to the CBCT images with the probabilistic atlases and the CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance between the two centroids of the prostate regions determined by our method and the ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for the 10 patients. Results: The average Euclidean distance between the centroids of the extracted prostate regions determined by our proposed method and the ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction. Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible for automatically determining prostate regions on CBCT images.
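The Bayesian combination step can be sketched compactly: the atlas supplies the voxelwise prior P(prostate), the training histogram supplies the likelihood P(intensity | prostate), and Bayes' rule yields the posterior. The flat background likelihood below is a simplifying assumption of this sketch, not part of the published framework.

```python
import numpy as np

def prostate_posterior(cbct, atlas_prior, hist_edges, hist_prob, bg_likelihood=None):
    """Voxelwise posterior P(prostate | intensity) via Bayes' rule.

    cbct: CBCT volume; atlas_prior: probabilistic-atlas volume in [0, 1];
    hist_edges/hist_prob: training histogram of CBCT values inside the CTV.
    """
    # likelihood P(I | prostate), looked up from the training histogram
    bins = np.clip(np.digitize(cbct, hist_edges) - 1, 0, len(hist_prob) - 1)
    lik = np.asarray(hist_prob)[bins]
    if bg_likelihood is None:             # crude flat background model (assumption)
        bg_likelihood = np.mean(hist_prob)
    evidence = lik * atlas_prior + bg_likelihood * (1.0 - atlas_prior)
    return lik * atlas_prior / np.maximum(evidence, 1e-12)

# the centroid of the thresholded posterior gives the target position, e.g.
# centroid = np.argwhere(post > 0.5).mean(axis=0)
```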
Optimization and automation of quantitative NMR data extraction.
Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos
2013-06-18
NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and to determine analyte concentration using all accepted methods. The algorithm is based on complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and its applicability to NMR data acquired for very large sample sets.
Ribeiro, David S M; Prior, João A V; Taveira, Christian J M; Mendes, José M A F S; Santos, João L M
2011-06-15
In this work, an automatic and fast miniaturized screening flow system was developed for the first time for the toxicological control of glibenclamide in beverages, with application in forensic laboratory investigations as well as in the chemical control of commercially available pharmaceutical formulations. The automatic system exploited the multipumping flow system (MPFS) concept and allowed the implementation of a new glibenclamide determination method based on fluorometric monitoring of the drug in acidic medium (λ(ex)=301 nm; λ(em)=404 nm) in the presence of an anionic surfactant (SDS), which promotes an organized micellar medium that enhances the fluorometric measurements. The developed approach assured good recoveries in the analysis of five spiked alcoholic beverages. Additionally, good agreement was verified when comparing the results obtained in the determination of glibenclamide in five commercial pharmaceutical formulations by the proposed method and by the pharmacopoeia reference procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-01-01
... consumption and water consumption of automatic commercial ice makers. 431.134 Section 431.134 Energy... EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.134 Uniform test methods for the measurement of energy consumption and water consumption of automatic commercial ice makers. (a) Scope. This...
Content-based cell pathology image retrieval by combining different features
NASA Astrophysics Data System (ADS)
Zhou, Guangquan; Jiang, Lu; Luo, Limin; Bao, Xudong; Shu, Huazhong
2004-04-01
Content-based color cell pathology image retrieval is one of the newest computer image processing applications in medicine. Recently, some algorithms have been developed to achieve this goal. Because of the particularity of cell pathology images, image retrieval based on a single characteristic is not satisfactory. A new method for pathology image retrieval that combines color, texture and morphologic features to search cell images is proposed. First, nucleus regions of leukocytes in the images are automatically segmented by a K-means clustering method. Then each single leukocyte region is detected using threshold-based segmentation and mathematical morphology. Color, texture and morphologic features are extracted from each leukocyte to represent the main attributes in the search query. The features are then normalized, because the numerical ranges and physical meanings of the extracted features differ. Finally, a relevance feedback system is introduced so that the system can automatically adjust the weights of different features and improve the retrieval results according to the feedback information. Retrieval results using the proposed method fit closely with human perception and are better than those obtained with methods based on a single feature.
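The normalization and feedback-driven weighting steps can be sketched as follows; the z-score normalization and the inverse-spread weight update are common textbook choices standing in for the paper's unspecified formulas.

```python
import numpy as np

def retrieve(query_feats, db_feats, weights):
    """Rank database images by weighted distance over normalized features.

    query_feats: (d,) vector; db_feats: (n, d); weights: (d,) relevance weights.
    """
    mu, sigma = db_feats.mean(0), db_feats.std(0) + 1e-12
    dbz = (db_feats - mu) / sigma                  # z-score normalization
    qz = (query_feats - mu) / sigma
    dist = np.sqrt((((dbz - qz) ** 2) * weights).sum(1))
    return np.argsort(dist)                        # best matches first

def update_weights(weights, db_feats, relevant_idx):
    """Naive relevance feedback: emphasize features that are consistent
    across the images the user marked as relevant."""
    spread = db_feats[relevant_idx].std(0) + 1e-12
    new_w = weights / spread
    return new_w / new_w.sum()
```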
An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.
Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong
2018-04-11
In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), aiming to overcome the main shortcoming of previous methods: the inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with downstream classifiers. A binary skin-and-non-skin classifier is used to detect the skin area, and a seven-class classifier is used to distinguish facial acne vulgaris types from healthy skin. In the experiments, we compare the effectiveness of our own CNN and a VGG16 network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 network is effective at extracting features from facial acne vulgaris images and that these features are very useful for the follow-up classifiers. Finally, we apply both classifiers based on the pre-trained VGG16 network to assist doctors in facial acne vulgaris diagnosis.
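A minimal transfer-learning sketch in the spirit of the paper uses VGG16 (pre-trained on ImageNet) as a frozen feature extractor with a small classifier head. The seven-class output follows the abstract; the input size, head architecture, optimizer and data pipeline are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep pre-trained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(7, activation="softmax"),  # acne severity types + healthy skin
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=..., epochs=...)
```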
The Measurement of Term Importance in Automatic Indexing.
ERIC Educational Resources Information Center
Salton, G.; And Others
1981-01-01
Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
Interactive surface correction for 3D shape based segmentation
NASA Astrophysics Data System (ADS)
Schwarz, Tobias; Heimann, Tobias; Tetzlaff, Ralf; Rau, Anne-Mareike; Wolf, Ivo; Meinzer, Hans-Peter
2008-03-01
Statistical shape models have become a fast and robust method for segmentation of anatomical structures in medical image volumes. In clinical practice, however, pathological cases and image artifacts can lead to local deviations of the detected contour from the true object boundary. These deviations have to be corrected manually. We present an intuitively applicable solution for surface interaction based on Gaussian deformation kernels. The method is evaluated by two radiological experts on segmentations of the liver in contrast-enhanced CT images and of the left heart ventricle (LV) in MRI data. For both applications, five datasets are segmented automatically using deformable shape models, and the resulting surfaces are corrected manually. The interactive correction step improves the average surface distance against ground truth from 2.43 mm to 2.17 mm for the liver, and from 2.71 mm to 1.34 mm for the LV. We expect this method to raise the acceptance of automatic segmentation methods in clinical application.
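The core interaction can be sketched in a few lines: the user drags a surface vertex, and neighbouring vertices follow with Gaussian-weighted displacements. The kernel width below is an illustrative assumption; an interactive tool would let the user control it.

```python
import numpy as np

def gaussian_surface_correction(vertices, pick, target, sigma=10.0):
    """Drag one surface vertex to a user-picked target and deform its
    neighbourhood with a Gaussian kernel, as in interactive mesh correction.

    vertices: (n, 3) mesh vertices; pick: index of the grabbed vertex;
    target: (3,) corrected position; sigma: kernel radius in mm (assumption).
    """
    shift = target - vertices[pick]
    d2 = ((vertices - vertices[pick]) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # influence falls off with distance
    return vertices + w[:, None] * shift
```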
Swiderska, Zaneta; Korzynska, Anna; Markiewicz, Tomasz; Lorent, Malgorzata; Zak, Jakub; Wesolowska, Anna; Roszkowiak, Lukasz; Slodkowska, Janina; Grala, Bartlomiej
2015-01-01
Background. This paper presents a study of hot-spot selection in the assessment of whole slide images of tissue sections collected from meningioma patients. The samples were immunohistochemically stained to determine the Ki-67/MIB-1 proliferation index used for prognosis and treatment planning. Objective. Observer performance was examined by comparing results of the proposed method of automatic hot-spot selection in whole slide images, results of traditional scoring under a microscope, and results of a pathologist's manual hot-spot selection. Methods. The results of scoring the Ki-67 index using optical scoring under a microscope, software for Ki-67 index quantification based on hot spots selected by two pathologists (once and three times, respectively), and the same software applied to hot spots selected by the proposed automatic methods were compared using Kendall's tau-b statistics. Results. The results show intra- and interobserver agreement. The agreement between Ki-67 scoring with manual and automatic hot-spot selection is high, while agreement between Ki-67 index scoring results in whole slide images and traditional microscopic examination is lower. Conclusions. The agreement observed for the three scoring methods shows that automation of area selection is an effective tool for supporting physicians and for increasing the reliability of Ki-67 scoring in meningioma. PMID:26240787
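Kendall's tau-b, the agreement statistic used in the study, is available directly in SciPy; the sketch below runs it on made-up scores (illustration only, not study data).

```python
from scipy import stats

manual_scores    = [12.0, 3.5, 8.2, 21.0, 5.1, 9.8]   # % Ki-67, hypothetical
automatic_scores = [11.2, 4.0, 7.9, 19.5, 5.6, 10.4]

# scipy.stats.kendalltau computes the tau-b variant by default
tau, p_value = stats.kendalltau(manual_scores, automatic_scores)
print(f"Kendall tau-b = {tau:.3f} (p = {p_value:.4f})")
```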
Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging
NASA Astrophysics Data System (ADS)
Litkey, P.; Nurminen, K.; Honkavaara, E.
2013-05-01
The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing their quantity and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery collected after a storm. The method requires before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was derived from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, collected during the leaf-off season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open source data sets in the management of natural disasters.
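The differencing step itself is simple raster arithmetic plus connected-component filtering; a sketch follows, where both DSMs are assumed to be co-registered rasters on the same grid and the height-drop and area thresholds are illustrative, not values from the project.

```python
import numpy as np
from scipy import ndimage

def detect_fallen_trees(dsm_before, dsm_after, drop_m=5.0, min_area_px=25):
    """Flag candidate storm-damage patches where the canopy surface dropped."""
    diff = dsm_before - dsm_after                  # positive where canopy was lost
    candidates = diff > drop_m
    labels, n = ndimage.label(candidates)          # group adjacent change pixels
    sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
    big = np.nonzero(sizes >= min_area_px)[0] + 1  # keep only sizeable patches
    return np.isin(labels, big)
```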
Metzger, Marie-Hélène; Tvardik, Nastassia; Gicquel, Quentin; Bouvry, Côme; Poulet, Emmanuel; Potinet-Pagliaroli, Véronique
2017-06-01
The aim of this study was to determine whether an expert system based on automated processing of electronic health records (EHRs) could provide a more accurate estimate of the annual rate of emergency department (ED) visits for suicide attempts in France, as compared to the current national surveillance system based on manual coding by emergency practitioners. A feasibility study was conducted at Lyon University Hospital, using data for all ED patient visits in 2012. After automatic data extraction and pre-processing, including automatic coding of medical free-text through use of the Unified Medical Language System, seven different machine-learning methods were used to classify the reasons for ED visits into "suicide attempts" versus "other reasons". The performance of these different methods was compared by using the F-measure. In a test sample of 444 patients admitted to the ED in 2012 (98 suicide attempts, 48 cases of suicidal ideation, and 292 controls with no recorded non-fatal suicidal behaviour), the F-measure for automatic detection of suicide attempts ranged from 70.4% to 95.3%. The random forest and naïve Bayes methods performed best. This study demonstrates that machine-learning methods can improve the quality of epidemiological indicators as compared to current national surveillance of suicide attempts. Copyright © 2016 John Wiley & Sons, Ltd.
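As an illustration of this kind of pipeline, the sketch below trains one of the best-performing model families from the study (a random forest) on free-text ED notes and reports the F-measure. The TF-IDF representation and the toy notes are assumptions of the sketch; the study coded the clinical text via the UMLS rather than raw term statistics.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

notes = ["voluntary drug intoxication, suicide note found",
         "ankle sprain after fall",
         "wrist lacerations, suicidal intent",
         "chest pain, no psychiatric history"]     # toy examples, not study data
labels = [1, 0, 1, 0]                              # 1 = suicide attempt

X_tr, X_te, y_tr, y_te = train_test_split(notes, labels, test_size=0.5,
                                          random_state=0, stratify=labels)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
```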
Research on large spatial coordinate automatic measuring system based on multilateral method
NASA Astrophysics Data System (ADS)
Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui
2015-10-01
To measure spatial coordinates accurately and efficiently over a large size range, an automatic manipulator-based measurement system built on the multilateration method is developed. This system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the laser tracer measuring centers and the cat's eye reflector measuring center, and an algorithm that automatically adjusts the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to automatically plan the orientations of the reflector corresponding to 24 given points. After improving the orientations of a few points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The results show that the proposed algorithm helps the developed system measure spatial coordinates over a large range with efficiency.
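The core multilateration computation, recovering a 3-D point from ranges measured by four stationary tracers, reduces to a small nonlinear least-squares problem; the tracer layout and ranges below are simulated for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# assumed tracer positions (metres), roughly spanning the 5 x 3 x 3.2 m volume
tracers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.2],
                    [0.0, 3.0, 0.1], [5.0, 3.0, 3.0]])

def residuals(p, ranges):
    # difference between modelled tracer-to-point distances and measured ranges
    return np.linalg.norm(tracers - p, axis=1) - ranges

true_point = np.array([2.0, 1.5, 1.0])
ranges = np.linalg.norm(tracers - true_point, axis=1)   # simulated measurements

sol = least_squares(residuals, x0=np.array([2.5, 1.5, 1.5]), args=(ranges,))
print("estimated point:", sol.x)                        # recovers true_point
```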
Extraction of fault component from abnormal sound in diesel engines using acoustic signals
NASA Astrophysics Data System (ADS)
Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou
2016-06-01
In this paper, a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty component across occurrences. The method uses the sample point at which the abnormal sound appears as the starting position of each segment. In this study, the abnormal sound belonged to the impact fault type; thus, a starting-position search based on gradient variance was adopted. A similarity coefficient between two equally sized signals is defined. By comparing against this similarity threshold, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by impact-type faults, and the extracted component can be used to identify the fault type.
Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Bio-robots based on brain computer interfaces (BCI) suffer from a lack of consideration of the animal's own characteristics in navigation. This paper proposes a new method for automatic bio-robot navigation, combining a reward-generating algorithm based on reinforcement learning (RL) with the learning intelligence of the animal. Given graded electrical rewards, the animal (e.g., a rat) seeks the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route through co-learning. This work offers significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
NASA Astrophysics Data System (ADS)
Riveiro, B.; DeJong, M.; Conde, B.
2016-06-01
Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information for structural assessment and health monitoring, many practitioners are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.
Automatic segmentation of cerebral white matter hyperintensities using only 3D FLAIR images.
Simões, Rita; Mönninghoff, Christoph; Dlugaj, Martha; Weimar, Christian; Wanke, Isabel; van Cappellen van Walsum, Anne-Marie; Slump, Cornelis
2013-09-01
Magnetic resonance (MR) white matter hyperintensities have been shown to predict an increased risk of developing cognitive decline. However, their actual role in the conversion to dementia is still not fully understood. Automatic segmentation methods can help in the screening and monitoring of Mild Cognitive Impairment patients who take part in large population-based studies. Most existing segmentation approaches use multimodal MR images. However, multiple acquisitions represent a limitation in terms of both patient comfort and computational complexity of the algorithms. In this work, we propose an automatic lesion segmentation method that uses only three-dimensional fluid-attenuated inversion recovery (FLAIR) images. We use a modified context-sensitive Gaussian mixture model to determine voxel class probabilities, followed by correction of FLAIR artifacts. We evaluate the method against manual segmentation performed by an experienced neuroradiologist and compare the results with other unimodal segmentation approaches. Finally, we apply our method to the segmentation of multiple sclerosis lesions using a publicly available benchmark dataset. Results show a performance similar to that of other state-of-the-art multimodal methods, as well as to the human rater. Copyright © 2013 Elsevier Inc. All rights reserved.
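Stripped of the context-sensitivity and artifact correction described above, the voxel classification step amounts to fitting a Gaussian mixture to brain intensities and reading off the posterior of the brightest component; a minimal sketch with scikit-learn follows (the four-class setting is an assumption).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_flair(volume, brain_mask, n_classes=4):
    """Fit a GMM to brain intensities; the highest-mean component is taken
    as the hyperintensity candidate class (WMH are bright on FLAIR)."""
    intensities = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(intensities)
    post = gmm.predict_proba(intensities)
    lesion_class = int(np.argmax(gmm.means_.ravel()))
    prob = np.zeros(volume.shape)
    prob[brain_mask] = post[:, lesion_class]   # voxelwise lesion probability
    return prob
```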
Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution
NASA Astrophysics Data System (ADS)
Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing
2016-12-01
The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
NASA Astrophysics Data System (ADS)
Marchetti, E.; Ripepe, M.; Ulivieri, G.; Kogelnig, A.
2015-11-01
Avalanche risk management is strongly related to the ability to identify and promptly report the occurrence of snow avalanches. Infrasound has been applied to avalanche research and monitoring for the last 20 years, but it never turned into an operational tool for identifying clear signals related to avalanches. We present here a method based on the analysis of infrasound signals recorded by a small-aperture array in Ischgl (Austria), which provides a significant improvement toward overcoming this limit. The method is based on array-derived wave parameters, such as back azimuth and apparent velocity, and defines threshold criteria for automatic avalanche identification by considering an avalanche as a moving source of infrasound. We validate the efficiency of the automatic infrasound detection against continuous Doppler radar observations and show how the velocity of a snow avalanche on any given path around the array can be efficiently derived. Our results indicate that proper infrasound array analysis allows robust, real-time, remote detection of snow avalanches, providing the number and times of occurrence of snow avalanches all around the array, which represent key information for proper validation of avalanche forecast models and for risk management in a given area.
Morshed, Nader; Echols, Nathaniel; Adams, Paul D.
2015-04-25
In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.
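The classification step itself is standard; a sketch with scikit-learn follows, where the three features (peak electron density, coordination number, mean nearest-ligand distance) and the tiny training set are placeholders for the curated feature sets used in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# hypothetical features: [peak density, coordination number, ligand distance (A)]
X_train = [[1.2, 2, 2.8], [6.5, 6, 2.1], [5.9, 4, 2.0], [1.0, 1, 3.0]]
y_train = ["HOH", "ZN", "CA", "HOH"]        # water vs. metal-ion labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
print(clf.predict([[6.1, 5, 2.05]]))        # expected: a metal-ion label
```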
Detection technology research on the one-way clutch of automatic brake adjuster
NASA Astrophysics Data System (ADS)
Jiang, Wensong; Luo, Zai; Lu, Yi
2013-10-01
In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of the automatic brake adjuster. To determine the suitable adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model. Its critical moments are taken as the ideal values of the adjusting brake moment for evaluating the acceptable quality of the one-way clutch of the automatic brake adjuster. We calculate the ideal values of the critical moment for the different structures of the one-way clutch based on its mechanical model before the adjusting brake moment test begins. In addition, an experimental apparatus with a measurement uncertainty of ±0.1 N·m was specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch of the automatic brake adjuster by comparing the test results with the ideal values instead of with the empirical values (EXP). In fact, the evaluation standard for the adjusting brake moment currently applied in Chinese projects still uses the EXP provided by the manufacturer, which becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters were used in a verification experiment to verify the accuracy of the test method. The experimental results show that the experimental values of the adjusting brake moment, both clockwise and anti-clockwise, are within the ranges of the theoretical results. The testing method provided in this article meets the requirements of the manufacturer's standard.
Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound.
Mohareri, Omid; Ramezani, Mahdi; Adebar, Troy K; Abolmaesumi, Purang; Salcudean, Septimiu E
2013-09-01
Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.
NASA Astrophysics Data System (ADS)
Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; Remli, M. A. Pi; Kamarulzaman, M. H.
2017-10-01
In a recent study, the analysis of pressure transient signals was shown to be an accurate and low-cost method for leak and feature detection in water distribution systems. Transient phenomena occur due to sudden changes in the fluid's propagation in pipeline systems, caused by rapid pressure and flow fluctuations from events such as rapid closing and opening of valves or pump failure. In this paper, the feasibility of the Hilbert-Huang transform (HHT) for analysing pressure transient signals is presented and discussed. HHT decomposes a signal into intrinsic mode functions (IMFs). However, a drawback of HHT is the difficulty of selecting the suitable IMF for the subsequent postprocessing step, the Hilbert transform (HT). This paper shows that applying an integrated kurtosis-based algorithm for a z-filter technique (I-Kaz) to the kurtosis ratio (I-Kaz-kurtosis) enables automatic selection of the IMF that should be used. The technique is demonstrated on a 57.90-meter medium-density polyethylene (MDPE) pipe installed with a single artificial leak. The analysis results using the I-Kaz-kurtosis ratio confirmed that the method can be used for automatic selection of the IMF even when the signal-to-noise ratio is low. Therefore, the I-Kaz-kurtosis ratio method is recommended as a means to implement automatic IMF selection for HHT analysis.
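The flavour of such an automatic IMF selector can be sketched with an off-the-shelf EMD implementation; here a plain kurtosis score stands in for the I-Kaz-kurtosis statistic, and the PyEMD package (pip install EMD-signal) is an assumed dependency.

```python
import numpy as np
from PyEMD import EMD               # assumed dependency: pip install EMD-signal
from scipy.stats import kurtosis

# synthetic pressure trace: a carrier wave, noise, and one injected transient
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.randn(t.size)
signal[1000:1040] += np.exp(-np.arange(40) / 8.0)

imfs = EMD()(signal)                            # decompose into IMFs
scores = [kurtosis(imf) for imf in imfs]        # transients give high kurtosis
best = int(np.argmax(scores))                   # automatic IMF selection
print(f"selected IMF {best} with kurtosis {scores[best]:.2f}")
```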
An automatic markerless registration method for neurosurgical robotics based on an optical camera.
Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi
2018-02-01
Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space; acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain the patient's position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. High-coverage head surface geometry is then reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark drawn manually on the patient's head prior to the capture procedure can be extracted to accomplish coarse registration automatically, rather than using facial anatomic landmarks. Fine registration is then achieved by registering the high-coverage head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error over 8 different patient positions, measured with targets inside a head phantom, was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
ATIPS: Automatic Travel Itinerary Planning System for Domestic Areas.
Chang, Hsien-Tsung; Chang, Yi-Ming; Tsai, Meng-Tze
2016-01-01
Leisure travel has become a topic of great interest to Taiwanese residents in recent years. Most residents expect to be able to relax on a vacation during the holidays; however, the complicated procedure of travel itinerary planning is often discouraging and leads them to abandon the idea of traveling. In this paper, we design an automatic travel itinerary planning system for the domestic area (ATIPS) using an algorithm to automatically plan a domestic travel itinerary based on user intentions that allows users to minimize the process of trip planning. Simply by entering the travel time, the departure point, and the destination location, the system can automatically generate a travel itinerary. According to the results of the experiments, 70% of users were satisfied with the result of our system, and 82% of users were satisfied with the automatic user preference learning mechanism of ATIPS. Our algorithm also provides a framework for substituting modules or weights and offers a new method for travel planning. PMID:26839529
Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang
2015-09-01
The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error when one side of the face was compared to the other than a symmetry plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry, there was less error than when it was reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (e.g., frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurement of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and for clinical application in reconstruction.
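The PCA part of such a method can be sketched directly: the centroid and principal axes of the point set define a candidate mid-sagittal plane, and reflection across it allows side-to-side comparison. Which principal axis corresponds to the left-right direction, and the omission of the paper's iterative corrective-point refinement, are assumptions of this sketch.

```python
import numpy as np

def pca_symmetry_plane(points):
    """points: (n, 3) skull landmarks or surface samples.
    Returns (centroid, unit normal) of a candidate symmetry plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]    # assumption: the least-variance axis ~ left-right axis
    return centroid, normal

def mirror(points, centroid, normal):
    """Reflect points across the plane to compare one side with the other."""
    d = (points - centroid) @ normal
    return points - 2.0 * d[:, None] * normal
```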
Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.
Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L
2010-07-01
The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters (mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size) were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented to investigate whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG levels in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and beneficial.
Left ventricular endocardial surface detection based on real-time 3D echocardiographic data
NASA Technical Reports Server (NTRS)
Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.
2001-01-01
OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to the RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography with off-line reconstruction and by RT3DE. Volume estimates obtained with this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.
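A lightweight, slice-wise stand-in for this level-set evolution is available in scikit-image; the sketch below seeds a disk at a user-chosen point (mirroring the manual initialization above) and evolves it with morphological Chan-Vese rather than the paper's conservation-law PDE solver. The seed radius and iteration count are illustrative assumptions.

```python
from skimage.segmentation import morphological_chan_vese, disk_level_set

def segment_lv_slice(slice_img, seed_center, seed_radius=15, iterations=150):
    """slice_img: 2-D echo slice (float); returns a boolean LV-cavity mask."""
    # manual initialization: a disk placed inside the LV cavity by the user
    init = disk_level_set(slice_img.shape, center=seed_center, radius=seed_radius)
    # evolve the contour toward the cavity/endocardium boundary
    seg = morphological_chan_vese(slice_img, iterations,
                                  init_level_set=init, smoothing=2)
    return seg.astype(bool)
```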
Practical automatic Arabic license plate recognition system
NASA Astrophysics Data System (ADS)
Mohammad, Khader; Agaian, Sos; Saleh, Hani
2011-02-01
Since the 1970s, the need for automatic license plate recognition systems has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. Specifically, automatic license plate recognition systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g., speed limit enforcement) and commercial uses such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates in use, the requirements for an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape and orientation and has an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because the classification of alphabetic and numeric characters exploits the license plate organization. Experimental results for license plates from two different Arab countries show an average of 99% successful license plate localization and recognition over a total of more than 20 different images captured in a complex outdoor environment. The run times are shorter than those of conventional and many state-of-the-art methods.
Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.
Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping
2014-01-01
The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues and has been used for better tumor volume definition of lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentation was performed on the fused images both with the proposed method and manually by an experienced radiation oncologist. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as tumors extending into the chest wall or mediastinum.
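For reference, the reported DSC can be computed from two binary segmentation masks as in this minimal sketch:

```python
# Dice's similarity coefficient between two binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True
print(dice(auto, manual))
```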
Virtual Surveyor based Object Extraction from Airborne LiDAR data
NASA Astrophysics Data System (ADS)
Habib, Md. Ahsan
Topographic feature detection of land cover from LiDAR data is important in various fields - city planning, disaster response and prevention, soil conservation, infrastructure, or forestry. In recent years, feature classification compliant with Object-Based Image Analysis (OBIA) methodology has been gaining traction in remote sensing and geographic information science (GIS). In OBIA, the LiDAR image is first divided into meaningful segments called object candidates. This yields, in addition to spectral values, a plethora of new information such as aggregated spectral pixel values, morphology, texture, context, and topology. Traditional nonparametric segmentation methods rely on segmentations at different scales to produce a hierarchy of semantically significant objects. Properly tuned scale parameters are, therefore, imperative in these methods for successful subsequent classification. Recently, some progress has been made in the development of methods for tuning the parameters for automatic segmentation. However, researchers have found it very difficult to automatically refine the tuning with respect to each object class present in the scene. Moreover, due to the relative complexity of real-world objects, intra-class heterogeneity is very high, which leads to over-segmentation, so the method fails to deliver many of the new segment features correctly. In this dissertation, a new hierarchical 3D object segmentation algorithm called Automatic Virtual Surveyor based Object Extraction (AVSOE) is presented. AVSOE segments objects based on their distinct geometric concavity/convexity. This is achieved by strategically mapping the sloping surface which connects the object to its background. Further analysis produces a hierarchical decomposition of objects into their sub-objects at a single scale level. Extensive qualitative and quantitative results are presented to demonstrate the efficacy of this hierarchical segmentation approach.
Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T
2005-10-01
Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.
High Performance Automatic Character Skinning Based on Projection Distance
NASA Astrophysics Data System (ADS)
Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui
2018-03-01
Skeleton-driven deformation methods are commonly used for character deformation. The process of painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first groups each mesh vertex of the 3D human model to a skeleton bone by the minimum distance from the vertex to each bone. It then calculates each vertex's weights for the adjacent bones from the distance of the vertex's projection point to the bone joints. Our method's output can be applied not only to any kind of skeleton-driven deformation, but also to motion capture driven (mocap-driven) deformation. Experimental results show that our method not only has strong generality and robustness, but also has high performance.
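Under our own simplifying reading of the two steps, a sketch follows: bind each vertex to its nearest bone segment, then weight the bone's two joints by the clamped projection parameter. The function names and the linear two-joint weight split are assumptions, not the paper's exact formulas.

```python
# Hedged sketch of projection-distance skinning (assumed weighting rule).
import numpy as np

def point_segment_distance_and_t(p, a, b):
    """Distance from point p to segment a-b, plus the clamped projection
    parameter t in [0, 1] locating the projection between the two joints."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab)), t

def skin_weights(vertices, bones):
    """bones: list of (joint_a, joint_b) point pairs. Returns, per vertex,
    the bound bone index and the weights (w_a, w_b) for its two joints."""
    out = []
    for p in vertices:
        d_t = [point_segment_distance_and_t(p, a, b) for a, b in bones]
        i = int(np.argmin([d for d, _ in d_t]))   # nearest bone wins
        t = d_t[i][1]
        out.append((i, (1.0 - t, t)))  # closer joint gets the larger weight
    return out

bones = [(np.zeros(3), np.array([0., 1., 0.]))]
print(skin_weights([np.array([0.1, 0.3, 0.0])], bones))
```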
Uozumi, Y; Nagamune, K
2013-01-01
The purpose of this study is to propose an automatic segmentation method for each bone of the knee (the femur, tibia, patella, and fibula) in MDCT images. The proposed method was applied to six patients (age 33 ± 13, four males/two females). The method segmented the knee joint into individual bones by using the anatomical structure of the knee joint. For evaluation, the matching rate between manual segmentation and the proposed method was calculated. As a result, the matching rates for the femur, tibia, patella, and fibula were 95.84 ± 0.57%, 94.12 ± 1.01%, 94.49 ± 0.83%, and 86.37 ± 4.28%, respectively. This study concluded that the proposed method is sufficiently accurate to segment the knee bones.
Automatic anatomical segmentation of the liver by separation planes
NASA Astrophysics Data System (ADS)
Boltcheva, Dobrina; Passat, Nicolas; Agnus, Vincent; Jacob-Da Col, Marie-Andrée; Ronse, Christian; Soler, Luc
2006-03-01
Surgical planning in oncological liver surgery is based on the location of the 8 anatomical segments according to Couinaud's definition and tumors inside these structures. The detection of the boundaries between the segments is then the first step of the preoperative planning. The proposed method, devoted to binary images of livers segmented from CT-scans, has been designed to delineate these segments. It automatically detects a set of landmarks using a priori anatomical knowledge and differential geometry criteria. These landmarks are then used to position the Couinaud's segments. Validations performed on 7 clinical cases tend to prove that the method is reliable for most of these separation planes.
[Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].
Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing
2003-12-01
Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We applied optimization theory to improve thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, exact, and robust registration method and have obtained satisfactory registration results.
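The smoothed thin-plate-spline mapping can be sketched with SciPy's RBFInterpolator, whose smoothing term plays a role analogous to the paper's relaxation of exact interpolation; the landmark coordinates below are made up, and this is not the authors' implementation.

```python
# Hedged sketch: landmark-driven elastic warp with a regularized
# thin-plate-spline interpolant (assumed smoothing value).
import numpy as np
from scipy.interpolate import RBFInterpolator

src = np.array([[10., 10.], [10., 90.], [90., 10.], [90., 90.], [50., 50.]])
dst = src + np.array([[2., 1.], [1., -2.], [-1., 2.], [0., 1.], [3., 0.]])

# smoothing > 0 relaxes exact interpolation, tolerating noisy landmarks.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=1.0)

# Map every pixel coordinate of a 100x100 image through the elastic warp.
grid = np.stack(np.meshgrid(np.arange(100), np.arange(100)), -1)
warped = warp(grid.reshape(-1, 2).astype(float))
```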
Cross correlation anomaly detection system
NASA Technical Reports Server (NTRS)
Micka, E. Z. (Inventor)
1975-01-01
This invention provides a method for automatically inspecting the surface of an object, such as an integrated circuit chip, whereby the data obtained from the light reflected from the surface by a scanning light beam are automatically compared with data representing acceptable values for each unique surface. A signal output is provided indicating acceptance or rejection of the chip. Acceptance is based on predetermined statistical confidence intervals calculated from known good regions of the object being tested, or their representative values. The method can utilize a known good chip, a photographic mask from which the I.C. was fabricated, or a computer-stored replica of each pattern being tested.
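A minimal sketch of the accept/reject rule: normalized cross correlation of the scanned region against the stored-good pattern, gated by a confidence interval estimated from known-good regions. The k = 3 band and one-sided test are illustrative choices, not the patent's exact statistics.

```python
# Hedged sketch of correlation-based surface inspection.
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equal-sized image regions."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def inspect(region, reference, good_scores, k=3.0):
    """Accept if NCC against the stored-good pattern falls above the lower
    bound of a mean - k*std band estimated from known-good regions."""
    lower = np.mean(good_scores) - k * np.std(good_scores)
    return ncc(region, reference) >= lower
```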
Code of Federal Regulations, 2013 CFR
2013-01-01
... water consumption of automatic commercial ice makers. 431.134 Section 431.134 Energy DEPARTMENT OF... Automatic Commercial Ice Makers Test Procedures § 431.134 Uniform test methods for the measurement of energy and water consumption of automatic commercial ice makers. (a) Scope. This section provides the test...
Code of Federal Regulations, 2014 CFR
2014-01-01
... water consumption of automatic commercial ice makers. 431.134 Section 431.134 Energy DEPARTMENT OF... Automatic Commercial Ice Makers Test Procedures § 431.134 Uniform test methods for the measurement of energy and water consumption of automatic commercial ice makers. (a) Scope. This section provides the test...
Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O
2015-12-01
To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using several of the lowest frequency coefficients (f_v) to account for most of the spectrum energy (Σ f_v²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth (the measured ADMT), represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique predicts the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of lung tumors, providing a robust automatic tool to evaluate diaphragm motion.
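The frequency-domain regression can be sketched as follows; the arrays X and y are synthetic placeholders standing in for per-phase dVPS features and measured diaphragm positions, and seven retained coefficients follows the paper's finding.

```python
# Hedged sketch: low-frequency DCT features + least-squares MLR fit.
import numpy as np
from scipy.fft import dct

def dvps_features(dvps_curve, n_coeffs=7):
    """Lowest-frequency DCT coefficients of one dVPS curve."""
    return dct(np.asarray(dvps_curve, dtype=float), norm='ortho')[:n_coeffs]

# X: one feature row per 4DCT dataset; y: measured ADMT values (mm).
# Both are random placeholders here, not the study's data.
rng = np.random.default_rng(0)
X = np.vstack([dvps_features(rng.random(120)) for _ in range(22)])
y = rng.random(22) * 20.0

X1 = np.hstack([X, np.ones((len(X), 1))])      # add intercept column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)
predicted_admt = X1 @ weights
```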
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high-dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted over a 100×100 pixel search region; the position with the largest correlation coefficient between the side-lobe image and the cut main-lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves both the efficiency and the precision of focal-spot reconstruction. PMID:28207758
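The matching step can be sketched as an exhaustive correlation-coefficient search over the stated 100×100 pixel window; this plain numpy version is an illustrative stand-in for the authors' self-correlation template matcher, not their implementation.

```python
# Hedged sketch: brute-force template match by correlation coefficient.
import numpy as np

def corr_coef(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def best_match(image, template, search=100):
    """Slide the template over a search x search window; return the
    top-left position with the largest correlation coefficient."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(min(search, image.shape[0] - th)):
        for c in range(min(search, image.shape[1] - tw)):
            s = corr_coef(image[r:r + th, c:c + tw], template)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```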
Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia
NASA Astrophysics Data System (ADS)
Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin
2013-10-01
This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.
System and method for charging a plug-in electric vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassham, Marjorie A.; Spigno, Jr., Ciro A.; Muller, Brett T.
2017-05-02
A charging system and method that may be used to automatically apply customized charging settings to a plug-in electric vehicle, where application of the settings is based on the vehicle's location. According to an exemplary embodiment, a user may establish and save a separate charging profile with certain customized charging settings for each geographic location where they plan to charge their plug-in electric vehicle. Whenever the plug-in electric vehicle enters a new geographic area, the charging method may automatically apply the charging profile that corresponds to that area. Thus, the user does not have to manually change or manipulate the charging settings every time they charge the plug-in electric vehicle in a new location.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes
Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik
2011-01-01
Background Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at × 10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique based on comparison with manually segmented lesions determined by a dermatologist. Conclusions The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
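As a rough sketch of the gradient vector flow computation that supplies the snake's external force, the explicit diffusion iteration below follows the standard GVF formulation; mu, dt, and the iteration count are illustrative choices (the explicit scheme needs mu*dt <= 1/4 for stability), and the snake evolution itself is omitted.

```python
# Hedged sketch: diffuse the edge-map gradient into a GVF force field.
import numpy as np

def laplacian(a):
    """Five-point Laplacian with wrap-around boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gvf(edge_map, mu=0.2, dt=0.2, iters=200):
    """Return the smooth force field (u, v) that pulls the snake toward
    lesion edges even from initializations far away."""
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        u += dt * (mu * laplacian(u) - mag2 * (u - fx))
        v += dt * (mu * laplacian(v) - mag2 * (v - fy))
    return u, v
```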
PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics
Poeschl, Yvonne; Plötner, Romina
2017-01-01
Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626
Identification of forensic samples by using an infrared-based automatic DNA sequencer.
Ricci, Ugo; Sani, Ilaria; Klintschar, Michael; Cerri, Nicoletta; De Ferrari, Francesco; Giovannucci Uzielli, Maria Luisa
2003-06-01
We have recently introduced a new protocol for analyzing all core loci of the Federal Bureau of Investigation's (FBI) Combined DNA Index System (CODIS) with an infrared (IR) automatic DNA sequencer (LI-COR 4200). The amplicons were labeled with forward oligonucleotide primers, covalently linked to a new infrared fluorescent molecule (IRDye 800). The alleles were displayed as familiar autoradiogram-like images with real-time detection. This protocol was employed for paternity testing, population studies, and identification of degraded forensic samples. We extensively analyzed some simulated forensic samples and mixed stains (blood, semen, saliva, bones, and fixed archival embedded tissues), comparing the results with donor samples. Sensitivity studies were also performed for the four multiplex systems. Our results show the efficiency, reliability, and accuracy of the IR system for the analysis of forensic samples. We also compared the efficiency of the multiplex protocol with ultraviolet (UV) technology. Paternity tests, undegraded DNA samples, and real forensic samples were analyzed with this approach based on IR technology and with UV-based automatic sequencers in combination with commercially-available kits. The comparability of the results with the widespread UV methods suggests that it is possible to exchange data between laboratories using the same core group of markers but different primer sets and detection methods.
Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic
NASA Astrophysics Data System (ADS)
Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder
2017-12-01
The purpose of this paper is to present a variational method for geometric contours which helps the level set function remain close to a signed distance function, thereby removing the need for the expensive re-initialization procedure; the level set method is then applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy, and diagnosis of various organs, tumors, and abnormalities. It benefits the patient through speedier and more decisive disease control with fewer side effects. The geometric shape, the tumor's size, and abnormal tissue growth can be calculated from the segmentation of the image. Automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation of an image is correlated with its texture features, making the process automatic and more effective. There is no initialization of parameters, and the approach works like an intelligent system: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis automatically constructs an ideal predictor and then yields the predicted results for the submitted query samples. All of the aforementioned tedious jobs can be done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.
Automatic Tortuosity-Based Retinopathy of Prematurity Screening System
NASA Astrophysics Data System (ADS)
Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet
Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from digital retinal images is very useful for assisting an ophthalmologist in ROP screening and for preventing childhood blindness. This paper proposes a method to automatically classify images as tortuous or non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using the segments with the highest tortuosity. For an optimal set of training parameters the prediction accuracy is as high as 100%.
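A small numpy sketch of the final scoring step, discrete curvature along resampled vessel centerlines, is given below; the aggregation rule and threshold value are illustrative assumptions rather than the paper's trained parameters.

```python
# Hedged sketch: curvature-based tortuosity score for vessel segments.
import numpy as np

def curvature(points):
    """Discrete curvature along a 2D polyline (N x 2 array of samples)."""
    dx, dy = np.gradient(points[:, 0]), np.gradient(points[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12)**1.5

def is_tortuous(segments, k=5, threshold=0.15):
    """Classify an image from the mean curvature of its k most
    tortuous segments (k and threshold are assumed values)."""
    scores = sorted((curvature(s).mean() for s in segments), reverse=True)
    return float(np.mean(scores[:k])) > threshold
```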
Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction
NASA Astrophysics Data System (ADS)
Yu, Qian; Helmholz, Petra; Belton, David
2016-06-01
In recent years, 3D city models have been in high demand by many public and private organisations, and their steadily growing quality and quantity are further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.
Design and Implementation of Multifunctional Automatic Drilling End Effector
NASA Astrophysics Data System (ADS)
Wang, Zhanxi; Qin, Xiansheng; Bai, Jing; Tan, Xiaoqun; Li, Jing
2017-03-01
In order to realize automatic drilling in aircraft assembly, a drilling end effector is designed by integrating the pressure unit, drilling unit, measurement unit, control system, and frame structure. To reduce hole deviation, this paper proposes a normal adjustment scheme based on 4 laser distance sensors. The actual normal direction of the workpiece surface can be calculated from the sensor measurements, and the robot posture is then adjusted to correct the hole deviation. A detection method based on the camera and a reference hole is proposed to detect and locate the hole automatically. The experimental results show that the position error of the system is less than 0.3 mm, and the normal error is less than 0.5°. The drilling end effector and robot can greatly improve drilling efficiency and assembly quality for aircraft parts, and reduce the product development cycle.
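The normal computation can be sketched as a least-squares plane fit: each of the 4 laser distance sensors yields one 3D surface point, and the fitted plane's normal gives the adjustment target. The function names and the tool-axis convention below are assumptions for illustration.

```python
# Hedged sketch: surface normal from four sensor-measured 3D points.
import numpy as np

def surface_normal(points):
    """points: 4x3 array of surface points measured by the laser sensors.
    Returns the unit normal of the least-squares plane through them."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value spans
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

def normal_deviation_deg(n, tool_axis=np.array([0.0, 0.0, 1.0])):
    """Angle (degrees) between the surface normal and the drill axis."""
    cosang = abs(float(np.dot(n, tool_axis)))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```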
Chen, Xiang; Gilkeson, Robert; Fei, Baowei
2013-01-01
We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e., Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification. PMID:24386527
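A much-simplified, 2D-only sketch of the search loop follows: SciPy's Nelder-Mead implementation of the downhill simplex maximizes NCC between the DR image and a DRR rendered at the candidate pose. The average-based projection and the in-plane shift/rotation parameterization are simplifying assumptions; the paper's method optimizes a full 3D pose.

```python
# Hedged sketch: NCC-driven downhill-simplex 3D-to-2D registration.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift, rotate as nd_rotate

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def project(volume):
    """Average-based projection along one axis (one of the three
    projection schemes mentioned above)."""
    return volume.mean(axis=0)

def register(volume, dr_image, x0=(0.0, 0.0, 0.0)):
    """Find (tx, ty, angle) maximizing NCC between the transformed DRR
    and the DR image using the downhill simplex (Nelder-Mead)."""
    drr0 = project(volume)
    def cost(p):
        tx, ty, angle = p
        drr = nd_rotate(nd_shift(drr0, (ty, tx)), angle, reshape=False)
        return -ncc(drr, dr_image)          # minimize negative similarity
    return minimize(cost, x0, method='Nelder-Mead')
```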
Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin
2016-01-01
Segmentation of liver tumors from Computed Tomography (CT) and tumor burden analysis play an important role in the choice of therapeutic strategies for liver diseases and treatment monitoring. In this paper, a new segmentation method for liver tumors from contrast-enhanced CT imaging is proposed. As manual segmentation of tumors for liver treatment planning is both labor intensive and time-consuming, a highly accurate automatic tumor segmentation is desired. The proposed framework is fully automatic, requiring no user interaction. The proposed segmentation, evaluated on real-world clinical data from patients, is based on a hybrid method integrating cuckoo optimization and the fuzzy c-means algorithm with the random walkers algorithm. The accuracy of the proposed method was validated using a clinical liver dataset containing one of the highest numbers of tumors utilized for liver tumor segmentation, 127 tumors in total, with further validation of the results by a consultant radiologist. The proposed method achieved one of the highest accuracies reported in the literature for liver tumor segmentation, with a mean overlap error of 22.78% and Dice similarity coefficient of 0.75 on the 3Dircadb dataset and a mean overlap error of 15.61% and Dice similarity coefficient of 0.81 on the MIDAS dataset. The proposed method outperformed most other tumor segmentation methods reported in the literature, representing an overlap error improvement of 6% compared to one of the best performing automatic methods. The proposed framework provided consistently accurate results across the number of tumors and the variations in tumor contrast enhancement and tumor appearance, while the tumor burden was estimated with a mean error of 0.84% on the 3Dircadb dataset. PMID:27540353
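As a hedged sketch of one building block of this hybrid pipeline, the numpy fuzzy c-means loop below clusters voxel intensities into membership maps; the cuckoo-optimization initialization and the random-walker refinement stages are omitted, and c, m, and the iteration count are illustrative values.

```python
# Hedged sketch: fuzzy c-means on a 1D vector of voxel intensities.
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, seed=0):
    """x: (N,) intensities. Returns memberships U (N x c) and centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)           # rows sum to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12
        inv = 1.0 / d ** p
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
    return u, centers

memberships, centers = fcm(np.random.default_rng(1).random(500))
```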
Video-based respiration monitoring with automatic region of interest detection.
Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard
2016-01-01
Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained from adults under challenging conditions and also from neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.
NASA Astrophysics Data System (ADS)
Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.
2011-08-01
In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
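For readers unfamiliar with the reverse-mode chain rule that adjoint codes implement, here is a toy pure-Python tape (not ADJIFOR, and not FORTRAN) differentiating a lift-to-drag ratio; it illustrates why a single backward sweep yields the gradient with respect to all design variables at a cost independent of their number.

```python
# Toy reverse-mode automatic differentiation (illustrative only).
class Var:
    """One node of a computation tape: value, parent links, adjoint."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adjoint = value, list(parents), 0.0
    def __mul__(self, o):
        return Var(self.value * o.value, [(self, o.value), (o, self.value)])
    def __truediv__(self, o):
        return Var(self.value / o.value,
                   [(self, 1.0 / o.value), (o, -self.value / o.value ** 2)])

def backward(out):
    """Reverse sweep in topological order: propagate each node's adjoint
    to its parents, scaled by the local partial derivative."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p, _ in n.parents:
                topo(p)
            order.append(n)
    topo(out)
    out.adjoint = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.adjoint += node.adjoint * local

lift, drag = Var(1.2), Var(0.04)
ldr = lift / drag               # objective: lift-to-drag ratio
backward(ldr)
print(lift.adjoint, drag.adjoint)   # 25.0 (= 1/D) and -750.0 (= -L/D^2)
```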
A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data
NASA Astrophysics Data System (ADS)
XU, R.; Jia, G.
2012-12-01
Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as the chief causes of, and solutions to, climate, biogeochemistry, and hydrology processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential for eastern cities, which are characterized by smaller patches, greater fragility, and a lower fraction of non-natural surfaces within the urban landscape than in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and then a proposed approach is introduced, including the calculation of the new index and the improved logic rule. Following this, the existing automatic methods as well as the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately on cities of large, medium, and small scale in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the presence of the more complex eastern cities. Key words: Urban extraction; Automatic Method; Logic Rule; LANDSAT images; East Asia. [Figure: The Proposed Approach of Extraction of Urban Built-up Areas in Guangzhou, China]
Code of Federal Regulations, 2010 CFR
2010-10-01
... must be installed at the base station to prevent activation of the transmitter when signals of co... located within a 120 kilometer (75 mile) radius of the interconnected base station transmitter. A... more than one system, automatic monitoring equipment must be installed at the base station to prevent...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Popuri, Karteek; Cobzas, Dana; Esfandiari, Nina; Baracos, Vickie; Jägersand, Martin
2016-02-01
The proportions of muscle and fat tissues in the human body, referred to as body composition, are a vital measurement for cancer patients. Body composition has recently been linked to patient survival and the onset/recurrence of several types of cancer in numerous cancer research studies. This paper introduces a fully automatic framework for the segmentation of muscle and fat tissues from CT images to estimate body composition. We developed a novel finite element method (FEM) deformable model that incorporates a priori shape information via a statistical deformation model (SDM) within the template-based segmentation framework. The proposed method was validated on 1000 abdominal and 530 thoracic CT images, and we obtained very good segmentation results with Jaccard scores in excess of 90% for both the muscle and fat regions.
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, and their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic cases respectively, are presented to demonstrate the validity and performance of the UPSF code.
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
An EEG-based functional connectivity measure for automatic detection of alcohol use disorder.
Mumtaz, Wajid; Saad, Mohamad Naufal B Mohamad; Kamel, Nidal; Ali, Syed Saad Azhar; Malik, Aamir Saeed
2018-01-01
Abnormal alcohol consumption can cause toxicity and alter the human brain's structure and function, a condition termed alcohol use disorder (AUD). Unfortunately, the conventional screening methods for AUD patients are subjective and manual. Hence, objective methods are needed to perform automatic screening of AUD patients. Electroencephalographic (EEG) data have been utilized to study the differences in brain signals between alcoholics and healthy controls that could further be developed into an automatic screening tool for alcoholics. In this work, resting-state EEG-derived features were utilized as input data to the proposed feature selection and classification method. The aim was to perform automatic classification of AUD patients and healthy controls. The validation of the proposed method involved real EEG data acquired from 30 AUD patients and 30 age-matched healthy controls. The resting-state EEG-derived features, such as synchronization likelihood (SL), were computed over 19 scalp locations, resulting in 513 features. Furthermore, the features were rank-ordered to select the most discriminant ones using a rank-based feature selection method according to a criterion, i.e., receiver operating characteristics (ROC). Consequently, a reduced set of the most discriminant features was identified and utilized further during classification of AUD patients and healthy controls. In this study, three different classification models were used: Support Vector Machine (SVM), Naïve Bayesian (NB), and Logistic Regression (LR). The study resulted in SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95%, and f-measure = 0.97; LR classification accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6%, and f-measure = 0.90; NB classification accuracy = 93.6%, sensitivity = 100%, specificity = 87.9%, and f-measure = 0.95. The SL features could be utilized as objective markers to screen AUD patients and healthy controls. Copyright © 2017 Elsevier B.V. All rights reserved.
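A hedged scikit-learn sketch of the rank-then-classify pipeline described above follows; the synthetic data, the choice of 20 retained features, and the RBF kernel are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch: per-feature ROC AUC ranking, then SVM cross-validation.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 513))        # placeholder: 60 subjects x 513 SL features
y = np.repeat([0, 1], 30)             # 30 controls, 30 AUD patients

# Rank features by how well each one alone separates the two groups;
# AUC far from 0.5 in either direction means a discriminant feature.
auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
top = np.argsort(np.abs(auc - 0.5))[::-1][:20]

scores = cross_val_score(SVC(kernel='rbf'), X[:, top], y, cv=10)
print(scores.mean())
```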
Automatic Sea Bird Detection from High Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
Mader, S.; Grenzdörffer, G. J.
2016-06-01
Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently the major part of the processing, however, is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle these image sizes and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined usage of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.
NASA Astrophysics Data System (ADS)
Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele
2013-02-01
Prostate specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer-aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast enhanced MR images and diffusion weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC = 0.87) should be validated on a larger dataset, but they suggest that voxel-wise discrimination between benign and malignant tissues is feasible with good performance. This method can be of benefit to improve the diagnostic accuracy of the radiologist, reduce reader variability and speed up the reading time by automatically highlighting probable cancer-suspicious regions.
Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai
2009-01-01
A virtual instrument system for an ion analyzer, based on LabVIEW 8.0 and able to measure and analyze ion concentrations in solution, was developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can calibrate slope, temperature, and positioning automatically. When applied to determining the reaction rate constant by pX, it achieved live acquisition, real-time display, automatic processing of test data, generation of result reports, and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures and personal error of manual data processing, and improves the accuracy and repeatability of the experimental results.
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped onto organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method might produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Cross-Language Information Retrieval: An Analysis of Errors.
ERIC Educational Resources Information Center
Ruiz, Miguel E.; Srinivasan, Padmini
1998-01-01
Investigates an automatic method for Cross Language Information Retrieval (CLIR) that utilizes the multilingual Unified Medical Language System (UMLS) Metathesaurus to translate Spanish natural-language queries into English. Results indicate that for Spanish, the UMLS Metathesaurus-based CLIR method is at least equivalent to if not better than…
The use and generation of illustrative examples in computer-based instructional systems
NASA Technical Reports Server (NTRS)
Selig, William John; Johannes, James D.
1987-01-01
A method is proposed whereby the underlying domain knowledge is represented such that illustrative examples may be generated on demand. This method has the advantage that the generated example can follow changes in the domain in addition to allowing automatic customization of the example to the individual.
Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana
2016-05-01
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
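The patient-level agreement statistic used above is standard; a small sketch, assuming scikit-learn is available and the four ordinal CVD risk categories are coded 0-3 (the category values below are made up):

    # Linearly weighted Cohen's kappa between reference and automatic CVD risk
    # categories; "linear" weights penalize larger category jumps more.
    from sklearn.metrics import cohen_kappa_score

    reference = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]   # expert categories (illustrative)
    automatic = [0, 1, 2, 3, 1, 1, 0, 3, 2, 2]   # method output (illustrative)

    kappa = cohen_kappa_score(reference, automatic, weights="linear")
    print(f"linearly weighted kappa: {kappa:.2f}")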
Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.
2008-03-01
Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies using this method create a need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and CAD method to the clinical environment as a useful tool in assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site is designed as a novel clinical implementation approach for online, real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.
Automatic computation for optimum height planning of apartment buildings to improve solar access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong, Yoon-Bok; Kim, Yong-Yee; Seok, Ho-Tae
2011-01-15
The objective of this study is to suggest a mathematical model and an optimal algorithm for determining the height of apartment buildings to satisfy the solar rights of survey buildings or survey housing units. The objective is also to develop an automatic computation model for the optimum height of apartment buildings and then to clarify its performance and expected effects. To accomplish the objective of this study, the following procedures were followed: (1) The necessity of the height planning of obstruction buildings to satisfy the solar rights of survey buildings or survey housing units is demonstrated by analyzing, through a literature review, the recent trend of disputes related to solar rights and by examining the social requirements in terms of solar rights. In addition, the necessity of the automatic computation system for height planning of apartment buildings is demonstrated and a suitable analysis method for this system is chosen by investigating the characteristics of analysis methods for solar rights assessment. (2) A case study on the process of height planning of apartment buildings is briefly described and the problems occurring in this process are then examined carefully. (3) To develop an automatic computation model for height planning of apartment buildings, geometrical elements forming apartment buildings are defined by analyzing the geometrical characteristics of apartment buildings. In addition, design factors and regulations required in height planning of apartment buildings are investigated. Based on this knowledge, the methodology and mathematical algorithm to adjust the height of apartment buildings by automatic computation are suggested, and probable problems and the ways to resolve them are discussed. Finally, the methodology and algorithm for the optimization are suggested. (4) Based on the suggested methodology and mathematical algorithm, the automatic computation model for the optimum height of apartment buildings is developed and the developed system is verified through the application of some cases. The effects of the suggested model are then demonstrated quantitatively and qualitatively. (author)
Automatic inference of indexing rules for MEDLINE
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-01-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI. PMID:19025687
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
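A sketch of the two hand-crafted input channels the FFDL model is said to fuse, contour via a Sobel gradient and texture via a Gabor kernel; the image is a random placeholder and the Gabor parameters are assumptions, not the authors' settings.

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(128, 128).astype(np.float32)   # stand-in for an X-ray ROI

    # Contour channel: Sobel gradient magnitude.
    contour = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

    # Texture channel: response to a single Gabor kernel at orientation theta.
    def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0):
        half = size // 2
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        xr = xx * np.cos(theta) + yy * np.sin(theta)       # rotated coordinate
        return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    texture = ndimage.convolve(img, gabor_kernel(theta=np.pi / 4))
    channels = np.stack([contour, texture])   # 2-channel input for a CNN
    print(channels.shape)                     # (2, 128, 128)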
Automatic processing of spoken dialogue in the home hemodialysis domain.
Lacson, Ronilda; Barzilay, Regina
2005-01-01
Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and the results were compared to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable deviation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
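A sketch of the clustering idea, assuming a 1-D vibration signal: time indices of high-energy samples are clustered with DBSCAN so that temporal gaps separate swallow events. The threshold and DBSCAN parameters are illustrative, not the paper's values.

    import numpy as np
    from sklearn.cluster import DBSCAN

    fs = 1000                                 # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    signal = 0.05 * np.random.randn(t.size)
    signal[3000:3500] += np.sin(2 * np.pi * 60 * t[:500])   # synthetic "swallow"

    active = np.flatnonzero(np.abs(signal) > 3 * signal.std())
    labels = DBSCAN(eps=50, min_samples=20).fit_predict(active.reshape(-1, 1))

    for k in sorted(set(labels) - {-1}):      # -1 marks unclustered noise points
        idx = active[labels == k]
        print(f"event {k}: {idx.min() / fs:.2f}-{idx.max() / fs:.2f} s")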
FAMA: An automatic code for stellar parameter and abundance determination
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2013-10-01
Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars needs the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely-used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented describing its approach to derive atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependency on the initial parameters and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38
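The excitation-equilibrium search at the heart of such codes can be sketched in a few lines; here the call to MOOG is replaced by a toy abundance model whose slope vanishes at 5777 K, so everything below illustrates only the control loop, not FAMA itself.

    import numpy as np

    chi = np.random.uniform(0.5, 5.0, 80)      # Fe I excitation potentials (eV)

    def abundances(teff):
        # Stand-in for a MOOG call: slope crosses zero at Teff = 5777 K.
        return 7.5 + 1e-4 * (5777 - teff) * chi + 0.005 * np.random.randn(chi.size)

    teff, step = 5000.0, 200.0
    for _ in range(60):
        slope = np.polyfit(chi, abundances(teff), 1)[0]
        if abs(slope) < 1e-3:                  # excitation equilibrium reached
            break
        teff += step if slope > 0 else -step   # positive slope -> star is hotter
        step = max(0.7 * step, 10.0)           # shrink the step while homing in
    print(f"Teff at excitation equilibrium: ~{teff:.0f} K")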
Information retrieval and terminology extraction in online resources for patients with diabetes.
Seljan, Sanja; Baretić, Maja; Kucis, Vlasta
2014-06-01
Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in a foreign and/or native language) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use, ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts, and iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a professional medical person, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall, precision and f-measure.
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
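A minimal sketch of the feedback idea: one segmentation parameter (a threshold) is tuned by feeding a quality measure, here Dice overlap against a reference mask, back into an optimizer. The quality criterion and optimizer are assumptions for illustration, not the authors' framework.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    truth = np.zeros((64, 64), bool)
    truth[20:44, 20:44] = True
    image = truth + 0.3 * rng.standard_normal(truth.shape)   # noisy test image

    def dice(a, b):
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def cost(threshold):                     # feedback signal: 1 - quality
        return 1 - dice(image > threshold, truth)

    res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
    print(f"adapted threshold={res.x:.2f}, Dice={1 - res.fun:.3f}")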
Automatic 3D Moment tensor inversions for southern California earthquakes
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.
2008-12-01
We present a new source mechanism (moment-tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select the initial solutions from a few available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between the data and synthetics using the automatic windowing package FLEXWIN (Maggi et al., 2008). We compute the source Fréchet derivatives of moment-tensor elements and depth for a recent 3D southern California velocity model inverted from finite-frequency event kernels calculated by adjoint methods and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al., 2008). We then invert for the source mechanisms and event depths based upon the techniques introduced by Liu et al. (2005). We assess the quality of this new catalog, as well as the other existing ones, by computing the 3D synthetics for the updated 3D southern California model. We also plan to implement the moment-tensor inversion methods to automatically determine the source mechanisms for earthquakes with Mw ≥ 3.5 in southern California.
Method for automatic measurement of second language speaking proficiency
NASA Astrophysics Data System (ADS)
Bernstein, Jared; Balogh, Jennifer
2005-04-01
Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically-scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines
Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
ABI Base Recall: Automatic Correction and Ends Trimming of DNA Sequences.
Elyazghi, Zakaria; Yazouli, Loubna El; Sadki, Khalid; Radouani, Fouzia
2017-12-01
Automated DNA sequencers produce chromatogram files in ABI format. When viewing chromatograms, some ambiguities appear at various sites along the DNA sequences, because the program implemented in the sequencing machine and used to call bases cannot always precisely determine the right nucleotide, especially when it is represented by either a broad peak or a set of overlapping peaks. In such cases, a letter other than A, C, G, or T is recorded, most commonly N. Thus, DNA sequencing chromatograms need manual examination: checking for mis-calls and truncating the sequence when errors become too frequent. The purpose of this paper is to develop a program allowing the automatic correction of these ambiguities. The application is a Web-based program powered by Shiny that runs on the R platform for easy use. As part of the interface, we added an automatic end-trimming option, alignment against reference sequences, and BLAST. To develop and test our tool, we collected several bacterial DNA sequences from different laboratories within Institut Pasteur du Maroc, performed both manual and automatic correction, and compared the two methods. As a result, we note that our program, ABI Base Recall, accomplishes correction with high accuracy. Indeed, it increases the rate of identity and coverage and minimizes the number of mismatches and gaps; hence it provides a solution to sequencing ambiguities and saves biologists' time and labor.
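A sketch of the end-trimming idea, not the tool's actual algorithm: slide a window along the read, keep the span between the first and last windows where ambiguous calls stay rare, then strip leftover Ns at the edges. Window size and the ambiguity limit are assumed values.

    def trim_ends(seq, window=20, max_ambiguous=3):
        bad = [c not in "ACGT" for c in seq]
        ok = [i for i in range(len(seq) - window + 1)
              if sum(bad[i:i + window]) <= max_ambiguous]
        if not ok:
            return ""
        return seq[ok[0]:ok[-1] + window].strip("N")

    read = "NNNNACGTNACGTACGTACGTACGTACGTNNACGTACGTNNNNNN"
    print(trim_ends(read))   # isolated interior N's survive; noisy ends are clipped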
Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.
2011-01-01
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
SVM-based automatic diagnosis method for keratoconus
NASA Astrophysics Data System (ADS)
Gao, Yuhong; Wu, Qiang; Li, Jing; Sun, Jiande; Wan, Wenbo
2017-06-01
Keratoconus is a progressive corneal disease that can lead to serious myopia and astigmatism, or even to corneal transplantation if it worsens. Early detection of keratoconus is extremely important for monitoring and controlling the condition. In this paper, we propose an automatic diagnosis algorithm for keratoconus to discriminate normal eyes from keratoconus ones. We select the parameters obtained by Oculyzer as the cornea features, which characterize the cornea both directly and indirectly. In our experiment, 289 normal cases and 128 keratoconus cases are divided into training and test sets, respectively. Far better than the other kernels, the linear SVM kernel achieves a sensitivity of 94.94% and a specificity of 97.87% with all parameters used to train the model. In single-parameter experiments with the linear kernel, elevation (92.03% sensitivity, 98.61% specificity) and thickness (97.28% sensitivity, 97.82% specificity) show good classification ability. Combining the elevation and thickness of the cornea, the proposed method reaches 97.43% sensitivity and 99.19% specificity. The experiments demonstrate that the proposed automatic diagnosis method is feasible and reliable.
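A sketch of that evaluation on synthetic stand-ins for the Oculyzer parameters (two features, e.g. elevation and thickness); the class sizes mirror the abstract, everything else is made up.

    import numpy as np
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    normal = rng.normal([0.0, 0.0], 1.0, size=(289, 2))
    kerato = rng.normal([3.0, 3.0], 1.0, size=(128, 2))
    X = np.vstack([normal, kerato])
    y = np.array([0] * 289 + [1] * 128)       # 1 = keratoconus

    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
    pred = SVC(kernel="linear").fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
    print(f"sensitivity={tp / (tp + fn):.2%}, specificity={tn / (tn + fp):.2%}")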
Automatic rebuilding and optimization of crystallographic structures in the Protein Data Bank
Joosten, Robbie P.; Joosten, Krista; Cohen, Serge X.; Vriend, Gert; Perrakis, Anastassis
2011-01-01
Motivation: Macromolecular crystal structures in the Protein Data Bank (PDB) are a key source of structural insight into biological processes. These structures, some >30 years old, were constructed with methods of their era. With PDB_REDO, we aim to automatically optimize these structures to better fit their corresponding experimental data, passing the benefits of new methods in crystallography on to a wide base of non-crystallographer structure users. Results: We developed new algorithms to allow automatic rebuilding and remodeling of main chain peptide bonds and side chains in crystallographic electron density maps, and incorporated these and further enhancements in the PDB_REDO procedure. Applying the updated PDB_REDO to the oldest, but also to some of the newest models in the PDB, corrects existing modeling errors and brings these models to a higher quality, as judged by standard validation methods. Availability and Implementation: The PDB_REDO database and links to all software are available at http://www.cmbi.ru.nl/pdb_redo. Contact: r.joosten@nki.nl; a.perrakis@nki.nl Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22034521
Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z
2014-01-01
Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar
2014-01-01
Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
NASA Astrophysics Data System (ADS)
Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.
2016-05-01
Identification of impact craters is a primary requirement in studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of various planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.
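The shape criterion can be illustrated with a circularity test (4·pi·area/perimeter²) on labeled segments; the mask below is synthetic and the 0.85 cutoff is an assumed, not published, value.

    import numpy as np
    from skimage import measure

    mask = np.zeros((100, 100), bool)
    rr, cc = np.ogrid[:100, :100]
    mask[(rr - 30)**2 + (cc - 30)**2 < 15**2] = True   # circular "crater"
    mask[60:80, 45:90] = True                          # elongated non-crater

    for region in measure.regionprops(measure.label(mask)):
        circularity = 4 * np.pi * region.area / region.perimeter**2
        if circularity > 0.85:                         # near-circular segments only
            print(f"crater candidate at {region.centroid}, circ={circularity:.2f}")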
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
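Step (2) can be sketched with a synthetic vegetation mask: after rectification, each seedling contributes a blob, and peaks in the column-wise projection give the count. The peak-detection parameters are illustrative.

    import numpy as np
    from scipy.signal import find_peaks

    mask = np.zeros((60, 300), dtype=np.uint8)
    for center in (30, 70, 110, 150, 190, 230, 270):   # 7 seedlings along the row
        mask[20:50, center - 6:center + 6] = 1

    histogram = mask.sum(axis=0)              # project the mask onto the row axis
    peaks, _ = find_peaks(histogram, height=10, distance=15)
    print(f"estimated plant count: {len(peaks)}")   # -> 7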
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera
Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-01-01
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
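A sketch of the representative-fingerprint step, assuming the "AP-Cluster" method refers to affinity-propagation clustering (scikit-learn's AffinityPropagation); the RSS values are simulated.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(5)
    # 200 crowdsourced fingerprints over 6 APs (RSS in dBm), 3 true positions.
    centers = rng.uniform(-90, -40, size=(3, 6))
    rss = np.repeat(centers, 67, axis=0)[:200] + rng.normal(0, 2, (200, 6))

    ap = AffinityPropagation(random_state=0).fit(rss)
    representatives = ap.cluster_centers_     # exemplar (representative) fingerprints
    print(representatives.shape)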
Measuring patterns in team interaction sequences using a discrete recurrence approach.
Gorman, Jamie C; Cooke, Nancy J; Amazeen, Polemnia G; Fouse, Shannon
2012-08-01
Recurrence-based measures of communication determinism and pattern information are described and validated using previously collected team interaction data. Team coordination dynamics has revealed that "mixing" team membership can lead to flexible interaction processes, but keeping a team "intact" can lead to rigid interaction processes. We hypothesized that communication of intact teams would have greater determinism and higher pattern information compared to that of mixed teams. Determinism and pattern information were measured from three-person Uninhabited Air Vehicle team communication sequences over a series of 40-minute missions. Because team members communicated using push-to-talk buttons, communication sequences were automatically generated during each mission. The Composition x Mission determinism effect was significant. Intact teams' determinism increased over missions, whereas mixed teams' determinism did not change. Intact teams had significantly higher maximum pattern information than mixed teams. Results from these new communication analysis methods converge with content-based methods and support our hypotheses. Because they are not content based, and because they are automatic and fast, these new methods may be amenable to real-time communication pattern analysis.
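Determinism of a symbol sequence (for example, a who-spoke-when communication series) can be sketched from a discrete recurrence plot as the fraction of recurrent points falling on diagonal lines of length >= 2. This is a generic illustration of the measure, not the authors' exact implementation.

    import numpy as np

    def determinism(symbols, min_line=2):
        s = np.asarray(symbols)
        rec = s[:, None] == s[None, :]            # discrete recurrence matrix
        total = in_lines = 0
        for k in range(1, len(s)):                # scan the upper off-diagonals
            run = 0
            for v in np.concatenate([np.diagonal(rec, offset=k), [False]]):
                if v:
                    run += 1
                    continue
                total += run
                if run >= min_line:
                    in_lines += run               # points lying on a diagonal line
                run = 0
        return in_lines / total if total else 0.0

    patterned = [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]
    shuffled = [1, 3, 2, 2, 1, 3, 3, 1, 2, 2, 3, 1]
    print(determinism(patterned), determinism(shuffled))   # rigid vs. flexible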
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
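The frequency-domain part of the feature extractor can be sketched with PyWavelets: decompose one beat with a discrete wavelet transform and keep subband energies as compact features. The wavelet ("db4"), decomposition level, and toy beat are assumptions, not the paper's settings.

    import numpy as np
    import pywt

    fs = 360                                   # MIT-BIH sampling rate
    t = np.arange(0, 0.7, 1 / fs)
    beat = np.exp(-200 * (t - 0.35) ** 2)      # toy beat with an R-like bump
    beat += 0.05 * np.random.randn(t.size)

    coeffs = pywt.wavedec(beat, "db4", level=4)              # [A4, D4, D3, D2, D1]
    features = np.array([np.sum(c ** 2) for c in coeffs])    # subband energies
    print(features.round(3))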
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-01-01
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve the edges of the image in segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is computed by discretizing the integral. An optimal threshold selection method that maximizes the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
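A worked sketch of the idea: score each candidate threshold by the mean image-gradient magnitude along the resulting segmentation boundary and keep the maximizer. The paper's exact energy functional is not reproduced here.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    img = ndimage.gaussian_filter(img, 2) + 0.05 * rng.standard_normal(img.shape)

    grad = np.hypot(ndimage.sobel(img, 0), ndimage.sobel(img, 1))

    def edge_energy(threshold):
        mask = img > threshold
        boundary = mask ^ ndimage.binary_erosion(mask)   # inner boundary pixels
        return grad[boundary].mean() if boundary.any() else 0.0

    candidates = np.linspace(img.min() + 0.01, img.max() - 0.01, 100)
    best = max(candidates, key=edge_energy)
    print(f"edge-preserving threshold: {best:.2f}")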
Species distribution modeling based on the automated identification of citizen observations.
Botella, Christophe; Joly, Alexis; Bonnet, Pierre; Monestiez, Pascal; Munoz, François
2018-02-01
A species distribution model computed with automatically identified plant observations was developed and evaluated to contribute to future ecological studies. We used deep learning techniques to automatically identify opportunistic plant observations made by citizens through a popular mobile application. We compared species distribution modeling of invasive alien plants based on these data to inventories made by experts. The trained models have a reasonable predictive effectiveness for some species, but they are biased by the massive presence of cultivated specimens. The method proposed here allows for fine-grained and regular monitoring of some species of interest based on opportunistic observations. More in-depth investigation of the typology of the observations and the sampling bias should help improve the approach in the future.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, image segmentation provides a detailed structural description for quantitative visualization of therapeutic light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM), from 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. We also introduced parallel computing. These approaches greatly reduced the processing time compared with manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
Automatic characterization of sleep need dissipation dynamics using a single EEG signal.
Garcia-Molina, Gary; Bellesi, Michele; Riedner, Brady; Pastoor, Sander; Pfundtner, Stefan; Tononi, Giulio
2015-01-01
In the two-process model of sleep regulation, slow-wave activity (SWA, i.e. the EEG power in the 0.5-4 Hz frequency band) is considered a direct indicator of sleep need. SWA builds up during non-rapid eye movement (NREM) sleep, declines before the onset of rapid eye movement (REM) sleep, remains low during REM, and its level of increase in successive NREM episodes gets progressively lower. Sleep need dissipates with a speed that is proportional to SWA and can be characterized in terms of the initial sleep need and the decay rate. The goal in this paper is to automatically characterize sleep need from a single EEG signal acquired at a frontal location. To achieve this, a highly specific and reasonably sensitive NREM detection algorithm is proposed that leverages the concept of a single-class kernel-based classifier. Using automatic NREM detection, we propose a method to estimate the decay rate and the initial sleep need. This method was tested on experimental data from 8 subjects who recorded EEG during three nights at home. We found that on average the estimates of the decay rate and the initial sleep need have higher values when automatic NREM detection is used as compared to manual NREM annotation. However, the average variability of these estimates across multiple nights of the same subject was lower when the automatic NREM detection classifier was used. While this method slightly overestimates the sleep need parameters, the reduced variability across subjects makes it more effective for within-subject statistical comparisons of a given sleep intervention.
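Fitting the two sleep-need parameters from episode-level SWA can be sketched with a simple exponential model S(t) = S0·exp(-t/tau); the SWA values below are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    hours = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])      # NREM episode times
    swa = np.array([420., 300., 235., 170., 130., 105., 80.])  # SWA, toy values

    def model(t, s0, tau):
        return s0 * np.exp(-t / tau)            # sleep-need dissipation

    (s0, tau), _ = curve_fit(model, hours, swa, p0=(400.0, 3.0))
    print(f"initial sleep need S0={s0:.0f}, decay constant tau={tau:.1f} h")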
Automatic Query Formulations in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1983-01-01
Introduces methods designed to reduce role of search intermediaries by generating Boolean search formulations automatically using term frequency considerations from natural language statements provided by system patrons. Experimental results are supplied and methods are described for applying automatic query formulation process in practice.…
Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study
NASA Astrophysics Data System (ADS)
Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.
2012-12-01
Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on the comparison of short-term and long-term average ratios (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that consecutively uses kurtosis-derived characteristic functions (CF) and eigenvalue decompositions on 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes, and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth, and dip) calculated from eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR); a pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months, from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7, making a total of 1601 picks: 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best-quality picks (quality index 0: 64% of the P-picks), the difference is -0.01 ± 0.07 s. For S-picks, the difference is -0.09 ± 0.26 s overall and -0.06 ± 0.14 s for good-quality picks (index 1: 26% of the S-picks). Residuals showed no dependence on event magnitude. The method independently picks S and P waves with good precision and only a few parameters to adjust, even for relatively small earthquakes (mostly ML ≤ 2). The automatic procedure was then applied to the whole dataset. Earthquake locations obtained by inverting onset arrivals revealed clustering and lineations that helped us constrain the subduction plane. These key parameters will be integrated into a 3D finite-difference model and compared with GPS data in order to better understand the complex geodynamic behavior of the Vanuatu region.
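The sketch below illustrates the core kurtosis characteristic function idea described above: over a sliding window, kurtosis jumps when the signal departs from Gaussian noise, and the steepest CF increase marks a candidate onset. The window length, synthetic trace, and gradient-based pick rule are assumptions for illustration.

```python
# Sliding-window kurtosis CF on a synthetic trace with one impulsive onset.
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace: np.ndarray, win: int) -> np.ndarray:
    cf = np.zeros(trace.size)
    for i in range(win, trace.size):
        cf[i] = kurtosis(trace[i - win:i])  # ~0 for pure Gaussian noise
    return cf

rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 2000)
trace[1200:1260] += 8 * np.sin(np.linspace(0, 12 * np.pi, 60))  # onset

cf = kurtosis_cf(trace, win=200)
pick = int(np.argmax(np.gradient(cf)))  # steepest CF rise ~ phase onset
print("picked sample:", pick)
```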
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses the automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for the automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input space partitioning approach is investigated, based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace. Methods for the generation of fitness measures, the generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine (SVM) based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
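A hedged sketch of the weighted-fusion variant mentioned above: one SVM per input subspace, with per-subspace fitness scores (stand-in validation accuracies here) weighting the fused class probabilities. The data, partitioning, and weights are synthetic assumptions, not the paper's procedure.

```python
# Fitness-weighted fusion of per-subspace SVMs on synthetic data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + X[:, 6] > 0).astype(int)       # synthetic two-class labels

subspaces = [slice(0, 4), slice(4, 8), slice(8, 12)]
clfs = [SVC(probability=True).fit(X[:150, s], y[:150]) for s in subspaces]
# held-out accuracy as a stand-in "fitness" per subspace
fitness = np.array([c.score(X[150:, s], y[150:]) for c, s in zip(clfs, subspaces)])

def fuse(x):
    probs = np.array([c.predict_proba(x[None, s])[0]
                      for c, s in zip(clfs, subspaces)])
    return int(np.argmax(fitness @ probs))    # fitness-weighted class scores

print("fused decision:", fuse(X[160]))
```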
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts, and tube models representing strata. Subsequently, a level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, was developed to automatically convert borehole data into KML documents. A case study using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential use in providing geological information services to society.
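To make the conversion idea concrete, the minimal sketch below emits a KML Placemark per borehole drilling location, the lowest-LOD representation described above. The element layout and the synthetic borehole records are assumptions; they are not Borehole2KML's actual schema.

```python
# Emit a KML document with one Point placemark per borehole location.
KML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>{placemarks}</Document></kml>"""

PLACEMARK = """<Placemark><name>{name}</name>
<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"""

boreholes = [("BH-01", 121.47, 31.23), ("BH-02", 121.50, 31.25)]  # synthetic
doc = KML_DOC.format(placemarks="".join(
    PLACEMARK.format(name=n, lon=lon, lat=lat) for n, lon, lat in boreholes))

with open("boreholes.kml", "w", encoding="utf-8") as f:
    f.write(doc)   # loadable in a virtual globe such as Google Earth
```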
Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu
2017-06-30
This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Several classification methods have been proposed to detect galaxy types within an image. However, in some situations the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to the query image. Therefore, this paper proposes an image-retrieval method to detect the type of galaxy within an image and return the most similar images. The proposed method consists of two stages: in the first stage, a set of features is extracted based on shape, color and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5000 galaxy images of different types (edge-on spiral, spiral, elliptical and irregular). We demonstrate that our proposed approach has better performance compared with the particle swarm optimization (PSO) and genetic algorithm (GA) methods.
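A hedged sketch of the second (retrieval) stage only: rank catalogue galaxies by distance to the query's descriptor vector over the selected features. The feature values are synthetic, and a fixed random mask stands in for the paper's binary sine cosine feature selection.

```python
# Nearest-neighbour retrieval over a masked descriptor space.
import numpy as np

rng = np.random.default_rng(2)
catalogue = rng.normal(size=(5000, 64))   # shape/color/texture descriptors
query = rng.normal(size=64)
mask = rng.random(64) > 0.5               # stand-in for selected features

dists = np.linalg.norm(catalogue[:, mask] - query[mask], axis=1)
top5 = np.argsort(dists)[:5]              # most similar catalogue images
print("nearest catalogue indices:", top5)
```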
Cyber Surveillance for Flood Disasters
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
Regional heavy rainfall is usually caused by the influence of extreme weather conditions. Instant heavy rainfall often results in the flooding of rivers and neighboring low-lying areas, which is responsible for a large number of casualties and considerable property loss. Existing precipitation forecast systems mostly focus on the analysis and forecasting of large-scale areas but do not provide precise, instant, automatic monitoring and alert feedback for individual river areas and sections. Therefore, in this paper, we propose an easy method to automatically monitor flood objects in a specific area, based on currently widespread remote cyber surveillance systems and image processing methods, in order to obtain instant feedback on flooding and waterlogging events. The intrusion detection mode of these surveillance systems is used in this study, wherein a flood is considered a possible invasion object. Through the detection and verification of flood objects, automatic flood risk-level monitoring of specific individual river segments, as well as automatic urban inundation detection, becomes possible. The proposed method can better meet the practical needs of disaster prevention than large-area forecasting. It also has several other advantages, such as flexibility in location selection, no requirement for a standard water-level ruler, and a relatively large field of view, when compared with traditional water-level measurements using video screens. The results can offer prompt reference for appropriate disaster warning actions in small areas, making them more accurate and effective. PMID:25621609
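The "flood as intruding object" idea can be sketched with standard background subtraction: a water region entering the scene shows up as a large foreground mask. The video source, subtractor choice, and alert threshold below are illustrative assumptions, not the paper's pipeline.

```python
# Background subtraction on a surveillance feed; alert when the
# "intruding" foreground region grows beyond a fraction of the frame.
import cv2

cap = cv2.VideoCapture("river_camera.mp4")   # hypothetical camera feed
subtractor = cv2.createBackgroundSubtractorMOG2(history=500)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # foreground = candidate water
    if cv2.countNonZero(mask) > 0.3 * mask.size:   # threshold is assumed
        print("possible flooding detected")
cap.release()
```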
Image-guided automatic triggering of a fractional CO2 laser in aesthetic procedures.
Wilczyński, Sławomir; Koprowski, Robert; Wiernek, Barbara K; Błońska-Fajfrowska, Barbara
2016-09-01
Laser procedures in dermatology and aesthetic medicine require manual laser triggering, which leads to pulse overlap and side effects. Automatic laser triggering based on image analysis can ensure a precise fit of each successive dose of radiation. A fractional CO2 laser was used in the study, and 500 images of human skin from healthy subjects were acquired. Automatic triggering was initiated by an application together with a camera that tracks and analyses the skin in visible light. The tracking algorithm uses image analysis methods to overlap images: characteristic points located in the analysed adjacent areas become the vertices of graphs, between which an isomorphism is sought. When the correspondence of the graphs is found, the neighbouring parts of the image can be overlapped. Owing to this automatic image-fitting method, the proposed method of laser triggering achieves 100% repeatability. To meet this requirement, at least 13 graph vertices must be obtained from the image; for this number of vertices, the analysis time for a single image is less than 0.5 s. The proposed method, applied in practice, may help reduce the number of side effects during dermatological laser procedures resulting from laser pulse overlap. In addition, it reduces treatment time and makes it possible to propose new treatment techniques through controlled, precise laser pulse overlapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
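The image-correspondence step can be sketched with off-the-shelf keypoint matching; the paper matches graphs built from characteristic points, and the ORB keypoints below are only a stand-in for those vertices. File names and the triggering condition are hypothetical.

```python
# Match characteristic points between consecutive skin frames; only
# trigger the next pulse once enough correspondences are found.
import cv2

img_prev = cv2.imread("skin_prev.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
img_curr = cv2.imread("skin_curr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img_prev, None)
kp2, des2 = orb.detectAndCompute(img_curr, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
# the paper reports >= 13 corresponding points for 100% repeatability
if len(matches) >= 13:
    print("frames aligned; safe to trigger next pulse")
```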
Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.
Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse
2013-05-01
Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in clinical evaluation trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method for multiphoton images, which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows the epidermis and the superficial dermis to be separated in 3D. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goal of epidermis-dermis separation and allows quantitative measurements inside the different skin compartments with sufficient relevance. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to building a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.
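A minimal sketch of the watershed step only (the paper combines watershed with graph cuts): flood a 3D gradient image from seed markers to split a stack into superficial and deep compartments. The synthetic volume and marker placement are assumptions for illustration.

```python
# 3D marker-based watershed on a synthetic multiphoton-like stack.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from skimage.segmentation import watershed

volume = np.random.rand(32, 64, 64)            # stand-in image stack (z, y, x)
gradient = gaussian_gradient_magnitude(volume, sigma=1)

markers = np.zeros_like(volume, dtype=int)
markers[:4] = 1                                # seed: superficial layer
markers[-4:] = 2                               # seed: deep layer

labels = watershed(gradient, markers)          # 3D region labels (1 or 2)
print(np.unique(labels))
```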
Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches
Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan
2013-01-01
Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992
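Of the four evaluated detectors, local maxima search is simple enough to sketch directly: flag points that dominate their neighborhood in the 2D MCC/IMS intensity map. The neighborhood size, intensity cutoff, and synthetic heat map are assumptions, not the evaluated tools' settings.

```python
# Local maxima search on a synthetic 2D IMS intensity map.
import numpy as np
from scipy.ndimage import maximum_filter

spectrum = np.random.rand(100, 100)            # synthetic intensity map
local_max = spectrum == maximum_filter(spectrum, size=5)
peaks = np.argwhere(local_max & (spectrum > 0.99))  # cutoff is assumed
print(f"{len(peaks)} candidate peaks")
```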
Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.
Carrascal, A; Manrique, D; Ríos, J; Rossi, C
2003-01-01
This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted on both artificially generated and real-world problems confirm the effectiveness of the proposed approach.
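The genetic-algorithm-plus-local-search combination described above is the memetic pattern; the toy sketch below shows that pattern on a bitstring objective. The chromosome encoding, objective, and operators are illustrative stand-ins, not the paper's indirect neuro-fuzzy encoding.

```python
# Memetic sketch: GA whose offspring get a hill-climbing pass each generation.
import random

def fitness(bits):                       # stand-in objective: count of ones
    return sum(bits)

def local_search(bits):
    # single-pass bit-flip hill climbing, keeping improving moves
    for i in range(len(bits)):
        flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(10):                      # generations
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    children = []
    for _ in range(len(pop)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 20)    # one-point crossover
        children.append(local_search(a[:cut] + b[cut:]))
    pop = children
print("best fitness:", fitness(max(pop, key=fitness)))
```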