Science.gov

Sample records for automatic model based

  1. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239
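
    The record above contrasts the two algorithm families only at a conceptual level; the sketch below is not from the study. It merely illustrates, on a hypothetical two-state task, how a model-free learner caches action values from sampled experience while a model-based learner recomputes them by planning over a learnt transition and reward model.

```python
# Illustrative sketch only: contrasts a model-free Q update with a
# model-based one-step lookahead on a hypothetical two-state task.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
# Hypothetical learnt world model: T[s, a, s'] transition probs, R[s, a] rewards.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma, alpha = 0.9, 0.1

# Model-free: cache Q-values and update them from sampled experience.
Q_mf = np.zeros((n_states, n_actions))
s = 0
for _ in range(1000):
    a = rng.integers(n_actions)
    s_next = rng.choice(n_states, p=T[s, a])
    target = R[s, a] + gamma * Q_mf[s_next].max()
    Q_mf[s, a] += alpha * (target - Q_mf[s, a])   # habitual, cached update
    s = s_next

# Model-based: recompute values on demand by planning over the learnt model.
V = np.zeros(n_states)
for _ in range(200):                               # value iteration
    V = (R + gamma * T @ V).max(axis=1)
Q_mb = R + gamma * T @ V                           # flexible, derived from the model

print("model-free Q:\n", Q_mf)
print("model-based Q:\n", Q_mb)
```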

  2. Model-Based Reasoning in Humans Becomes Automatic with Training

    PubMed Central

    Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.

    2015-01-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239

  3. Automatic sensor placement for model-based robot vision.

    PubMed

    Chen, S Y; Li, Y F

    2004-02-01

    This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage viewpoints. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and distribution, viewpoint evolution, shortest path computation, scene simulation from a specific viewpoint, and parameter adjustment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
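
    The record contains no code. As a rough sketch under assumed inputs, ordering the planned viewpoints into a short tour can be illustrated with a greedy nearest-neighbour heuristic; the paper itself evolves the placements with a genetic algorithm and orders them with the Christofides algorithm, both of which are replaced here by a simpler stand-in.

```python
# Hedged sketch: order a set of candidate sensor viewpoints into a short tour.
# The original work uses a genetic algorithm plus Christofides; a greedy
# nearest-neighbour heuristic stands in for both here.
import numpy as np

def greedy_tour(viewpoints: np.ndarray) -> list[int]:
    """Return viewpoint indices in visiting order (nearest-neighbour heuristic)."""
    remaining = list(range(len(viewpoints)))
    order = [remaining.pop(0)]                 # start at the first viewpoint
    while remaining:
        last = viewpoints[order[-1]]
        dists = [np.linalg.norm(viewpoints[i] - last) for i in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vp = rng.uniform(-1.0, 1.0, size=(8, 3))   # hypothetical 3D sensor poses
    tour = greedy_tour(vp)
    length = sum(np.linalg.norm(vp[a] - vp[b]) for a, b in zip(tour, tour[1:]))
    print("visit order:", tour, "tour length:", round(length, 3))
```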

  4. Automatic code generation from the OMT-based dynamic model

    SciTech Connect

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
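
    No code accompanies the record; the following is a hypothetical, much-simplified illustration of the mapping it describes (state to class, event to operation), generating Python source from a small transition table rather than Java from an OMT dynamic model.

```python
# Hypothetical sketch: generate one class per state from a transition table,
# with one method per event, mirroring the "state -> class, event -> operation"
# mapping described above (the original system emits Java; this emits Python).
TRANSITIONS = {                      # (state, event) -> next state
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "start"): "Running",
    ("Running", "stop"): "Idle",
}

def generate_code(transitions: dict) -> str:
    states = sorted({s for s, _ in transitions} | set(transitions.values()))
    lines = []
    for state in states:
        lines.append(f"class {state}:")
        events = {e: nxt for (s, e), nxt in transitions.items() if s == state}
        if not events:
            lines.append("    pass")
        for event, nxt in events.items():
            lines.append(f"    def {event}(self):")
            lines.append(f"        return {nxt}()")
        lines.append("")
    return "\n".join(lines)

source = generate_code(TRANSITIONS)
print(source)                         # inspect the generated classes
namespace: dict = {}
exec(source, namespace)               # the generated code is itself executable
state = namespace["Idle"]().start()   # Idle --start--> Running
print(type(state).__name__)           # -> Running
```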

  5. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and their steadily growing quality and quantity further increase this demand. The quality evaluation of these 3D models is a relevant issue both from the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction; their validity from an evaluation point of view is also assessed.
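
    The six quality measures themselves are not reproduced in the record. As a minimal sketch under assumed definitions, completeness and correctness of a reconstruction are commonly computed from counts of matched, missed, and spurious model elements, as below.

```python
# Minimal sketch (assumed definitions): completeness and correctness of a
# reconstruction from counts of true-positive, false-negative and
# false-positive elements, as commonly used in building-model evaluation.
def completeness(tp: int, fn: int) -> float:
    """Share of reference elements that the reconstruction recovered."""
    return tp / (tp + fn)

def correctness(tp: int, fp: int) -> float:
    """Share of reconstructed elements that match the reference."""
    return tp / (tp + fp)

# Hypothetical counts for one reconstructed building model.
tp, fp, fn = 42, 5, 8
print(f"completeness = {completeness(tp, fn):.2f}")   # 0.84
print(f"correctness  = {correctness(tp, fp):.2f}")    # 0.89
```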

  6. Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Peng, Zhigang; Liao, Shu; Shinagawa, Yoshihisa; Zhan, Yiqiang; Hermosillo, Gerardo; Zhou, Xiang Sean

    2014-03-01

    Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods. The use of low-level information derived from the image-of-interest alone is insufficient for detecting bones and distinguishing boundaries of different bones that are in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach is to perform a hierarchical articulated shape deformation that is driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of ~89.70%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
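
    A heavily simplified, hypothetical sketch of the per-point deformation step described above: each shape point is moved along its surface normal to the offset whose sampled intensity profile best matches a learnt exemplar profile. The real method uses richer low-level features, a hierarchy, and an articulation constraint, none of which appear here.

```python
# Hypothetical 1D sketch of exemplar-profile matching: move each shape point
# along its surface normal to the offset whose sampled intensity profile is
# closest (in SSD) to the learnt appearance profile for that point.
import numpy as np

def best_offset(image_line: np.ndarray, learnt_profile: np.ndarray,
                search: int = 5) -> int:
    """image_line: intensities sampled along the normal; learnt_profile: exemplar."""
    k = len(learnt_profile)
    centre = len(image_line) // 2
    best, best_cost = 0, np.inf
    for off in range(-search, search + 1):
        start = centre + off - k // 2
        window = image_line[start:start + k]
        cost = float(np.sum((window - learnt_profile) ** 2))
        if cost < best_cost:
            best, best_cost = off, cost
    return best                          # signed displacement along the normal

rng = np.random.default_rng(0)
profile = np.array([0.1, 0.2, 0.9, 0.9, 0.2])   # learnt edge-like exemplar
line = rng.normal(0, 0.05, 21)
line[10:15] += profile                           # true edge sits 2 px off-centre
print("suggested displacement:", best_offset(line, profile))
```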

  7. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammar and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the different segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  8. Automatic sleep staging based on ECG signals using hidden Markov models.

    PubMed

    Ying Chen; Xin Zhu; Wenxi Chen

    2015-08-01

    This study is designed to investigate the feasibility of automatic sleep staging using features derived only from the electrocardiography (ECG) signal. The study was carried out using the framework of hidden Markov models (HMMs). The mean and SD values of heart rates (HRs) computed from each 30-second epoch served as the features. The two feature sequences were first detrended by ensemble empirical mode decomposition (EEMD), formed into a two-dimensional feature vector, and then converted into code vectors by a vector quantization (VQ) method. The output VQ indexes were utilized to estimate parameters for the HMMs. The proposed model was tested and evaluated on a group of healthy individuals using leave-one-out cross-validation. The automatic sleep staging results were compared with polysomnography (PSG)-estimated ones. Results showed accuracies of 82.2%, 76.0%, 76.1% and 85.5% for deep, light, REM and wake sleep, respectively. The findings show that the HR-based HMM approach is feasible for automatic sleep staging and can pave the way for developing a more efficient, robust, and simple sleep staging system suitable for home application. PMID:26736316
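
    As a small sketch under an assumed input (a heart-rate series sampled at 1 Hz), the epoch-wise mean and SD features used as HMM observations can be computed as below; the EEMD detrending and vector-quantisation steps that follow in the original pipeline are omitted.

```python
# Sketch (assumed input): derive the per-epoch mean and SD heart-rate features
# described above from a 1 Hz heart-rate series; EEMD detrending and vector
# quantisation, which follow in the original pipeline, are omitted here.
import numpy as np

def epoch_features(hr: np.ndarray, fs: float = 1.0, epoch_s: float = 30.0) -> np.ndarray:
    """Return an (n_epochs, 2) array of [mean HR, SD HR] per 30-second epoch."""
    samples_per_epoch = int(fs * epoch_s)
    n_epochs = len(hr) // samples_per_epoch
    epochs = hr[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
    return np.column_stack([epochs.mean(axis=1), epochs.std(axis=1)])

rng = np.random.default_rng(0)
hr_series = 60 + 5 * rng.standard_normal(8 * 3600)   # hypothetical 8 h of 1 Hz HR
features = epoch_features(hr_series)
print(features.shape)                                 # (960, 2): one row per epoch
```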

  9. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms.

  10. Model-based vision system for automatic recognition of structures in dental radiographs

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks, which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  11. [Automatic detection of exudates in retinal images based on threshold moving average models].

    PubMed

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E

    2015-01-01

    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, already part of current screening practice, offers a potential solution. In this paper, a fast and robust automatic detection of exudates based on moving average histogram models of the fuzzy image was applied, from which an improved histogram was derived. After segmentation of the exudate candidates, the true exudates were pruned based on a Sobel edge detector and the automatic Otsu thresholding algorithm, which resulted in the accurate location of the exudates in digital retinal images. To compare the performance of exudate detection methods we have constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images, and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves an overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively. PMID:26016034
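
    A hedged sketch of the pruning step mentioned above, using scikit-image's Sobel filter and Otsu threshold on a synthetic image; the combination rule shown is an assumption, not the paper's exact procedure.

```python
# Hedged sketch: prune candidate exudate pixels with a Sobel edge map and an
# automatic Otsu threshold, in the spirit of the pruning step described above.
# The input image here is synthetic; a real pipeline starts from a fundus photo.
import numpy as np
from skimage.filters import sobel, threshold_otsu

rng = np.random.default_rng(0)
image = rng.random((128, 128)) * 0.2
image[40:50, 60:75] += 0.7                    # synthetic bright "exudate" patch

edges = sobel(image)                          # gradient magnitude
t_int = threshold_otsu(image)                 # automatic intensity threshold
candidates = image > t_int                    # bright candidate regions
pruned = candidates & (edges < threshold_otsu(edges))  # keep low-gradient interiors

print("candidate pixels:", int(candidates.sum()), "after pruning:", int(pruned.sum()))
```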

  12. Chinese Automatic Question Answering System of Specific-domain Based on Vector Space Model

    NASA Astrophysics Data System (ADS)

    Hu, Haiqing; Ren, Fuji; Kuroiwa, Shingo

    In order to meet the demand to acquire necessary information efficiently from large volumes of electronic text, question answering (QA) technology, which automatically presents a clear reply to a question asked in the user's natural language, has attracted wide attention in recent years. Although QA research in China started later than in Western countries and Japan, it has recently attracted more and more attention. In this paper, we propose a question-answering architecture that combines answer retrieval for the most frequently asked questions, based on common knowledge, with document retrieval concerning sightseeing information. To improve reply accuracy, we adopt a hybrid model combining a statistical vector space model (VSM) with shallow semantic analysis, and limit the domain to sightseeing information. A Chinese QA system about sightseeing based on the proposed method has been built. Evaluation experiments show that high accuracy can be achieved when a retrieval result is counted as correct if the correct answer appears among the top three results by resemblance degree. The experiments demonstrate the efficiency of our method and show that it is feasible to develop question-answering technology on this basis.
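
    The sketch below illustrates only the vector-space-model part of the approach with scikit-learn's TF-IDF vectoriser and the top-three evaluation criterion; the toy English FAQ entries are invented, and the original system works on Chinese sightseeing text with additional shallow semantic analysis.

```python
# Sketch of the vector-space retrieval idea: rank candidate answers by TF-IDF
# cosine similarity and treat retrieval as correct if the right answer is in
# the top three. The toy English FAQ below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    "Opening hours of the temple are 9 am to 5 pm.",
    "Tickets for the museum cost ten dollars.",
    "The scenic lake area is best visited in autumn.",
]
question = "When is the temple open?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(faq)
q_vector = vectorizer.transform([question])

scores = cosine_similarity(q_vector, doc_vectors).ravel()
top3 = scores.argsort()[::-1][:3]
print("ranking (best first):", top3.tolist())
```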

  13. Automatic measurement of vertebral body deformations in CT images based on a 3D parametric model

    NASA Astrophysics Data System (ADS)

    Štern, Darko; Bürmen, Miran; Njagulj, Vesna; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2012-03-01

    Accurate and objective evaluation of vertebral body deformations represents an important part of the clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards three-dimensional (3D) imaging techniques, the established methods for the evaluation of vertebral body deformations are based on measurements in two-dimensional (2D) X-ray images. In this paper, we propose a method for automatic measurement of vertebral body deformations in computed tomography (CT) images that is based on efficient modeling of the vertebral body shape with a 3D parametric model. By fitting the 3D model to the vertebral body in the image, a quantitative description of normal and pathological vertebral bodies is obtained from the values of the 25 parameters of the model. The evaluation of vertebral body deformations is based on the distance of the observed vertebral body from the distribution of the parameter values of normal vertebral bodies in the parametric space. The distribution is obtained from 80 normal vertebral bodies in the training data set and verified with eight normal vertebral bodies in the control data set. The statistically meaningful distance of eight pathological vertebral bodies in the study data set from the distribution of normal vertebral bodies in the parametric space shows that the parameters can be used to successfully model vertebral body deformations in 3D. The proposed method may therefore be used to assess vertebral body deformations in 3D or provide clinically meaningful observations that are not available when using the 2D methods that are established in clinical practice.
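
    The distance-from-distribution idea can be sketched with a Mahalanobis distance in the 25-dimensional parameter space; the training matrix below is random stand-in data, not the study's vertebral parameters.

```python
# Sketch: score a vertebra by the Mahalanobis distance of its 25 model
# parameters from the distribution estimated on normal vertebrae. The training
# matrix here is random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
normal_params = rng.normal(size=(80, 25))        # 80 normal vertebrae x 25 params
mean = normal_params.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_params, rowvar=False))

def deformation_score(params: np.ndarray) -> float:
    d = params - mean
    return float(np.sqrt(d @ cov_inv @ d))       # Mahalanobis distance

healthy_like = rng.normal(size=25)
deformed_like = rng.normal(size=25) + 2.0        # shifted away from the normal cloud
print(deformation_score(healthy_like), deformation_score(deformed_like))
```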

  14. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.

  15. Modelling Pasture-based Automatic Milking System Herds: Grazeable Forage Options

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    One of the challenges to increase milk production in a large pasture-based herd with an automatic milking system (AMS) is to grow forages within a 1-km radius, as increases in walking distance increase milking interval and reduce yield. The main objective of this study was to explore sustainable forage option technologies that can supply a high amount of grazeable forage for AMS herds using the Agricultural Production Systems Simulator (APSIM) model. Three different basic simulation scenarios (with irrigation) were carried out using forage crops (namely maize, soybean and sorghum) for the spring-summer period. Subsequent crops in the three scenarios were forage rape over-sown with ryegrass. Each individual simulation was run using actual climatic records for the period from 1900 to 2010. The highest simulated forage yields in maize, soybean and sorghum- (each followed by forage rape-ryegrass) based rotations were 28.2, 22.9, and 19.3 t dry matter/ha, respectively. The simulations suggested that the irrigation requirement could increase by up to 18%, 16%, and 17% respectively in those rotations in El Niño years compared to neutral years. On the other hand, the irrigation requirement could increase by up to 25%, 23%, and 32% in maize, soybean and sorghum based rotations in El Niño years compared to La Niña years. However, the irrigation requirement could decrease by up to 8%, 7%, and 13% in maize, soybean and sorghum based rotations in La Niña years compared to neutral years. The major implication of this study is that APSIM models have potential for devising preferred forage options to maximise grazeable forage yield, which may create the opportunity to grow more forage in small areas around the AMS, in turn minimising walking distance and milking interval and thus increasing milk production. Our analyses also suggest that simulation analysis may provide decision support during climatic uncertainty. PMID:25924963

  16. Model-based automatic target recognition using hierarchical foveal machine vision

    NASA Astrophysics Data System (ADS)

    McKee, Douglas C.; Bandera, Cesar; Ghosal, Sugata; Rauss, Patrick J.

    1996-06-01

    This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently in dynamic scenarios with localized relevance than uniform acuity vision because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions-of-interest (ROIs) using the foveal sensor's wide FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.
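
    The fractal dimension test mentioned for rejecting structured clutter can be illustrated with a box-counting estimate on a binary ROI; the ROI below is synthetic, and the exact test used in the paper may differ.

```python
# Illustrative box-counting estimate of fractal dimension, in the spirit of the
# clutter-rejection test mentioned above; the binary ROI below is synthetic.
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s * s, mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.any(axis=(1, 3))).sum())   # boxes touching the shape
    # slope of log(count) vs log(1/size) estimates the dimension
    coeffs = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(coeffs[0])

mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True                                # filled square, dim ~ 2
print(round(box_counting_dimension(mask), 2))
```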

  17. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models are generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to all types of roads, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
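
    The step of building parametric representations of road centerlines can be sketched with a smoothing-spline fit in SciPy; the centerline points below are invented rather than taken from GIS data.

```python
# Sketch of the "parametric representation of road centerlines" step: fit a
# smoothing spline to 2D centerline points with SciPy and resample it densely.
# The points below are invented; real input would come from GIS road data.
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 1, 20)
x = 100 * t
y = 10 * np.sin(2 * np.pi * t) + np.random.default_rng(0).normal(0, 0.3, t.size)

tck, _ = splprep([x, y], s=2.0)            # parametric smoothing spline
u_fine = np.linspace(0, 1, 200)
x_s, y_s = splev(u_fine, tck)              # resampled smooth centerline
print(len(x_s), "resampled centerline points")
```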

  18. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  19. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler define the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS) system; (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  20. Automatic left-atrial segmentation from cardiac 3D ultrasound: a dual-chamber model-based approach

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno; Sarvari, Sebastian I.; Orderud, Fredrik; Gérard, Olivier; D'hooge, Jan; Samset, Eigil

    2016-04-01

    In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03 ± 0.6 mm). The AV plane was detected with an accuracy of -0.6 ± 1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean ± 1.96 SD): 0.4 ± 5.3 ml, 2.1 ± 12.6 ml, and 1.5 ± 7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
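
    A heavily reduced, hypothetical illustration of the Kalman update used to fit the surface model to detected wall positions: a single scalar state stands in for the full parametric surface, and the measurements are synthetic.

```python
# Heavily simplified sketch of a Kalman update, in the spirit of fitting the
# surface model to detected endocardial wall positions: one scalar state stands
# in for the full parametric surface, and the measurements below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_radius = 20.0
measurements = true_radius + rng.normal(0, 1.0, 30)   # noisy edge detections (mm)

x, P = 15.0, 25.0        # initial state estimate and variance
Q, R = 0.01, 1.0         # process and measurement noise variances
for z in measurements:
    P += Q                               # predict (static model)
    K = P / (P + R)                      # Kalman gain
    x += K * (z - x)                     # correct with the edge measurement
    P *= (1 - K)
print(f"estimated radius: {x:.2f} mm")
```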

  1. Automatic sex determination of skulls based on a statistical shape model.

    PubMed

    Luo, Li; Wang, Mengyang; Tian, Yun; Duan, Fuqing; Wu, Zhongke; Zhou, Mingquan; Rozenholc, Yves

    2013-01-01

    Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments have relied on subjective visual analysis by anthropologists or on metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed, which projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods. It is easy to use, requiring neither professional qualification nor tedious manual measurement. From a group of Chinese skulls comprising 127 males and 81 females, we chose 92 males and 58 females to establish the discriminant model and validated the model with the remaining skulls. The correct classification rate is 95.7% and 91.4% for females and males, respectively. A leave-one-out test also shows that the method has a high accuracy.
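
    The pipeline described above (project into a low-dimensional shape space, then apply Fisher discriminant analysis) can be sketched with scikit-learn; the skull shape vectors below are random placeholders.

```python
# Sketch of the pipeline described above: project high-dimensional shape
# vectors into a low-dimensional space with PCA, then classify sex with
# Fisher/linear discriminant analysis. The data below are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_train, n_dim = 150, 300
X = rng.normal(size=(n_train, n_dim))          # stand-in skull shape vectors
y = rng.integers(0, 2, n_train)                # 0 = female, 1 = male (labels)
X[y == 1] += 0.15                              # inject a small class difference

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```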

  2. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    NASA Astrophysics Data System (ADS)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components of many applications, economical and automatic techniques for their generation that take advantage of the available multi-sensory data and combine processing strategies are still lacking. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  3. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  5. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    PubMed Central

    Ebied, Hala Mousher; Hussein, Ashraf Saad; Tolba, Mohamed Fahmy

    2014-01-01

    This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used. PMID:25254226
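
    A hedged OpenCV sketch of the underlying operations: convert the image to a chosen colour space and run GrabCut from a rectangle initialisation. The paper's automation replaces the rectangle with Orchard and Bouman clustering, which is omitted here, and the input image is synthetic.

```python
# Hedged OpenCV sketch: convert the image to a chosen colour space and run
# GrabCut from a rectangle initialisation. The automatic initialisation via
# Orchard and Bouman clustering used in the paper is omitted; the image is synthetic.
import numpy as np
import cv2

rng = np.random.default_rng(0)
img = rng.integers(20, 60, (120, 160, 3), dtype=np.uint8)   # noisy background
cv2.circle(img, (80, 60), 30, (0, 180, 220), -1)            # synthetic foreground blob

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)                  # alternative colour space
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (30, 20, 100, 80)                                    # rough foreground rectangle

cv2.grabCut(hsv, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
foreground = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
print("foreground pixels:", int(foreground.sum()))
```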

  6. An automatic image-based modelling method applied to forensic infography.

    PubMed

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3d model. PMID:25793628

  7. An Automatic Image-Based Modelling Method Applied to Forensic Infography

    PubMed Central

    Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David

    2015-01-01

    This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii) high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3d model. PMID:25793628

  8. Automatic contour definition on left ventriculograms by image evidence and a multiple template-based model.

    PubMed

    Lilly, P; Jenkins, J; Bourdillon, P

    1989-01-01

    An algorithm which utilizes digital image processing and pattern recognition methods for automated definition of left ventricular (LV) contours is presented. Digital image processing and pattern recognition techniques are applied to digitally acquired radiographic images of the heart to extract the LV contours required for quantitative analysis of cardiac function. Knowledge of the image domain is invoked at each step of the algorithm to orient the data search and thereby reduce the complexity of the solution. A knowledge-based image transformation, directional gradient search, expectations of object versus background location, least-cost path searches by dynamic programming, and a digital representation of possible versus impossible ventricular shape are exploited. The digital representation, composed of a set of characteristic templates, was created using contours obtained by manual tracing. The algorithm was tested by application to three sets of 25 images each. Sets one and two were used as training sets for creation of the model for contour correction. Model-based correction proved to be an effective technique, producing a significant reduction of error in the final contours.
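
    The least-cost path search by dynamic programming can be sketched on a small synthetic cost image: each row extends the path from one of the three nearest columns in the previous row. This is a generic illustration, not the paper's exact formulation.

```python
# Sketch of a least-cost path search by dynamic programming, as used above for
# boundary finding: each row extends the path from one of the three nearest
# columns in the previous row. The cost image below is synthetic.
import numpy as np

def least_cost_path(cost: np.ndarray) -> list[int]:
    rows, cols = cost.shape
    acc = cost.copy().astype(float)
    back = np.zeros_like(acc, dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            prev = int(np.argmin(acc[r - 1, lo:hi])) + lo
            acc[r, c] += acc[r - 1, prev]
            back[r, c] = prev
    # trace back from the cheapest end column
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]                     # column index per row, top to bottom

rng = np.random.default_rng(0)
cost = rng.random((8, 10))
cost[:, 4] *= 0.05                        # cheap "boundary" running down column 4
print(least_cost_path(cost))
```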

  9. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  10. Model Based Automatic Segmentation Of Tree Stems From Single Scan Data

    NASA Astrophysics Data System (ADS)

    Boesch, R.

    2013-10-01

    Forest inventories collect feature data manually on terrestrial field plots. Measuring large numbers of breast height diameters and tree positions is time consuming. Terrestrial laser scanning could be an additional instrument to collect precise and full inventory data in 3D space. As a preliminary assumption, single-scan data are used to evaluate a minimal data acquisition scheme. To extract features like trees and diameters from the scanned point cloud, a simple geometric model world is defined in 3D: trees are cylinder shapes located vertically on a plane. Using a RANSAC-based segmentation approach, cylinders are fitted iteratively in the point cloud. Several threshold parameters increase the robustness of the segmentation model and extract point clouds of single trees, which still contain branches and the tree crown. Fitting circles along the stem using point cloud slices allows the effective diameter to be refined at customized heights. The cross section of a single tree point cloud covers only the semicircle facing the scan location, but is still contiguous enough to estimate diameters using a robust circle fitting method.
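
    The circle-fitting step on stem slices can be sketched with a small RANSAC loop around a three-point circle fit, applied to simulated points covering only the semicircle facing the scanner; the tolerances and data are assumptions.

```python
# Sketch of robust circle fitting on a stem slice: a small RANSAC loop around
# a three-point circle fit, applied to simulated points covering only the
# semicircle facing the scanner (as in single-scan data).
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Centre and radius of the circle through three points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, float(np.linalg.norm(p1 - centre))

def ransac_circle(pts, iters=200, tol=0.02, seed=0):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        centre, radius = fit
        inliers = int(np.sum(np.abs(np.linalg.norm(pts - centre, axis=1) - radius) < tol))
        if inliers > best_inliers:
            best, best_inliers = fit, inliers
    return best

# Simulated slice: half a stem of radius 0.15 m plus a few branch outliers.
rng = np.random.default_rng(1)
theta = rng.uniform(np.pi / 2, 3 * np.pi / 2, 150)
pts = 0.15 * np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.003, (150, 2))
pts = np.vstack([pts, rng.uniform(-0.4, 0.4, (15, 2))])     # outliers
centre, radius = ransac_circle(pts)
print(f"estimated diameter: {2 * radius:.3f} m")
```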

  11. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    NASA Astrophysics Data System (ADS)

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.

    2015-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we map the 16-bit DEM to 256 gray levels for data compression and then, for better visualization, convert the grayscale image into an RGB image. After that, a median filter is applied twice to the DEM for data optimization, which produces smooth, continuous outlines for subsequent construction of the contour plane. Considering that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of contour regions with the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise search. Then a library of training samples is constructed from the targets calculated from sample DEM data, in which real crater targets are manually labeled as positive samples and non-crater objects as negative ones. Several morphological features are calculated for all these samples: major axis (L), circumference (C), area inside the boundary (S), and radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C and S/L as the key factors for identifying craters, and apply the Fisher discriminant method to the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data to identify lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness; it is applicable to the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.
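
    As a sketch of the shape descriptors listed above, the quantities L, C, S and R (and their ratios) can be computed for one synthetic blob with scikit-image and SciPy; the Fisher-discriminant weighting learned from labelled samples is omitted.

```python
# Sketch of the six shape ratios used above for crater discrimination, computed
# for one synthetic blob: major axis L, circumference C, area S, and
# largest-inscribed-circle radius R. The Fisher-discriminant weighting of the
# ratios, learned from labelled samples, is omitted here.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import regionprops, label
from skimage.draw import disk

mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = disk((100, 100), 40)                 # synthetic circular "crater" region
mask[rr, cc] = 1

props = regionprops(label(mask))[0]
L = props.major_axis_length                   # major axis
C = props.perimeter                           # circumference
S = props.area                                # area inside the boundary
R = distance_transform_edt(mask).max()        # largest inscribed circle radius

ratios = {"R/L": R / L, "R/S": R / S, "C/L": C / L,
          "C/S": C / S, "R/C": R / C, "S/L": S / L}
print({k: round(float(v), 4) for k, v in ratios.items()})
```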

  12. Automatic model-based roentgen stereophotogrammetric analysis (RSA) of total knee prostheses.

    PubMed

    Syu, Ci-Bin; Lai, Jiing-Yih; Chang, Ren-Yi; Shih, Kao-Shang; Chen, Kuo-Jen; Lin, Shang-Chih

    2012-01-01

    Conventional radiography is insensitive for early and accurate estimation of the malalignment and wear of knee prostheses. The two-staged (rough and fine) registration of the model-based RSA technique has recently been developed to estimate the prosthetic pose (i.e., location and orientation) in vivo. In the literature, rough registration often uses template matching or manual adjustment of the roentgen images. Additionally, the possible error induced by the nonorthogonality of taking two roentgen images is neither examined nor calibrated prior to fine registration. This study developed two RSA methods to automate the estimation of the prosthetic pose and decrease the nonorthogonality-induced error. The predicted results were validated by both simulated and experimental tests and compared with reported findings in the literature. The outcome revealed that the feature-recognized method automates pose estimation and significantly increases execution efficiency, by up to about 50 times in comparison with its literature counterparts. Although the nonorthogonal images resulted in undesirable errors, the outline-optimized method can effectively compensate for the induced errors prior to fine registration. The superiority in automation, efficiency, and accuracy demonstrates the clinical practicability of the two proposed methods, especially for the numerous fluoroscopic images of dynamic motion.

  13. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate a non-uniform CFD mesh for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
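
    The element-size rule described above (a function of wall distance, wall element size, and centre element size) can be sketched as a simple interpolation with assumed numbers; the actual sizing function used with the Gmsh/TetGen background meshes may differ.

```python
# Sketch (with assumed numbers) of a background-mesh size rule in the spirit of
# the one described above: element size grows from a fine value at the airway
# wall to a coarser value at the lumen centre as a function of wall distance.
import numpy as np

def element_size(wall_distance: np.ndarray, radius: float,
                 size_wall: float, size_centre: float) -> np.ndarray:
    """Linear growth of target element size from the wall to the lumen centre."""
    frac = np.clip(wall_distance / radius, 0.0, 1.0)
    return size_wall + frac * (size_centre - size_wall)

radius = 2.0e-3                                   # hypothetical airway radius (m)
d = np.linspace(0.0, radius, 5)                   # wall distances
print(element_size(d, radius, size_wall=5e-5, size_centre=3e-4))
```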

  14. Modeling and Calibration of Automatic Guided Vehicle

    NASA Astrophysics Data System (ADS)

    Sawada, Kenji; Tanaka, Kosuke; Shin, Seiichi; Kumagai, Kenji; Yoneda, Hisato

    This paper presents a modeling of an automatic guided vehicle (AGV) to achieve model-based control. The modeling involves three kinds of choices: a choice of input-output data pair from 14 candidate pairs, a choice of system identification technique from 5 candidate techniques, and a choice of discrete-to-continuous transform method from 2 candidate methods. In order to obtain reliable plant models of the AGV, an approach for calibration between a statistical model and a physical model is also presented. In our approach, the models are combined according to the weight of the AGV. As a result, our calibration problem is recast as a nonlinear optimization problem that can be solved by a quasi-Newton method.
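
    A sketch of the calibration idea on an invented fitting problem: blend a statistical and a physical model prediction with a single weight chosen by a quasi-Newton (BFGS) minimiser from SciPy. The models, data, and objective below are assumptions, not the paper's AGV models.

```python
# Sketch of the calibration step: pick the blending weight between a
# statistical and a physical model prediction by minimising a squared error
# with a quasi-Newton (BFGS) method. Models and data below are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
inputs = np.linspace(0, 1, 50)
measured = 1.2 * inputs + 0.1 * np.sin(8 * inputs) + rng.normal(0, 0.02, 50)

physical_pred = 1.2 * inputs                      # stand-in physical model
statistical_pred = np.poly1d(np.polyfit(inputs, measured, 5))(inputs)

def objective(w):
    blended = w[0] * statistical_pred + (1.0 - w[0]) * physical_pred
    return float(np.mean((blended - measured) ** 2))

result = minimize(objective, x0=[0.5], method="BFGS")
print("calibrated weight on the statistical model:", round(float(result.x[0]), 3))
```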

  15. Automatic Detection of Student Mental Models Based on Natural Language Student Input during Metacognitive Skill Training

    ERIC Educational Resources Information Center

    Lintean, Mihai; Rus, Vasile; Azevedo, Roger

    2012-01-01

    This article describes the problem of detecting the student mental models, i.e. students' knowledge states, during the self-regulatory activity of prior knowledge activation in MetaTutor, an intelligent tutoring system that teaches students self-regulation skills while learning complex science topics. The article presents several approaches to…

  16. A Cut-Based Procedure For Document-Layout Modelling And Automatic Document Analysis

    NASA Astrophysics Data System (ADS)

    Dengel, Andreas R.

    1989-03-01

    With the growing degree of office automation and the decreasing costs of storage devices, it becomes more and more attractive to store optically scanned documents like letters or reports in an electronic form. Therefore the need for a good paper-computer interface becomes increasingly important. This interface must convert paper documents into an electronic representation that not only captures their contents, but also their layout and logical structure. We propose a procedure to describe the layout of a document page by dividing it recursively into nested rectangular areas. A semantic meaning is assigned to each one by means of logical labels. The procedure is used as a basis for mapping a hierarchical document layout onto the semantic meaning of the parts of the document. We analyse the layout of a document using a best-first search in this tessellation structure. The search is directed by a measure of similarity between the layout pattern in the model and the layout of the actual document. The validity of a hypothesis for the semantic labelling of a layout block can then be verified. It either supports the hypothesis or initiates the generation of a new one. The method has been implemented in Common Lisp on a SUN 3/60 Workstation and has been run on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of the approach.
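
    The recursive division of a page into nested rectangular areas can be illustrated with a compact cut on projection profiles of a binary page image; the page below is synthetic, and the logical labelling and best-first model matching that the method adds on top are omitted.

```python
# Compact illustration of recursively dividing a binary page image into nested
# rectangular blocks by cutting along runs of empty rows or columns. The page
# below is synthetic; the logical labelling and best-first model matching that
# the method above adds on top of such a tessellation are omitted.
import numpy as np

def find_gap(profile, min_gap):
    """First interior run of >= min_gap empty lines as (start, end), else None."""
    start = None
    for i, ink in enumerate(profile):
        if ink == 0 and start is None:
            start = i
        elif ink != 0 and start is not None:
            if i - start >= min_gap and start > 0:
                return start, i
            start = None
    return None

def xy_cut(page, y0, y1, x0, x1, min_gap=3):
    """Return leaf blocks as (top, left, bottom, right) boxes."""
    block = page[y0:y1, x0:x1]
    for axis in (0, 1):                     # try a horizontal cut, then a vertical one
        gap = find_gap(block.sum(axis=1 - axis), min_gap)
        if gap:
            lo, hi = gap
            if axis == 0:
                return (xy_cut(page, y0, y0 + lo, x0, x1, min_gap)
                        + xy_cut(page, y0 + hi, y1, x0, x1, min_gap))
            return (xy_cut(page, y0, y1, x0, x0 + lo, min_gap)
                    + xy_cut(page, y0, y1, x0 + hi, x1, min_gap))
    return [(y0, x0, y1, x1)]               # no cut found: this is a leaf block

page = np.zeros((60, 60), dtype=int)
page[5:15, 5:55] = 1                        # title block
page[25:55, 5:27] = 1                       # left column
page[25:55, 33:55] = 1                      # right column
blocks = xy_cut(page, 0, 60, 0, 60)
print(len(blocks), "blocks:", blocks)
```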

  17. Automatic enrollment for gait-based person re-identification

    NASA Astrophysics Data System (ADS)

    Ortells, Javier; Martín-Félez, Raúl; Mollineda, Ramón A.

    2015-02-01

    Automatic enrollment involves a critical decision-making process within the people re-identification context. However, this process has traditionally been undervalued. This paper studies the problem of automatic person enrollment from a realistic perspective relying on gait analysis. Experiments simulating random flows of people with considerable appearance variations between different observations of a person have been conducted, modeling both short- and long-term scenarios. Promising results based on ROC analysis show that automatically enrolling people by their gait is feasible with high success rates.

  18. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images.

    PubMed

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2016-01-01

    Accurate segmentation and classification of different anatomical structures of teeth from medical images plays an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three dimensional (3D) information, and classify the tooth by employing an unsupervised learning Pulse Coupled Neural Networks (PCNN) model. In order to evaluate the proposed method, experiments are conducted on different datasets of mandibular molars, and the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods. PMID:27322421

  19. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images

    PubMed Central

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2016-01-01

    Accurate segmentation and classification of different anatomical structures of teeth from medical images plays an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three dimensional (3D) information, and classify the tooth by employing an unsupervised learning Pulse Coupled Neural Networks (PCNN) model. In order to evaluate the proposed method, experiments are conducted on different datasets of mandibular molars, and the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods. PMID:27322421

  20. Hidden Markov models in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMMs designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
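
    The core recognition step of an HMM-based recogniser, finding the most likely hidden-state sequence for an observation sequence, can be sketched with a small Viterbi decoder; the parameters below are toy values, not a trained speech model.

```python
# Toy Viterbi decoder illustrating the core HMM recognition step: find the most
# likely hidden-state sequence for a sequence of discrete observations. The
# parameters are invented, not a trained speech model.
import numpy as np

A = np.array([[0.7, 0.3],          # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.6, 0.3, 0.1],     # emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])          # initial state distribution
obs = [0, 0, 2, 2, 1]              # observed symbol indices

n_states, T = A.shape[0], len(obs)
delta = np.zeros((T, n_states))
psi = np.zeros((T, n_states), dtype=int)
delta[0] = np.log(pi) + np.log(B[:, obs[0]])
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(A)        # scores[i, j]: i -> j
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])

path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
print("most likely state sequence:", path[::-1])
```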

  1. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced voigt model and local symmetry.

    PubMed

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases, which will exhibit midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating the pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model which takes into account the size and spatial information of the lesion is devised to predict the deformed midline. A composite local symmetry combining local intensity symmetry and local intensity gradient symmetry is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric dataset has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and an average maximum difference of 1.89 ± 1.18 mm from the ground truth. Experiments show that the proposed method yields better accuracy with the geometric center of pathology being the geometric center of tumor and the pathological region being the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, for delineation of deformed midline, this is the first report on both quantification of gliomas and from MRI, which hopefully will provide valuable information for diagnosis
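
    A rough illustration, not the paper's composite formulation, of a local intensity symmetry score: for each candidate midline column, mirrored intensities within a window are compared and the most symmetric column is selected; the image below is synthetic.

```python
# Rough illustration (not the paper's exact formulation) of a local intensity
# symmetry score: for each candidate midline column, compare pixel intensities
# mirrored about it within a window and pick the most symmetric column.
import numpy as np

def symmetry_scores(slice_img: np.ndarray, half_width: int = 20) -> np.ndarray:
    h, w = slice_img.shape
    scores = np.full(w, np.inf)
    for x in range(half_width, w - half_width):
        left = slice_img[:, x - half_width:x]
        right = slice_img[:, x + 1:x + 1 + half_width][:, ::-1]   # mirror
        scores[x] = float(np.mean(np.abs(left - right)))          # lower = more symmetric
    return scores

rng = np.random.default_rng(0)
cols = np.arange(128)
# Synthetic slice whose intensity pattern is mirror-symmetric about column 70.
img = 0.1 * np.abs(cols - 70)[None, :] + rng.normal(0, 0.05, (64, 128))
best = int(np.argmin(symmetry_scores(img)))
print("most symmetric column (expected near 70):", best)
```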

  3. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics

    PubMed Central

    Islam, M. R.; Clark, C. E. F.; Garcia, S. C.; Kerrisk, K. L.

    2015-01-01

    The aim of this modelling study was to investigate the effect of large herd size (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties when 50% of the total diets were provided from home grown feed either as pasture or grazeable complementary forage rotation (CFR) in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as ‘moderate’; optimum pasture utilisation of 19.7 t DM/ha, termed as ‘high’) and 2 rates of incorporation of grazeable complementary forage system (CFS: 0, 30%; CFS = 65% of the farm is CFR and 35% of the farm is pasture) were investigated. Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in total walking distances between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h with increased herd size from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distance by up to 1 km and thus reduced the MI by up to 0.5 h compared to the moderate pasture, 800 cow herd combination. High pasture utilisation combined with 30% of the farm in CFR reduced the total walking distances by up to 1.7 km and MI by up to 0.8 h compared to the moderate pasture and 800 cow herd combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty (c.f. 2.6 and 5.1 kg/cow/d, respectively), which incurred a loss of up to $AU 1.9/cow/d. Milk yield losses of 0.61 kg and 0.25 kg for every km increase in total walking distance (voluntary

  4. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of the problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithm is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, MCS (Monte Carlo Sampling) is used for automatic identification of parameters, and the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters is developed. Finally, a typical water pipe network is selected as a case study, and satisfactory results are achieved. PMID:20329520
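
    As a rough illustration of the RSA/MCS idea described above, the sketch below samples candidate parameters from prior ranges, runs a placeholder forward model in place of a real hydraulic solver, and keeps the "behavioural" samples whose error against observations stays below a threshold. All names, priors and thresholds here are hypothetical.

```python
# Minimal sketch of Monte-Carlo sampling with an RSA-style behavioural split.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pressures(roughness):
    # Placeholder forward model standing in for an EPANET-style network solver.
    return 50.0 - 0.1 * roughness + rng.normal(0.0, 0.2, size=roughness.shape)

observed = np.array([40.0, 38.5, 42.0])          # e.g. SCADA pressure readings [m]
n_pipes, n_samples, threshold = 3, 5000, 1.0     # behavioural RMSE limit [m]

samples = rng.uniform(80.0, 140.0, size=(n_samples, n_pipes))   # prior ranges
rmse = np.array([np.sqrt(np.mean((simulate_pressures(s) - observed) ** 2))
                 for s in samples])

behavioural = samples[rmse < threshold]          # RSA split: keep good samples
if len(behavioural):
    identified = np.median(behavioural, axis=0)  # point estimate per pipe group
    print(len(behavioural), identified)
```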

  5. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  6. Frequency and damping ratio assessment of high-rise buildings using an Automatic Model-Based Approach applied to real-world ambient vibration recordings

    NASA Astrophysics Data System (ADS)

    Nasser, Fatima; Li, Zhongyang; Gueguen, Philippe; Martin, Nadine

    2016-06-01

    This paper deals with the application of the Automatic Model-Based Approach (AMBA) over actual buildings subjected to real-world ambient vibrations. In a previous paper, AMBA was developed with the aim of automating the estimation process of the modal parameters and minimizing the estimation error, especially that of the damping ratio. It is applicable over a single-channel record, has no parameters to be set, and no manual initialization phase. The results presented in this paper should be regarded as further documentation of the approach over real-world ambient vibration signals.

  7. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing treatment planning QA models, and has been made publicly available. The OVH model was highly

  8. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
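
    For readers unfamiliar with the OVH feature used by the QA model in the records above, the sketch below computes a simple (unsigned) overlap volume histogram for synthetic masks: for each expansion distance, the fraction of the OAR volume lying within that distance of the target. This is a generic illustration, not the Erasmus-iCycle or OVH-model code; a full implementation would use signed distances for structures overlapping the target.

```python
# Rough sketch of an overlap volume histogram (OVH) feature on synthetic masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(target_mask, oar_mask, spacing, radii_mm):
    # Distance (in mm) from every voxel to the target (0 inside the target).
    dist_to_target = distance_transform_edt(~target_mask, sampling=spacing)
    oar_dist = dist_to_target[oar_mask]
    return [(oar_dist <= r).mean() for r in radii_mm]

# Synthetic example: two offset boxes on a 1 mm grid.
target = np.zeros((60, 60, 60), dtype=bool); target[20:40, 20:40, 20:40] = True
oar = np.zeros_like(target); oar[20:40, 20:40, 42:55] = True
print(overlap_volume_histogram(target, oar, spacing=(1.0, 1.0, 1.0),
                               radii_mm=[0, 2, 5, 10, 20]))
```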

  10. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial intelligence techniques have the potential to improve the performance of ATR systems. In this paper, we present a novel knowledge-engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a Knowledge-Base (K-B) for ATR systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum-size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B system. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches for ATR development. The capabilities of the ARP are compared and contrasted to those of another knowledge-engineering tool, termed the Automatic Rule Induction (ARI), which is based on maximizing the mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.

  11. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and generating simulation models is described. The models will be verified by running in a test environment using standard profiles, with the output compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their model and also automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. The next goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.

  12. Production ready feature recognition based automatic group technology part coding

    SciTech Connect

    Ames, A.L.

    1990-01-01

    During the past four years, a feature recognition based expert system for automatically performing group technology part coding from solid model data has been under development. The system has become a production quality tool, capable of quickly generating the geometry based portions of a part code with no human intervention. It has been tested on over 200 solid models, half of which are models of production Sandia designs. Its performance rivals that of humans performing the same task, often surpassing them in speed and uniformity. The feature recognition capability developed for part coding is being extended to support other applications, such as manufacturability analysis, automatic decomposition (for finite element meshing and machining), and assembly planning. Initial surveys of these applications indicate that the current capability will provide a strong basis for other applications and that extensions toward more global geometric reasoning and tighter coupling with solid modeler functionality will be necessary.

  13. Supervised Automatic Learning Models:. a New Perspective

    NASA Astrophysics Data System (ADS)

    Sánchez-Úbeda, Eugenio F.

    2007-12-01

    Huge amounts of data are available in many disciplines of Science and Industry. In order to extract useful information from these data, a large number of apparently very different learning approaches have been created during the last decades. Each domain uses its own terminology (often incomprehensible to outsiders), even though all approaches basically attempt to solve the same generic learning tasks. The aim of this paper is to present a new perspective on the main existing automatic learning strategies, by providing a general framework to handle and unify many of the existing supervised learning models. The proposed taxonomy allows highlighting the similarity of some models whose original motivation comes from different fields, like engineering, statistics or mathematics. Common supervised models are classified into two main different groups: structured and unstructured models. Multidimensional models are shown as a composition of one-dimensional models, using the latter as elementary building blocks. In order to clarify ideas, examples are provided.

  14. Using automatic programming for simulating reliability network models

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  15. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  16. Nonlinear spectro-temporal features based on a cochlear model for automatic speech recognition in a noisy situation.

    PubMed

    Choi, Yong-Sun; Lee, Soo-Young

    2013-09-01

    A nonlinear speech feature extraction algorithm was developed by modeling human cochlear functions, and demonstrated as a noise-robust front-end for speech recognition systems. The algorithm was based on a model of the Organ of Corti in the human cochlea with such features as the basilar membrane (BM), outer hair cells (OHCs), and inner hair cells (IHCs). Frequency-dependent nonlinear compression and amplification of OHCs were modeled by lateral inhibition to enhance spectral contrasts. In particular, the compression coefficients had a frequency dependency based on psychoacoustic evidence. Spectral subtraction and temporal adaptation were applied in the time-frame domain. With long-term and short-term adaptation characteristics, these factors remove stationary or slowly varying components and amplify temporal changes such as onsets or offsets. The proposed features were evaluated with a noisy speech database and showed better performance than baseline methods such as mel-frequency cepstral coefficients (MFCCs) and RASTA-PLP in unknown noisy conditions. PMID:23558292
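
    A minimal sketch of the lateral-inhibition idea (spectral contrast enhancement via a centre-surround kernel applied across frequency) is given below. The kernel widths and the compression exponent are illustrative assumptions, not the frequency-dependent coefficients calibrated in the paper.

```python
# Illustrative centre-surround (difference-of-Gaussians) filtering of a
# log-mel spectrogram across the frequency axis, followed by a static
# nonlinear compression. All settings are hypothetical.
import numpy as np

def dog_kernel(width=9, sigma_c=1.0, sigma_s=3.0):
    x = np.arange(width) - width // 2
    centre = np.exp(-0.5 * (x / sigma_c) ** 2); centre /= centre.sum()
    surround = np.exp(-0.5 * (x / sigma_s) ** 2); surround /= surround.sum()
    return centre - surround        # excitatory centre, inhibitory surround

def enhance_contrast(log_mel, kernel=None, compress=0.3):
    kernel = dog_kernel() if kernel is None else kernel
    out = np.empty_like(log_mel)
    for t in range(log_mel.shape[1]):           # filter along frequency, per frame
        out[:, t] = np.convolve(log_mel[:, t], kernel, mode="same")
    return np.sign(out) * np.abs(out) ** compress

# Example on a random (n_mels x n_frames) log-spectrogram.
spec = np.log(np.random.rand(40, 100) + 1e-6)
print(enhance_contrast(spec).shape)
```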

  17. Automatic GPS satellite based subsidence measurements for Ekofisk

    SciTech Connect

    Mes, M.J.; Luttenberger, C.; Landau, H.; Gustavsen, K.

    1995-12-01

    A fully automatic procedure for the measurement of subsidence of many platforms in almost real time is presented. Such a method is important for developments which may be subject to subsidence and where reliable subsidence and rate measurements are needed for safety, planning of remedial work and verification of subsidence models. Automatic GPS satellite based subsidence measurements are made continuously on platforms in the North Sea Ekofisk Field area. A description of the system is given. The derivation of those parameters which give optimal measurement accuracy is described, and the results of these derivations are provided. GPS satellite based measurements are equivalent to pressure gauge based platform subsidence measurements, but they are much cheaper to install and maintain. In addition, GPS based measurements are not subject to drift of any gauges. GPS measurements were coupled to oceanographic quantities such as the platform deck clearance. These quantities now follow from GPS based measurements.

  18. MATURE: A Model Driven bAsed Tool to Automatically Generate a langUage That suppoRts CMMI Process Areas spEcification

    NASA Astrophysics Data System (ADS)

    Musat, David; Castaño, Víctor; Calvo-Manzano, Jose A.; Garbajosa, Juan

    Many companies have achieved a higher quality in their processes by using CMMI. Process definition may be efficiently supported by software tools. A higher automation level will make process improvement and assessment activities easier to adapt to customer needs. At present, automation of CMMI is based on tools that support practice definition in a textual way. These tools are often enhanced spreadsheets. In this paper, following the Model Driven Development paradigm (MDD), a tool that supports automatic generation of a language that can be used to specify process area practices is presented. The generation is performed from a metamodel that represents CMMI. This tool, unlike others available, can be customized according to user needs. Guidelines to specify the CMMI metamodel are also provided. The paper also shows how this approach can support other assessment methods.

  19. A new method of automatic processing of seismic waves: waveform modeling by using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Kodera, Y.; Sakai, S.

    2012-12-01

    Development of a method for automatic processing of seismic waves is needed since there are limitations to manually picking out earthquake events from seismograms. However, there is no practical method to automatically detect arrival times of P and S waves in seismograms. One typical example of previously proposed methods is automatic detection using an AR model (e.g. Kitagawa et al., 2004). This method does not appear to be effective for seismograms contaminated with spike noise, because it cannot distinguish non-stationary signals generated by earthquakes from those generated by noise. The difficulty of distinguishing the signals is caused by the fact that the automatic detection system lacks information on the time series variation of seismic waves. We expect that an automatic detection system that includes information on seismic waves is more effective for seismograms contaminated with noise. We therefore adapt hidden Markov models (HMMs) to construct seismic wave models and establish a new automatic detection method. HMMs have been widely used in many fields such as voice recognition (e.g. Bishop, 2006). With HMMs, P- or S-waveform models that include envelopes can be constructed directly and semi-automatically from large amounts of observed P- and S-wave data. These waveform models are expected to become more robust as the quantity of observation data increases. We have constructed seismic wave models based on HMMs from seismograms observed in Ashio, Japan. By using these models, we have attempted automatic detection of arrival times of earthquake events in Ashio. Results show that automatic detection based on HMMs is more effective for seismograms contaminated with noise than detection based on an AR model.
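
    A minimal sketch of the HMM idea, assuming the hmmlearn package and a simple log-energy envelope feature (both assumptions, not the authors' implementation), is given below: a two-state Gaussian HMM is fitted to quiet and event-bearing traces, and the first state change in the decoded sequence serves as a crude onset estimate.

```python
# Sketch: fit an HMM to short-time waveform energies and decode state changes.
import numpy as np
from hmmlearn import hmm

def envelope_features(trace, win=50):
    """Log short-time energy: one feature per non-overlapping window."""
    n = len(trace) // win
    frames = trace[:n * win].reshape(n, win)
    return np.log(np.mean(frames ** 2, axis=1) + 1e-12).reshape(-1, 1)

rng = np.random.default_rng(1)
noise_trace = rng.normal(0, 0.1, 5000)                        # background only
event_trace = rng.normal(0, 0.1, 5000)
event_trace[2000:3000] += rng.normal(0, 1.0, 1000) * np.exp(-np.linspace(0, 3, 1000))

Xq, Xe = envelope_features(noise_trace), envelope_features(event_trace)
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(np.vstack([Xq, Xe]), lengths=[len(Xq), len(Xe)])    # "quiet" vs "event" states

states = model.predict(Xe)
onset_frame = int(np.argmax(np.diff(states) != 0)) + 1        # first state change
print(onset_frame * 50)                                       # approximate onset sample
```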

  20. Digital movie-based on automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the Hue (H) values for each frame. The Pearson's correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%.

  1. Digital movie-based on automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the Hue (H) values for each frame. The Pearson's correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. PMID:26592600
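
    The frame-processing chain described in the two records above (ROI Hue values, Pearson r against the first frame, end point from the second derivative) can be sketched as follows. The helper names and the synthetic colour-flip data are illustrative only; the published system works on webcam frames and the titrant valve opening time.

```python
# Illustrative sketch of the DMB-AT frame processing on synthetic frames.
import colorsys
import numpy as np

def hue_vector(frame, roi):
    """Per-pixel Hue values inside the ROI of one RGB frame (0-255 values)."""
    y0, y1, x0, x1 = roi
    pixels = frame[y0:y1, x0:x1].reshape(-1, 3) / 255.0
    return np.array([colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in pixels])

def titration_curve(frames, roi):
    """Pearson r between the Hue vector of the first frame and every frame."""
    ref = hue_vector(frames[0], roi)
    return np.array([np.corrcoef(ref, hue_vector(f, roi))[0, 1] for f in frames])

def endpoint_frame(r_curve):
    """End point taken where the second derivative of the r curve peaks."""
    return int(np.argmax(np.abs(np.gradient(np.gradient(r_curve)))))

# Synthetic movie: a red-to-yellow ROI gradient flips to blue-to-cyan at frame 40.
grad = np.linspace(0.0, 1.0, 28)
before = np.stack([255 * np.ones(28), 255 * grad, np.zeros(28)], axis=-1)
after = np.stack([np.zeros(28), 255 * grad, 255 * np.ones(28)], axis=-1)
frames = np.empty((60, 13, 28, 3))
frames[:40], frames[40:] = before, after       # broadcast row pattern to 13 rows
print(endpoint_frame(titration_curve(frames, (0, 13, 0, 28))))   # ~frame 40
```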

  2. Case-based synthesis in automatic advertising creation system

    NASA Astrophysics Data System (ADS)

    Zhuang, Yueting; Pan, Yunhe

    1995-08-01

    Advertising (ads) is an important design area. Though many interactive ad-design software packages have come into commercial use, none of them supports the intelligent part of the work -- automatic ad creation. The potential for this is enormous. This paper describes our current work on an automatic advertising creation system (AACS). After careful analysis of the mental behavior of a human ad designer, we conclude that a case-based approach is appropriate for its intelligent modeling. A model for AACS is given in the paper. A case in advertising is described in two parts: the creation process and the configuration of the ad picture, with detailed data structures given in the paper. Along with the case representation, we put forward an algorithm. Issues such as similarity measure computation and case adaptation are also discussed.

  3. A new automatic baseline correction method based on iterative method

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Fang; Mao, Wenping; Liu, Zao; Liu, Kewen; Liu, Chaoyang

    2012-05-01

    A new automatic baseline correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. It is based on an improved baseline recognition method and a new iterative baseline modeling method. The presented baseline recognition method takes advantage of three baseline recognition algorithms in order to recognize all signals in spectra. In the iterative baseline modeling method, besides the well-recognized baseline points in signal-free regions, the 'quasi-baseline points' in the signal-crowded regions are also identified and then utilized to improve robustness by preventing negative regions. The experimental results on both simulated data and real metabolomics spectra with over-crowded peaks show the efficiency of this automatic method.
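
    The iterative-modeling idea can be illustrated generically (this is not the authors' algorithm, which additionally uses recognized baseline and quasi-baseline points): fit a smooth baseline model, clamp points that rise above it so peaks are progressively suppressed, and refit until the baseline stabilises.

```python
# Generic iterative baseline estimation on a synthetic NMR-like trace.
import numpy as np

def iterative_baseline(y, degree=4, n_iter=20):
    x = np.linspace(-1.0, 1.0, len(y))
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        baseline = np.polyval(coeffs, x)
        work = np.minimum(work, baseline)   # suppress peaks; keep only baseline
    return baseline

# Curved baseline + two narrow Lorentzian peaks + noise.
x = np.linspace(0, 10, 2000)
signal = 5.0 / (1 + ((x - 3) / 0.02) ** 2) + 3.0 / (1 + ((x - 7) / 0.02) ** 2)
trace = 0.2 * (x - 5) ** 2 + signal + np.random.default_rng(2).normal(0, 0.05, x.size)
corrected = trace - iterative_baseline(trace)
print(float(np.median(corrected)))          # should be close to zero
```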

  4. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  5. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high resolution surface temperature records of the Greenland ice sheet is an essential prerequisite. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin and processes leading to the so-called Dansgaard-Oeschger events in glacial conditions [Johnsen et al. (1992); Dansgaard et al., 1982], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled on the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant at orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, firn models describing gas and temperature diffusion throughout the ice sheet are necessary. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. A theoretical δ15N record is generated for different temperature and accumulation rate scenarios and compared with measurement data in terms of the mean square error (MSE), which finally leads to an optimization problem, namely finding the minimal MSE. The goal of the presented study is a Matlab based automatization of this inverse modelling procedure. The crucial point here is to find the temperature and accumulation rate input time series which minimize the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit
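
    A highly simplified sketch of the Monte Carlo branch of this inversion is shown below. The forward model here is a placeholder stand-in for the Schwander et al. firn densification and heat diffusion model, and the data, priors and acceptance rule are illustrative assumptions only.

```python
# Sketch: perturb a temperature history, run a placeholder forward model to
# predicted δ15N, and keep candidates that reduce the MSE against the data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                                     # model time steps
measured_d15n = 0.3 + 0.05 * np.tanh((t - 100) / 20)   # synthetic "data"

def forward_model(temperature):
    # Placeholder for the firn densification / heat diffusion model: here
    # δ15N simply tracks a smoothed, scaled temperature anomaly.
    smooth = np.convolve(temperature, np.ones(15) / 15, mode="same")
    return 0.3 + 0.01 * (smooth - smooth.mean())

best_mse = np.inf
T = np.full(t.size, -30.0)                             # initial temperature guess
for _ in range(2000):
    candidate = T + rng.normal(0.0, 0.5, t.size)       # vary each point
    mse = np.mean((forward_model(candidate) - measured_d15n) ** 2)
    if mse < best_mse:                                 # greedy acceptance
        best_mse, T = mse, candidate
print(best_mse)
```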

  7. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances.

    PubMed

    Islam, M R; Garcia, S C; Clark, C E F; Kerrisk, K L

    2015-06-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially 'concentrating' feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  8. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially ‘concentrating’ feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS

  9. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  10. Automatic detection of LUCC based on SIFT

    NASA Astrophysics Data System (ADS)

    Ammala, Keonuchan; Liu, YaoLin; Tai, Ji Rong

    2009-10-01

    Land use and cover change (LUCC) provides important information for environmental management and planning. It is one of the most prominent characteristics of global environmental change, and is driven not only by natural factors but also by social, economic, technical and historical factors. Traditionally, field surveys of land cover and land use are time consuming and costly and provide tabular statistics without geographic location information. Remote sensing and GIS are modern technologies which have been widely used in the field of natural resource management and monitoring. Change detection in land use and updating information on the distribution and dynamics of land use have long-term significance for policy making and scientific research. In this paper, we use multispectral SPOT images from two different times, 2002 and 2007, to detect LUCC based on the Scale Invariant Feature Transform (SIFT) method. An automatic image matching technique based on SIFT is proposed, using the rotation and scale invariant properties of SIFT. Keypoints are first extracted by searching over all scales and image locations; then the descriptors defined on the keypoint neighborhood are computed. The proposed algorithm is robust to translation, rotation, noise and scaling. Experimental results show that urban land is the part of the Huangpi area that has changed the most.
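
    A minimal sketch of the SIFT matching step, assuming OpenCV and placeholder file names for the two SPOT scenes, is given below; regions whose keypoints fail to find stable matches between dates are candidates for land-use/cover change.

```python
# Sketch of SIFT keypoint extraction and ratio-test matching between two dates.
import cv2

# Placeholder file names for the two co-registered SPOT scenes.
img_2002 = cv2.imread("spot_2002.tif", cv2.IMREAD_GRAYSCALE)
img_2007 = cv2.imread("spot_2007.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_2002, None)
kp2, des2 = sift.detectAndCompute(img_2007, None)

# Stable (unchanged) structures keep good matches across dates; areas with
# few or poor matches are candidates for land-cover change.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(len(kp1), len(kp2), len(good))
```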

  11. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    PubMed

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to the driver's body movements. In this paper, a system that estimates the heartbeat from seat-embedded piezoelectric sensors, and that is robust to strong body movements, is presented. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on the sensors. The performance of the proposed system was evaluated on real driving data up to 100 km/h and with slaloms at high speed. It is shown that this method improves pulsation estimation under strong body movement by 36.7% compared to static sensor pulsation estimation and appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions.

  12. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    PubMed

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to the driver's body movements. In this paper, a system that estimates the heartbeat from seat-embedded piezoelectric sensors, and that is robust to strong body movements, is presented. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on the sensors. The performance of the proposed system was evaluated on real driving data up to 100 km/h and with slaloms at high speed. It is shown that this method improves pulsation estimation under strong body movement by 36.7% compared to static sensor pulsation estimation and appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions. PMID:26736864

  13. UMLS-based automatic image indexing.

    PubMed

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in the image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  14. AUTOMATISM.

    PubMed

    MCCALDON, R J

    1964-10-24

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed "automatism". Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of "automatism".

  15. An Automatic Learning-Based Framework for Robust Nucleus Segmentation.

    PubMed

    Xing, Fuyong; Xie, Yuanpu; Yang, Lin

    2016-02-01

    Computer-aided image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of diseases such as brain tumor, pancreatic neuroendocrine tumor (NET), and breast cancer. Automated nucleus segmentation is a prerequisite for various quantitative analyses including automatic morphological feature computation. However, it remains a challenging problem due to the complex nature of histopathology images. In this paper, we propose a learning-based framework for robust and automatic nucleus segmentation with shape preservation. Given a nucleus image, it begins with a deep convolutional neural network (CNN) model to generate a probability map, on which an iterative region merging approach is performed for shape initializations. Next, a novel segmentation algorithm is exploited to separate individual nuclei combining a robust selection-based sparse shape model and a local repulsive deformable model. One of the significant benefits of the proposed framework is that it is applicable to different staining histopathology images. Due to the feature learning characteristic of the deep CNN and the high level shape prior modeling, the proposed method is general enough to perform well across multiple scenarios. We have tested the proposed algorithm on three large-scale pathology image datasets using a range of different tissue and stain preparations, and comparative experiments with recent state-of-the-art methods demonstrate the superior performance of the proposed approach.
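
    As a generic illustration of how a CNN probability map can be turned into separated nucleus instances (this sketch uses a distance transform and marker-controlled watershed, not the paper's sparse shape and repulsive deformable models), consider:

```python
# Generic instance-separation post-processing of a probability map.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_nuclei(prob_map, threshold=0.5, min_distance=5):
    mask = prob_map > threshold
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

# Synthetic probability map with two touching blobs.
yy, xx = np.mgrid[0:100, 0:100]
prob = np.maximum(np.exp(-((yy - 50) ** 2 + (xx - 40) ** 2) / 200.0),
                  np.exp(-((yy - 50) ** 2 + (xx - 62) ** 2) / 200.0))
labels = separate_nuclei(prob)
print(labels.max())        # expect 2 separated instances
```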

  16. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  17. Automatic 3D motion estimation of left ventricle from C-arm rotational angiocardiography using a prior motion model and learning based boundary detector.

    PubMed

    Chen, Mingqing; Zheng, Yefeng; Wang, Yang; Mueller, Kerstin; Lauritsch, Guenter

    2013-01-01

    Compared to pre-operative imaging modalities, it is more convenient to estimate the current cardiac physiological status from C-arm angiocardiography since C-arm is a widely used intra-operative imaging modality to guide many cardiac interventions. The 3D shape and motion of the left ventricle (LV) estimated from rotational angiocardiography provide important cardiac function measurements, e.g., ejection fraction and myocardium motion dyssynchrony. However, automatic estimation of the 3D LV motion is difficult since all anatomical structures overlap on the 2D X-ray projections and the nearby confounding strong image boundaries (e.g., pericardium) often cause ambiguities to LV endocardium boundary detection. In this paper, a new framework is proposed to overcome the aforementioned difficulties: (1) A new learning-based boundary detector is developed by training a boosting boundary classifier combined with the principal component analysis of a local image patch; (2) The prior LV motion model is learned from a set of dynamic cardiac computed tomography (CT) sequences to provide a good initial estimate of the 3D LV shape of different cardiac phases; (3) The 3D motion trajectory is learned for each mesh point; (4) All these components are integrated into a multi-surface graph optimization method to extract the globally coherent motion. The method is tested on seven patient scans, showing significant improvement on the ambiguous boundary cases with a detection accuracy of 2.87 +/- 1.00 mm on LV endocardium boundary delineation in the 2D projections.

  18. Automatic 3D motion estimation of left ventricle from C-arm rotational angiocardiography using a prior motion model and learning based boundary detector.

    PubMed

    Chen, Mingqing; Zheng, Yefeng; Wang, Yang; Mueller, Kerstin; Lauritsch, Guenter

    2013-01-01

    Compared to pre-operative imaging modalities, it is more convenient to estimate the current cardiac physiological status from C-arm angiocardiography since C-arm is a widely used intra-operative imaging modality to guide many cardiac interventions. The 3D shape and motion of the left ventricle (LV) estimated from rotational angiocardiography provide important cardiac function measurements, e.g., ejection fraction and myocardium motion dyssynchrony. However, automatic estimation of the 3D LV motion is difficult since all anatomical structures overlap on the 2D X-ray projections and the nearby confounding strong image boundaries (e.g., pericardium) often cause ambiguities to LV endocardium boundary detection. In this paper, a new framework is proposed to overcome the aforementioned difficulties: (1) A new learning-based boundary detector is developed by training a boosting boundary classifier combined with the principal component analysis of a local image patch; (2) The prior LV motion model is learned from a set of dynamic cardiac computed tomography (CT) sequences to provide a good initial estimate of the 3D LV shape of different cardiac phases; (3) The 3D motion trajectory is learned for each mesh point; (4) All these components are integrated into a multi-surface graph optimization method to extract the globally coherent motion. The method is tested on seven patient scans, showing significant improvement on the ambiguous boundary cases with a detection accuracy of 2.87 +/- 1.00 mm on LV endocardium boundary delineation in the 2D projections. PMID:24505748

  19. Grammatically-Based Automatic Word Class Formation

    ERIC Educational Resources Information Center

    Hirschman, Lynette; And Others

    1975-01-01

    Describes an automatic procedure which uses the syntactic relations as the basis for grouping words into classes. It forms classes by grouping together nouns that occur as subject (or object) of the same verbs, and similarly by grouping together verbs occurring with the same subject or object. (Author)

  20. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Astrophysics Data System (ADS)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  1. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  2. Geometrical and topological issues in octree based automatic meshing

    NASA Technical Reports Server (NTRS)

    Saxena, Mukul; Perucchio, Renato

    1987-01-01

    Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.

  3. Automatic Building Information Model Query Generation

    SciTech Connect

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges in data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, as it can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts use BIM to drive building design with less labour and lower overhead cost.

  4. 11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. MOVABLE BED SEDIMENTATION MODELS. AUTOMATIC SEDIMENT FEEDER DESIGNED AND BUILT BY WES. - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS

  5. Automatic food intake detection based on swallowing sounds

    PubMed Central

    Makeyev, Oleksandr; Lopez-Meyer, Paulo; Schuckers, Stephanie; Besio, Walter; Sazonov, Edward

    2012-01-01

    This paper presents a novel fully automatic food intake detection methodology, an important step toward objective monitoring of ingestive behavior. The aim of such monitoring is to improve our understanding of eating behaviors associated with obesity and eating disorders. The proposed methodology consists of two stages. First, acoustic detection of swallowing instances based on mel-scale Fourier spectrum features and classification using support vector machines is performed. Principal component analysis and a smoothing algorithm are used to improve swallowing detection accuracy. Second, the frequency of swallowing is used as a predictor for detection of food intake episodes. The proposed methodology was tested on data collected from 12 subjects with various degrees of adiposity. Average accuracies of >80% and >75% were obtained for intra-subject and inter-subject models, respectively, with a temporal resolution of 30 s. Results obtained on 44.1 hours of data with a total of 7305 swallows show that detection accuracies are comparable for obese and lean subjects. They also suggest the feasibility of food intake detection based on swallowing sounds and the potential of the proposed methodology for automatic monitoring of ingestive behavior. Based on a wearable non-invasive acoustic sensor, the proposed methodology may potentially be used in free-living conditions. PMID:23125873
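
    The two-stage pipeline described above can be sketched with standard tools. The following is a minimal illustration, not the authors' implementation: it assumes a per-frame feature matrix standing in for the mel-scale Fourier spectrum features, uses scikit-learn PCA and an SVM for the frame-level swallow classifier, applies a simple majority-vote smoothing, and derives a swallowing frequency per 30 s epoch; the data, frame rate and threshold are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    features = rng.normal(size=(2000, 40))   # placeholder per-frame spectrum features
    labels = rng.integers(0, 2, size=2000)   # placeholder swallow / non-swallow labels

    # stage 1: frame-level swallow detection with PCA + SVM
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    clf.fit(features, labels)
    frame_pred = clf.predict(features)

    # simple majority-vote smoothing of the frame decisions
    def smooth(pred, win=5):
        out = pred.copy()
        for i in range(len(pred)):
            lo, hi = max(0, i - win // 2), min(len(pred), i + win // 2 + 1)
            out[i] = 1 if pred[lo:hi].mean() >= 0.5 else 0
        return out

    smoothed = smooth(frame_pred)

    # stage 2: swallowing frequency per 30 s epoch as the food-intake predictor
    frames_per_epoch = 300                   # e.g. 30 s at an assumed 10 frames/s
    usable = smoothed[: len(smoothed) // frames_per_epoch * frames_per_epoch]
    swallow_rate = usable.reshape(-1, frames_per_epoch).mean(axis=1)
    intake_flag = swallow_rate > 0.1         # illustrative threshold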

  6. Automatic reactor model synthesis with genetic programming.

    PubMed

    Dürrenmatt, David J; Gujer, Willi

    2012-01-01

    Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.

  7. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation work in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  8. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  9. A Robot Based Automatic Paint Inspection System

    NASA Astrophysics Data System (ADS)

    Atkinson, R. M.; Claridge, J. F.

    1988-06-01

    The final inspection of manufactured goods is a labour intensive activity. The use of human inspectors has a number of potential disadvantages; it can be expensive, the inspection standard applied is subjective and the inspection process can be slow compared with the production process. The use of automatic optical and electronic systems to perform the inspection task is now a growing practice but, in general, such systems have been applied to small components which are accurately presented. Recent advances in vision systems and robot control technology have made possible the installation of an automated paint inspection system at the Austin Rover Group's plant at Cowley, Oxford. The automatic inspection of painted car bodies is a particularly difficult problem, but one which has major benefits. The pass line of the car bodies is ill-determined, the surface to be inspected is of varying surface geometry and only a short time is available to inspect a large surface area. The benefits, however, are due to the consistent standard of inspection which should lead to lower levels of customer complaints and improved process feedback. The Austin Rover Group initiated the development of a system to fulfil this requirement. Three companies collaborated on the project; Austin Rover itself undertook the production line modifications required for body presentation, Sira Ltd developed the inspection cameras and signal processing system and Unimation (Europe) Ltd designed, supplied and programmed the robot system. Sira's development was supported by a grant from the Department of Trade and Industry.

  10. a Sensor Based Automatic Ovulation Prediction System for Dairy Cows

    NASA Astrophysics Data System (ADS)

    Mottram, Toby; Hart, John; Pemberton, Roy

    2000-12-01

    Sensor scientists have been successful in developing detectors for tiny concentrations of rare compounds, but the work is rarely applied in practice. Any but the most trivial application of sensors requires a specification that should include a sampling system, a sensor, a calibration system and a model of how the information is to be used to control the process of interest. The specification of the sensor system should ask the following questions. How will the material to be analysed be sampled? What decision can be made with the information available from a proposed sensor? This project provides a model of a systems approach to the implementation of automatic ovulation prediction in dairy cows. A healthy well managed dairy cow should calve every year to make the best use of forage. As most cows are inseminated artificially it is of vital importance that cows are regularly monitored for signs of oestrus. The pressure on dairymen to manage more cows often leads to less time being available for observation of cows to detect oestrus. This, together with breeding and feeding for increased yields, has led to a reduction in reproductive performance. In the UK the typical dairy farmer could save € 12800 per year if ovulation could be predicted accurately. Research over a number of years has shown that regular analysis of milk samples with tests based on enzyme linked immunoassay (ELISA) can map the ovulation cycle. However, these tests require the farmer to implement a manually operated sampling and analysis procedure and the technique has not been widely taken up. The best potential method of achieving 98% specificity of prediction of ovulation is to adapt biosensor techniques to emulate the ELISA tests automatically in the milking system. An automated ovulation prediction system for dairy cows is specified. The system integrates a biosensor with automatic milk sampling and a herd management database. The biosensor is a screen printed carbon electrode system capable of

  11. Automatic indexing of news video for content-based retrieval

    NASA Astrophysics Data System (ADS)

    Yang, Myung-Sup; Yoo, Cheol-Jung; Chang, Ok-Bae

    1998-06-01

    Since it is impossible to automatically parse a general video, we investigated an integrated solution for content-based news video indexing and retrieval. A specific, structured video such as a news video can be parsed, because it includes both temporal and spatial characteristics: news events with an anchor-person appear repeatedly, and a news icon and a caption are involved in certain frames. To automatically extract the key frames by using the structured knowledge of news, the model used in this paper consists of a news event segmentation module, a caption recognition module and a search browser module. The following are the three main modules presented in this paper: (1) the news event segmentation module (NESM) for the recognition and division of anchor-person shots; (2) the caption recognition module (CRM) for detecting the caption frames in a news event, extracting their caption regions using a split-merge method, and recognizing the regions as text with OCR software; (3) the search browser module (SBM) for displaying the list of news events and the news captions included in a selected news event. In addition, various search mechanisms can be invoked through the SBM.

  12. Automatic activity estimation based on object behaviour signature

    NASA Astrophysics Data System (ADS)

    Martínez-Pérez, F. E.; González-Fraga, J. A.; Tentori, M.

    2010-08-01

    Automatic estimation of human activities is a widely studied topic. However, the process becomes difficult when we want to estimate activities from a video stream, because human activities are dynamic and complex. Furthermore, we have to take into account the amount of information that images provide, which makes modelling and estimating activities hard work. In this paper we propose a method for activity estimation based on object behaviour. Objects are located in a delimited observation area and their handling is recorded with a video camera. Activity estimation can be done automatically by analyzing the video sequences. The proposed method is called "signature recognition" because it considers a space-time signature of the behaviour of objects that are used in particular activities (e.g. patients' care in a healthcare environment for elderly people with restricted mobility). A pulse is produced when an object appears in or disappears from the observation area, i.e. there is a change from zero to one or vice versa. These changes are produced by the identification of the objects with a bank of nonlinear correlation filters. Each object is processed independently and produces its own pulses; hence we are able to recognize several objects with different patterns at the same time. The method is applied to estimate three healthcare-related activities of elderly people with restricted mobility.
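
    The pulse idea lends itself to a compact sketch: each recognized object contributes a binary presence signal over time, and an activity is scored by correlating the observed pulse matrix against stored activity signatures. The sketch below assumes the object identification step (the bank of nonlinear correlation filters) has already produced the presence signals; object names, signatures and data are illustrative only.

    import numpy as np

    objects = ["pill_box", "water_glass", "bp_cuff"]   # illustrative object set
    T = 120                                            # number of frames

    # observed presence pulses (rows: objects, columns: time), placeholder data
    observed = np.zeros((len(objects), T))
    observed[0, 10:30] = 1        # pill box visible
    observed[1, 25:50] = 1        # water glass visible

    # stored space-time signatures for known activities (same shape)
    signatures = {
        "take_medication": np.zeros((len(objects), T)),
        "measure_blood_pressure": np.zeros((len(objects), T)),
    }
    signatures["take_medication"][0, 5:35] = 1
    signatures["take_medication"][1, 20:55] = 1
    signatures["measure_blood_pressure"][2, 40:90] = 1

    def score(obs, sig):
        # normalised correlation between the flattened pulse patterns
        a, b = obs.ravel(), sig.ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    best = max(signatures, key=lambda k: score(observed, signatures[k]))
    print(best)    # "take_medication" for this toy input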

  13. Incremental logistic regression for customizing automatic diagnostic models.

    PubMed

    Tortajada, Salvador; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    In the last decades, and following the new trends in medicine, statistical learning techniques have been used to develop automatic diagnostic models for aiding clinical experts through the use of Clinical Decision Support Systems. The development of these models requires a large, representative amount of data, which is commonly obtained from one hospital or a group of hospitals after an expensive and time-consuming gathering, preprocessing, and validation of cases. After development, the model has to pass an external validation that is often carried out in a different hospital or health center. Experience shows that such models often fall short of expectations. Furthermore, ethical approval and patient consent are needed to send and store patient data. For these reasons, we introduce an incremental learning algorithm based on the Bayesian inference approach that may allow us to build an initial model with a smaller number of cases and update it incrementally when new data are collected, or even recalibrate a model from a different center by using a reduced number of cases. The performance of our algorithm is demonstrated by employing different benchmark datasets and a real brain tumor dataset; we compare its performance to a previous incremental algorithm and a non-incremental Bayesian model, showing that the algorithm is independent of the data model, iterative, and has good convergence. PMID:25417079
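
    As a rough illustration of the incremental idea (not the authors' Bayesian update), the sketch below keeps a logistic-regression weight vector and refines it with each new batch of cases instead of retraining from scratch; the data and learning rate are placeholders.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class IncrementalLogReg:
        def __init__(self, n_features, lr=0.1):
            self.w = np.zeros(n_features + 1)   # last entry is the bias term
            self.lr = lr

        def partial_fit(self, X, y):
            Xb = np.hstack([X, np.ones((len(X), 1))])
            p = sigmoid(Xb @ self.w)
            self.w -= self.lr * Xb.T @ (p - y) / len(X)   # one gradient step per batch

        def predict_proba(self, X):
            Xb = np.hstack([X, np.ones((len(X), 1))])
            return sigmoid(Xb @ self.w)

    # initial model from a small local dataset, then updates as new centres contribute data
    rng = np.random.default_rng(1)
    model = IncrementalLogReg(n_features=5)
    for batch in range(4):
        X = rng.normal(size=(50, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
        model.partial_fit(X, y)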

  14. Size-based protocol optimization using automatic tube current modulation and automatic kV selection in computed tomography.

    PubMed

    MacDougall, Robert D; Kleinman, Patricia L; Callahan, Michael J

    2016-01-01

    Size-based diagnostic reference ranges (DRRs) for contrast-enhanced pediatric abdominal computed tomography (CT) have been published in order to establish practical upper and lower limits of CTDI, DLP, and SSDE. Based on these DRRs, guidelines for establishing size-based SSDE target levels from the SSDE of a standard adult by applying a linear correction factor have been published and provide a great reference for dose optimization initiatives. The necessary step of designing manufacturer-specific CT protocols to achieve established SSDE targets is the responsibility of the Qualified Medical Physicist. The task is straightforward if fixed-mA protocols are used, however, more difficult when automatic exposure control (AEC) and automatic kV selection are considered. In such cases, the physicist must deduce the operation of AEC algorithms from technical documentation or through testing, using a wide range of phantom sizes. Our study presents the results of such testing using anthropomorphic phantoms ranging in size from the newborn to the obese adult. The effect of each user-controlled parameter was modeled for a single-manufacturer AEC algorithm (Siemens CARE Dose4D) and automatic kV selection algorithm (Siemens CARE kV). Based on the results presented in this study, a process for designing mA-modulated, pediatric abdominal CT protocols that achieve user-defined SSDE and kV targets is described. PMID:26894344

  15. Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription

    NASA Astrophysics Data System (ADS)

    Kabir, A.; Barker, J.; Giurgiu, M.

    2010-09-01

    An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is especially useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard Hidden Markov Model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is its increased flexibility in speech processing: the speech community can import the automatic transcription generated by the HMM Toolkit (HTK) into a popular transcription software, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates by an average of 20 ms from the manual transcription.

  16. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content or Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, but assigning metadata manually is not possible. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents and learning objects. If (public) enterprise objects were modelled in a 'machine understandable' way, they could build the context for automatic metadata generation. The approach introduced in this paper is to model this context (the (public) enterprise objects) in an ontology and to use that ontology to infer content-related metadata.

  17. Automatic analysis of a skull fracture based on image content

    NASA Astrophysics Data System (ADS)

    Shao, Hong; Zhao, Hong

    2003-09-01

    Automatic analysis based on image content is an active research topic with a bright future in medical image diagnosis. Analysis of skull fractures can help doctors diagnose. In this paper, a new approach is proposed to automatically detect skull fractures based on CT image content. First, a region-growing method, whose seeds and growing rules are chosen dynamically by k-means clustering, is applied for automatic image segmentation. The segmented region boundary is found by boundary tracing. Then the shape of the boundary is analyzed, and the circularity measure is taken as the description parameter. Finally, the rules for automatic computer diagnosis of skull fracture are derived with an entropy function. This method is used to analyze the images from the layer below the third ventricle to the top layer of the cerebral cortex. Experimental results show that the recognition rate is 100% for 100 images chosen randomly from a medical image database and not included in the training examples. This method integrates color and shape features and is not affected by image size and position. This research achieves a high recognition rate and sets a basis for automatic analysis of brain images.
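
    The circularity measure mentioned above is the classical shape descriptor 4*pi*Area/Perimeter^2, which equals 1 for a perfect circle and decreases for irregular contours. A minimal sketch, assuming the boundary-tracing step has produced an ordered polygon of (x, y) vertices:

    import numpy as np

    def circularity(boundary):
        pts = np.asarray(boundary, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        # shoelace formula for the enclosed area
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        # perimeter as the sum of edge lengths around the closed polygon
        perim = np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))
        return 4.0 * np.pi * area / perim ** 2

    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(theta), np.sin(theta)]
    print(round(circularity(circle), 3))    # close to 1.0 for a circle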

  18. Using suggestion to model different types of automatic writing.

    PubMed

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  19. Application of automatic differentiation to groundwater transport models

    SciTech Connect

    Bischof, C.H.; Ross, A.A.; Whiffen, G.J.; Shoemaker, C.A.; Carle, A.

    1994-06-01

    Automatic differentiation (AD) is a technique for generating efficient and reliable derivative codes from computer programs with a minimum of human effort. Derivatives of model output with respect to input are obtained exactly. No intrinsic limits to program length or complexity exist for this procedure. Calculation of derivatives of complex numerical models is required in systems optimization, parameter identification, and systems identification. We report on our experiences with the ADIFOR (Automatic Differentiation of Fortran) tool on a two-dimensional groundwater flow and contaminant transport finite-element model, ISOQUAD, and a three-dimensional contaminant transport finite-element model, TLS3D. Derivative values and computational times for the automatic differentiation procedure are compared with values obtained from the divided differences and handwritten analytic approaches. We found that the derivative codes generated by ADIFOR provided accurate derivatives and ran significantly faster than divided-differences approximations, typically in a tenth of the CPU time required for the imprecise divided-differences method for both codes. We also comment on the impact of automatic differentiation technology with respect to accelerating the transfer of general techniques developed for using water resource computer models, such as optimal design, sensitivity analysis, and inverse modeling problems to field problems.
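
    ADIFOR itself transforms Fortran source, which cannot be reproduced here; the toy sketch below only illustrates the underlying principle the abstract relies on, namely that forward-mode automatic differentiation propagates exact derivatives through program operations, while divided differences incur truncation error. The model function is a made-up stand-in.

    import math

    class Dual:
        """Value together with its derivative, propagated through + and *."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def dsin(x):
        # elementary function with its known derivative rule
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    def model(x):
        # made-up stand-in for a model output whose sensitivity is wanted
        if isinstance(x, Dual):
            return 3.0 * x * x + dsin(x)
        return 3.0 * x * x + math.sin(x)

    x0, h = 1.3, 1e-6
    exact = model(Dual(x0, 1.0)).der                # AD: exact derivative
    approx = (model(x0 + h) - model(x0)) / h        # divided difference: approximate
    print(exact, approx)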

  20. Study on automatic testing network based on LXI

    NASA Astrophysics Data System (ADS)

    Hu, Qin; Xu, Xing

    2006-11-01

    LXI (LAN eXtensions for Instrumentation), an extension of the widely used Ethernet technology to the automatic testing field, is the next-generation instrument platform. The LXI standard is based on industry-standard Ethernet technology, using the standard PC interface as the primary communication bus between devices. It implements the IEEE 802.3 standard and supports the TCP/IP protocol. LXI takes advantage of the ease of use of GPIB-based instruments, the high performance and compact size of VXI/PXI instruments, and the flexibility and high throughput of Ethernet all at the same time. The paper first introduces the specification of the LXI standard. Then, an automatic testing network architecture based on the LXI platform is proposed. The automatic testing network is composed of several sets of LXI-based instruments, which are connected via an Ethernet switch or router. The network is computer-centric, and all the LXI-based instruments in the network are configured and initialized from the computer. The computer controls the data acquisition and displays the data on the screen. The instruments use an Ethernet connection as the I/O interface, and can be triggered over a wired trigger interface, over LAN, or over the IEEE 1588 Precision Time Protocol running over the LAN interface. A hybrid automatic testing network comprising LXI-compliant devices and legacy instruments, including LAN instruments as well as GPIB, VXI and PXI products connected via internal or external adaptors, is also discussed at the end of the paper.

  1. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method is proposed based on the superpixel density of cluster centers. The image pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations; a normalized density and distance discrimination rule is then designed to achieve automatic classification and cluster center selection, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method requires no human intervention, classifies images faster than the density clustering algorithm, and effectively achieves automated classification and outlier extraction.
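
    The density-and-distance computation described above closely resembles density-peak clustering: each sample gets a local density and a distance to the nearest sample of higher density, and samples scoring high on both are taken as cluster centers while isolated low-density samples are flagged as outliers. A minimal sketch under that reading, with superpixel feature vectors replaced by synthetic data:

    import numpy as np

    def density_peaks(features, dc):
        d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
        rho = (d < dc).sum(axis=1) - 1                      # local density
        delta = np.empty(len(features))                     # distance to a denser point
        for i in range(len(features)):
            higher = np.where(rho > rho[i])[0]
            delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
        return rho, delta

    rng = np.random.default_rng(2)
    feats = np.vstack([rng.normal(0, 0.3, (40, 3)),         # cluster 1
                       rng.normal(3, 0.3, (40, 3)),         # cluster 2
                       [[10.0, 10.0, 10.0]]])               # isolated outlier
    rho, delta = density_peaks(feats, dc=1.0)
    centres = np.argsort(rho * delta)[-2:]                  # two strongest peaks
    outliers = np.where((delta > 3.0) & (rho <= 1))[0]      # far away and sparse
    print(centres, outliers)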

  2. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice were associated with an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the world of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.
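
    A bilinearly blended Coons patch interpolates four boundary curves, which is the surface construction named above. A minimal sketch, assuming the boundary curves are given as parametric functions over [0, 1] with consistent corner points (the building-model processing pipeline itself is not shown):

    import numpy as np

    def coons(c0, c1, d0, d1, u, v):
        """Evaluate the bilinearly blended Coons patch at parameters (u, v)."""
        P00, P10 = c0(0.0), c0(1.0)
        P01, P11 = c1(0.0), c1(1.0)
        ruled_u = (1 - v) * c0(u) + v * c1(u)
        ruled_v = (1 - u) * d0(v) + u * d1(v)
        bilinear = ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                    + (1 - u) * v * P01 + u * v * P11)
        return ruled_u + ruled_v - bilinear

    # toy patch over a unit square with one curved (non-planar) edge
    c0 = lambda u: np.array([u, 0.0, 0.2 * np.sin(np.pi * u)])   # bottom edge
    c1 = lambda u: np.array([u, 1.0, 0.0])                       # top edge
    d0 = lambda v: np.array([0.0, v, 0.0])                       # left edge
    d1 = lambda v: np.array([1.0, v, 0.0])                       # right edge

    grid = np.array([[coons(c0, c1, d0, d1, u, v)
                      for u in np.linspace(0, 1, 5)] for v in np.linspace(0, 1, 5)])
    print(grid.shape)    # (5, 5, 3) grid of surface points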

  3. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  4. Instance-based categorization: automatic versus intentional forms of retrieval.

    PubMed

    Neal, A; Hesketh, B; Andrews, S

    1995-03-01

    Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.
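
    Jacoby's process-dissociation procedure, referenced above, estimates the intentional (I) and automatic (A) contributions from inclusion and exclusion performance via Inclusion = I + A(1 - I) and Exclusion = A(1 - I). A small helper with illustrative (not experimental) proportions:

    def process_dissociation(p_inclusion, p_exclusion):
        # Inclusion = I + A(1 - I); Exclusion = A(1 - I)
        intentional = p_inclusion - p_exclusion
        automatic = p_exclusion / (1.0 - intentional) if intentional < 1.0 else float("nan")
        return intentional, automatic

    # illustrative proportions, not data from the reported experiments
    I, A = process_dissociation(p_inclusion=0.70, p_exclusion=0.25)
    print(round(I, 2), round(A, 2))    # 0.45 and about 0.45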

  5. On Automatic Support to Indexing a Life Sciences Data Base.

    ERIC Educational Resources Information Center

    Vleduts-Stokolov, N.

    1982-01-01

    Describes technique developed as automatic support to subject heading indexing at BIOSIS based on use of formalized language for semantic representation of biological texts and subject headings. Language structures, experimental results, and analysis of journal/subject heading and author/subject heading correlation data are discussed. References…

  6. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  7. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, during international negotiations on separating disputed areas, only manual adjustment is applied to the match between the delimitation line and the real terrain, which not only consumes much time and labor but also cannot ensure high precision. Concerning this, the paper mainly explores automatic matching between them and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is acquired through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Module Builder in order to share and reuse it. Finally, the result of the automatic match is analyzed from many different aspects, including delimitation laws, two-sided benefits and so on. Consequently, it is concluded that the automatic match method is feasible and effective.
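
    The core of the Least-Cost Path Analysis step is a shortest-path search over a cost raster. The sketch below uses Dijkstra's algorithm on a toy cost grid; constructing the cost layer from delimitation rules and terrain feature lines is assumed to have been done already.

    import heapq

    def least_cost_path(cost, start, goal):
        rows, cols = len(cost), len(cost[0])
        dist = {start: cost[start[0]][start[1]]}
        prev = {}
        pq = [(dist[start], start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]

    cost = [[1, 1, 5, 5],
            [5, 1, 5, 1],
            [5, 1, 1, 1],
            [5, 5, 5, 1]]
    print(least_cost_path(cost, (0, 0), (3, 3)))    # hugs the low-cost cells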

  8. Towards automatic calibration of 2-dimensional flood propagation models

    NASA Astrophysics Data System (ADS)

    Fabio, P.; Aronica, G. T.; Apel, H.

    2009-11-01

    Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays there are many models of different complexity available regarding the mathematical foundation and spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored due to the disturbances inflicted by them and the lack of appropriate measuring equipment in place. Secondly, the two-dimensional models in particular are computationally very demanding, and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well documented flood event of August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which enables automatic calibration, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by too many degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
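
    The calibration loop that PEST automates can be illustrated, in a deliberately simplified form, as a search over roughness parameters that minimises the misfit between simulated and observed water depths. In the sketch below, simulate_depths is a hypothetical stand-in for a 2-D hydraulic model run and is not part of PEST or of the study's setup.

    import itertools
    import numpy as np

    def simulate_depths(n_channel, n_floodplain):
        # placeholder "model": depths grow with roughness (purely illustrative)
        base = np.linspace(0.5, 2.0, 20)
        return base * (1.0 + 2.0 * n_channel) + 0.5 * n_floodplain

    observed = simulate_depths(0.035, 0.08) + np.random.default_rng(8).normal(0, 0.05, 20)

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    channel_grid = np.arange(0.02, 0.061, 0.005)       # Manning n, channel
    floodplain_grid = np.arange(0.04, 0.141, 0.02)     # Manning n, floodplain
    best = min(itertools.product(channel_grid, floodplain_grid),
               key=lambda p: rmse(simulate_depths(*p), observed))
    print(best, rmse(simulate_depths(*best), observed))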

  9. A manual and an automatic TERS based virus discrimination

    NASA Astrophysics Data System (ADS)

    Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen

    2015-02-01

    Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised with a classification accuracy of 91%.

  10. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  11. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically non-effective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week. In situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for a flight and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed in order to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
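
    Once the dense point cloud exists, the geometric step reduces to estimating a ground level and taking, for each detected stem position, the height of the highest point nearby. A minimal sketch on synthetic points, with stem positions assumed to come from the cross-section analysis described above:

    import numpy as np

    def tree_heights(points, stem_xy, radius=0.5, ground_percentile=2.0):
        ground = np.percentile(points[:, 2], ground_percentile)   # crude ground level
        heights = []
        for sx, sy in stem_xy:
            mask = np.hypot(points[:, 0] - sx, points[:, 1] - sy) < radius
            top = points[mask, 2].max() if mask.any() else ground
            heights.append(top - ground)
        return np.array(heights)

    rng = np.random.default_rng(3)
    ground_pts = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0.0, 0.02, 500)]
    tree_pts = np.c_[rng.normal(4.0, 0.2, (200, 2)), rng.uniform(0.0, 1.8, 200)]
    cloud = np.vstack([ground_pts, tree_pts])
    print(tree_heights(cloud, stem_xy=[(4.0, 4.0)]))    # roughly 1.8 for the toy tree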

  12. A new method for automatic discontinuity traces sampling on rock mass 3D model

    NASA Astrophysics Data System (ADS)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity traces mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, that usually characterize the trace mapping on images, are eliminated. Also trace sampling procedures based on circular windows and circular scanlines have been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing a manual sampling on the orthophotograph of the same rock face.

  13. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow allow increasing the efficiency of the production planning process in dynamic manufacturing networks. The use of a semi-automatic model generation concept based on a parametric approach supporting production planning processes is presented. The presented approach allows simulation and visualization to be used for the verification of production plans and of alternative topologies of manufacturing network configurations, together with the automatic generation of a series of production flow scenarios. Computational examples using the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  14. Edge density based automatic detection of inflammation in colonoscopy videos.

    PubMed

    Ševo, I; Avramović, A; Balasingham, I; Elle, O J; Bergsland, J; Aabakken, L

    2016-05-01

    Colon cancer is one of the deadliest diseases, where early detection can prolong life and increase survival rates. The early-stage disease is typically associated with polyps and mucosa inflammation. The commonly used diagnostic tools rely on high-quality videos obtained from colonoscopy or a capsule endoscope. The state-of-the-art image processing techniques of video analysis for automatic detection of anomalies use statistical and neural network methods. In this paper, we investigated a simple alternative model-based approach using texture analysis. The method can easily be implemented in parallel processing mode for real-time applications. A characteristic texture of inflamed tissue is used to distinguish between inflammatory and healthy tissues, and an appropriate filter kernel was proposed and implemented to efficiently detect this specific texture. The basic method is further improved to eliminate the effect of blood vessels present in the lower part of the descending colon. Both approaches of the proposed method were described in detail and tested in two different computer experiments. Our results show that the inflammatory region can be detected in real time with an accuracy of over 84%. Furthermore, the experimental study showed that it is possible to detect certain segments of video frames containing inflammations with a detection accuracy above 90%. PMID:27043856
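
    The texture-based idea can be approximated with a generic edge-density map: compute an edge image, average it over a local window and threshold the result. The sketch below is such a generic approximation, not the paper's specific filter kernel or its blood-vessel suppression step; the frame data and threshold are synthetic.

    import numpy as np
    from scipy import ndimage

    def edge_density_map(gray, window=15):
        gx = ndimage.sobel(gray, axis=0, mode="reflect")
        gy = ndimage.sobel(gray, axis=1, mode="reflect")
        edges = np.hypot(gx, gy)
        edges = edges / (edges.max() + 1e-9)          # normalise edge magnitude
        return ndimage.uniform_filter(edges, size=window)

    rng = np.random.default_rng(4)
    frame = rng.normal(0.5, 0.02, (128, 128))
    frame[40:80, 40:80] += rng.normal(0.0, 0.2, (40, 40))   # noisy, texture-rich patch
    density = edge_density_map(frame)
    inflamed_mask = density > 0.25                           # illustrative threshold
    print(inflamed_mask.mean())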

  15. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can

  16. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases including glaucoma. However, these systems are usually standalone software with basic functions only, limiting their usage on a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resultant medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous, anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling early intervention and more efficient disease management. PMID:26736579

  17. Automatic rainfall recharge model induction by evolutionary computational intelligence

    NASA Astrophysics Data System (ADS)

    Hong, Yoon-Seok Timothy; White, Paul A.; Scott, David M.

    2005-08-01

    Genetic programming (GP) is used to develop models of rainfall recharge from observations of rainfall recharge and rainfall, calculated potential evapotranspiration (PET) and soil profile available water (PAW) at four sites over a 4 year period in Canterbury, New Zealand. This work demonstrates that the automatic model induction method is a useful development in modeling rainfall recharge. The five best performing models evolved by genetic programming show a highly nonlinear relationship between rainfall recharge and the independent variables. These models are dominated by a positive correlation with rainfall, a negative correlation with the square of PET, and a negative correlation with PAW. The best performing GP models are more reliable than a soil water balance model at predicting rainfall recharge when rainfall recharge is observed in the late spring, summer, and early autumn periods. The "best" GP model provides estimates of cumulative sums of rainfall recharge that are closer than a soil water balance model to observations at all four sites.

  18. [Research Progress of Automatic Sleep Staging Based on Electroencephalogram Signals].

    PubMed

    Gao, Qunxia; Zhou, Jing; Wu, Xiaoming

    2015-10-01

    The research of sleep staging is not only a basis of diagnosing sleep related diseases but also the precondition of evaluating sleep quality, and has important clinical significance. In recent years, the research of automatic sleep staging based on computer has become a hot spot and got some achievements. The basic knowledge of sleep staging and electroencephalogram (EEG) is introduced in this paper. Then, feature extraction and pattern recognition, two key technologies for automatic sleep staging, are discussed in detail. Wavelet transform and Hilbert-Huang transform, two methods for feature extraction, are compared. Artificial neural network and support vector machine (SVM), two methods for pattern recognition are discussed. In the end, the research status of this field is summarized, and development trends of next phase are pointed out. PMID:26964329

  19. Automatic learning-based beam angle selection for thoracic IMRT

    SciTech Connect

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G. Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
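
    The learning step can be sketched as a regression from per-angle anatomical features to a beam score. The following uses a scikit-learn random forest on synthetic placeholder features; the actual feature definitions, the interbeam optimisation scheme and the clinical evaluation are beyond this sketch.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(5)
    n_beams, n_features = 2000, 12
    X = rng.normal(size=(n_beams, n_features))                   # features per beam angle
    y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, n_beams)    # surrogate beam score

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # score a discretised set of candidate gantry angles for a new patient and
    # keep the top-scoring directions as a starting beam arrangement
    candidate_features = rng.normal(size=(36, n_features))       # e.g. every 10 degrees
    scores = model.predict(candidate_features)
    top_angles = np.argsort(scores)[::-1][:7] * 10
    print(sorted(top_angles.tolist()))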

  20. A learning-based automatic spinal MRI segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Samarabandu, Jagath; Garvin, Greg; Chhem, Rethy; Li, Shuo

    2008-03-01

    Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve an automatic clinical segmentation that requires speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) images is extremely challenging, as variations in soft tissue contrast and radio-frequency (RF) in-homogeneities cause image intensity variations. Moreover, little work has been done in this area. We propose a generic, slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of our contribution is that the proposed method is able to segment multiple images of different slices simultaneously. Our proposed method also has the potential to be imaging-modality independent, as it is not specific to a particular imaging modality. The proposed method consists of two stages: candidate generation and verification. The candidate generation stage is aimed at obtaining the segmentation through energy minimization. In this stage, images are first partitioned into a number of image regions. Then, Support Vector Machines (SVMs) are applied to those pre-partitioned image regions to obtain the class conditional distributions, which are then fed into an energy function and optimized with the graph-cut algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones. Experimental results show that the proposed method is very efficient and robust with respect to image slices.

  1. [Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    We have collected massive amounts of stellar spectral data in recent years, which has made the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]) an important issue. Studying the automatic measurement of these three parameters has important significance for scientific problems such as the evolution of the universe. However, research on this problem is not very extensive, and some current methods are not able to estimate the values of the stellar atmospheric physical parameters completely and accurately. So in this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can achieve the prediction of stellar effective temperature Teff, surface gravity log g and metallic abundance [Fe/H]. This method has a small amount of computation and fast training speed. The main idea of this method is that, firstly, some mass distributions are built; secondly, the original spectral data are mapped into the mass space; and then the stellar parameters are predicted with support vector regression (SVR) in the mass space. We chose stellar spectral data from the United States' SDSS-DR8 survey for training and testing. We also compared the predicted results of this method with the SSPP and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively. PMID:26978937
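
    The final regression step can be sketched with one support vector regressor per parameter trained on the reduced representation. In the sketch below the paper's mass-space mapping is only stood in for by a random projection, and all data are synthetic placeholders.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(6)
    n_spectra, n_pixels, n_mass = 300, 1000, 20
    spectra = rng.normal(size=(n_spectra, n_pixels))
    params = rng.normal(size=(n_spectra, 3))             # placeholder Teff, log g, [Fe/H]

    projection = rng.normal(size=(n_pixels, n_mass)) / np.sqrt(n_pixels)
    mass_space = spectra @ projection                    # stand-in for the mass mapping

    # one SVR per atmospheric parameter, trained in the reduced space
    models = [SVR(kernel="rbf", C=10.0).fit(mass_space, params[:, k]) for k in range(3)]
    predicted = np.column_stack([m.predict(mass_space) for m in models])
    print(predicted.shape)                               # (300, 3)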

  3. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras, and its main goal is to monitor seismic activity, generate seismic hazard alerts and provide data for research on Brazilian tectonics and structure. Each institution operates and maintains its own seismic network, sharing the data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are performed automatically by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was therefore designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (h-k stack) and surface wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithosphere seismic velocity models using a joint inversion scheme. An analyst can review the results, change processing parameters, and select or reject the RF traces and SWD curves used in the lithosphere model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  4. Automatic identification of model reductions for discrete stochastic simulation

    NASA Astrophysics Data System (ADS)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.
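
    As a rough illustration of why time-scale separation creates reduction opportunities, the Gillespie-style sketch below simulates a toy two-species system and counts how often each reaction channel fires; a channel that fires orders of magnitude more often than the slow dynamics evolve is a natural candidate for a quasi-steady-state or partial-equilibrium reduction. The adaptively weighted Petri-net analysis of the paper is far more sophisticated; everything below (species, rates, the firing-count heuristic) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Species: [A, B]; reactions: fast reversible A <-> B, slow degradation of B
stoich = np.array([[-1, +1],   # A -> B   (fast)
                   [+1, -1],   # B -> A   (fast)
                   [ 0, -1]])  # B -> 0   (slow)
rates = np.array([50.0, 50.0, 0.1])

def propensities(x):
    return rates * np.array([x[0], x[1], x[1]])

x, t, t_end = np.array([100, 0]), 0.0, 10.0
fired = np.zeros(3, dtype=int)
while t < t_end:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)            # time to the next reaction
    r = rng.choice(3, p=a / a0)               # which reaction fires
    x = x + stoich[r]
    fired[r] += 1

print("firing counts:", fired)   # the two fast channels dominate -> reduction candidate
```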

  5. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for registering 3D maxillofacial models, including the facial surface model and the skull model. The proposed registration algorithm achieves good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
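
    The coarse-to-fine structure of steps (2) and (3) can be sketched with Open3D, which is an assumption here (the original pipeline presumably relies on PCL's SAC-IA); FPFH features plus RANSAC-based feature matching stand in for SAC-IA, followed by point-to-point ICP refinement. File paths, the voxel size and all thresholds are placeholders, and Open3D >= 0.12 is assumed for the pipelines namespace.

```python
import open3d as o3d

def register(source_path, target_path, voxel=2.0):
    # Load and downsample the two maxillofacial point clouds (paths are placeholders)
    src = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel)
    tgt = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

    # FPFH descriptors on both clouds
    feat = lambda pc: o3d.pipelines.registration.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))

    # Coarse alignment: RANSAC over feature correspondences (stands in for SAC-IA)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, feat(src), feat(tgt), True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Refinement: point-to-point ICP started from the coarse transform
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 0.8, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```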

  6. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    SciTech Connect

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J.

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  7. Automatic emotion recognition based on body movement analysis: a survey.

    PubMed

    Zacharatos, Haris; Gatzoulis, Christos; Chrysanthou, Yiorgos L

    2014-01-01

    Humans are emotional beings, and their feelings influence how they perform and interact with computers. One of the most expressive modalities for humans is body posture and movement, which researchers have recently started exploiting for emotion recognition. This survey describes emerging techniques and modalities related to emotion recognition based on body movement, as well as recent advances in automatic emotion recognition. It also describes application areas and notation systems and explains the importance of movement segmentation. It then discusses unsolved problems and provides promising directions for future research. The Web extra (a PDF file) contains tables with additional information related to the article. PMID:25216477

  8. A fully automatic system for acid-base coulometric titrations.

    PubMed

    Cladera, A; Caro, A; Estela, J M; Cerdà, V

    1990-01-01

    An automatic system for acid-base titrations by electrogeneration of H(+) and OH(-) ions, with potentiometric end-point detection, was developed. The system includes a PC-compatible computer for instrumental control, data acquisition and processing, which allows up to 13 samples to be analysed sequentially with no human intervention. The system performance was tested on the titration of standard solutions, which it carried out with low errors and RSD. It was subsequently applied to the analysis of various samples of environmental and nutritional interest, specifically waters, soft drinks and wines.

  9. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation
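
    A minimal sketch of the workflow the abstract describes, using FEniCS together with the dolfin-adjoint package (which historically drove libadjoint and is now built on pyadjoint; using it here is an assumption). A Poisson problem stands in for the flow model: the forward model is written once in UFL, and the adjoint-based gradient of a functional with respect to a control field is obtained automatically.

```python
from fenics import *
from fenics_adjoint import *   # dolfin-adjoint overloads the FEniCS interface

# Forward model written once in UFL: a Poisson problem standing in for the flow model
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
m = interpolate(Constant(1.0), V)      # control field (e.g. an uncertain source term)

u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
u_sol = Function(V)
solve(inner(grad(u), grad(v)) * dx == m * v * dx, u_sol, bc)

# Objective functional; the adjoint model is derived and run automatically
J = assemble(0.5 * u_sol * u_sol * dx)
dJdm = compute_gradient(J, Control(m))   # gradient of J with respect to the control
```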

  10. Performance Improvement in Automatic Question Answering System Based on Dependency Term

    NASA Astrophysics Data System (ADS)

    Shi, Jianxing; Yuan, Xiaojie; Yu, Shitao; Ning, Hua; Wang, Chenying

    Automatic Question Answering (QA) systems have become quite popular in recent years, especially since the QA tracks appeared at the Text REtrieval Conference (TREC). However, using only lexical information, keyword-based information retrieval cannot fully describe the characteristics of natural language, so system performance remains unsatisfactory. This paper proposes a definition of the dependency term, based on dependency grammar and employing the natural-language dependency structure, as an improvement over the plain term for use in typical information retrieval models. It is, in effect, a solution for a special application in the XML information retrieval (XML IR) field. Experiments show that the dependency-term-based information retrieval model effectively describes the characteristics of natural language questions and improves the performance of the automatic question answering system.
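
    The abstract does not name a parser, but dependency terms of the kind described (a head word paired with a dependent and their grammatical relation) can be extracted with any dependency parser. A small illustrative sketch using spaCy, which is an assumption rather than the authors' toolchain:

```python
import spacy

# Requires a model download, e.g.: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def dependency_terms(text):
    """Return (head, relation, dependent) triples to index alongside plain keywords."""
    doc = nlp(text)
    return [(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in doc if tok.dep_ not in ("punct", "ROOT")]

print(dependency_terms("Which country launched the first artificial satellite?"))
# Triples such as ('launch', 'nsubj', 'country') can then be matched between
# questions and candidate answer passages, instead of bare keywords.
```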

  11. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  12. A semi-automatic model for sinkhole identification in a karst area of Zhijin County, China

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Oguchi, Takashi; Wu, Pan

    2015-12-01

    The objective of this study is to investigate the use of DEMs derived from ASTER and SRTM remote sensing images and from topographic maps to detect and quantify natural sinkholes in a karst area in Zhijin county, southwest China. Two methodologies were implemented. The first is a semi-automatic approach which identifies depressions stepwise using DEMs: 1) DEM acquisition; 2) sink filling; 3) sink depth calculation from the difference between the original and sink-free DEMs; and 4) elimination of spurious sinkholes using threshold values of morphometric parameters including TPI (topographic position index), geology, and land use. The second is the traditional visual interpretation of depressions based on integrated analysis of high-resolution aerial photographs and topographic maps. The threshold values of depression area, shape, depth and TPI appropriate for distinguishing true depressions were obtained from the maximum overall accuracy generated by comparing the depression maps produced by the semi-automatic model or by visual interpretation. The results show that the best performance of the semi-automatic model for meso-scale karst depression delineation was achieved using the DEM from the topographic maps with the thresholds area >~ 60 m2, ellipticity >~ 0.2 and TPI <= 0. With these realistic thresholds, the accuracy of the semi-automatic model ranges from 0.78 to 0.95 for DEM resolutions from 3 to 75 m.
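
    Steps 2 to 4 of the semi-automatic workflow (fill the sinks, take the filled-minus-original difference as depression depth, then screen candidate depressions by thresholds) can be sketched with SciPy and scikit-image. The grayscale-reconstruction sink fill, the cell size and the threshold values below are illustrative stand-ins, and the TPI, geology and land-use filters are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import reconstruction

def candidate_sinkholes(dem, cell_size=3.0, min_area_m2=60.0, min_depth=0.5):
    """Return a labelled array of depression candidates from a DEM (2D array)."""
    # Step 2: fill sinks by grayscale reconstruction by erosion, seeded from the DEM border
    seed = dem.copy()
    seed[1:-1, 1:-1] = dem.max()
    filled = reconstruction(seed, dem, method="erosion")

    # Step 3: depression depth = filled surface minus original surface
    depth = filled - dem

    # Step 4: keep connected depressions exceeding the depth and area thresholds
    labels, n = ndi.label(depth > min_depth)
    keep = np.zeros_like(labels)
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() * cell_size**2 >= min_area_m2:
            keep[region] = i
    return keep
```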

  13. Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN

    NASA Technical Reports Server (NTRS)

    Griffis, H.

    1985-01-01

    Large general purpose finite element programs require users to develop large quantities of input data. General purpose pre-processors are used to decrease the effort required to develop structural models. Further reduction of effort can be achieved by specific application pre-processors. Automatic Dynamic Aircraft Modeler (ADAM) is one such application specific pre-processor. General purpose pre-processors use points, lines and surfaces to describe geometric shapes. Specifying that ADAM is used only for aircraft structures allows generic structural sections, wing boxes and bodies, to be pre-defined. Hence with only gross dimensions, thicknesses, material properties and pre-defined boundary conditions a complete model of an aircraft can be created.

  14. A graph-based approach for automatic cardiac tractography.

    PubMed

    Frindel, Carole; Robini, Marc; Schaerer, Joël; Croisille, Pierre; Zhu, Yue-Min

    2010-10-01

    A new automatic algorithm for assessing fiber-bundle organization in the human heart using diffusion-tensor magnetic resonance imaging is presented. The proposed approach departs from the locally "greedy" paradigm, which relies on the voxel-wise seed initialization intrinsic to conventional tracking algorithms. It formulates the fiber tracking problem as the global problem of computing paths in a boolean-weighted undirected graph, where each voxel is a vertex and each pair of neighboring voxels is connected by an edge. This leads to a global optimization task that can be solved by iterated-conditional-modes-like algorithms or Metropolis-type annealing. A new deterministic optimization strategy, namely iterated conditional modes with α-relaxation using (t(2))- and (t(4))-moves, is also proposed; it has similar performance to annealing but offers a substantial computational gain. This approach offers some important benefits. The global nature of our tractography method reduces sensitivity to noise and modeling errors. The discrete framework allows an optimal balance between the density of fiber bundles and the amount of available data. Besides, seed points are no longer needed; fibers are predicted in one shot for the whole diffusion-tensor magnetic resonance imaging volume, in a completely automatic way. PMID:20665895

  15. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    PubMed Central

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis cases successfully from all audio recordings without any false diagnosis. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreak control. PMID:27583523
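
    The block structure (extract features from cough segments, classify them with logistic regression) can be illustrated with a small sketch. The MFCC summary features from librosa and the placeholder file names are assumptions; the paper's actual detector, feature set and whoop analysis are not reproduced here.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(path, sr=16000):
    """MFCC summary statistics for one audio segment (stand-in feature set)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled cough segments: 1 = whooping/pertussis-like, 0 = other cough
train_paths, train_labels = ["cough_01.wav", "cough_02.wav"], [1, 0]
X = np.vstack([cough_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Probability that a new cough segment is pertussis-like
p = clf.predict_proba(cough_features("new_cough.wav").reshape(1, -1))[0, 1]
print(f"pertussis likelihood: {p:.2f}")
```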

  16. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    PubMed

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis cases successfully from all audio recordings without any false diagnosis. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreak control. PMID:27583523

  17. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  18. Development of an Automatic Endoscope Positioning System based on Biological Fluctuation

    NASA Astrophysics Data System (ADS)

    Yamada, Yasuo; Nishikawa, Atsushi; Sekimoto, Mitsugu; Toda, Shingo; Takiguchi, Shuji; Miyoshi, Norikatsu; Kobayashi, Takeharu; Kazuhara, Kouhei; Ichihara, Takaharu; Kurashita, Naoto; Doki, Yuichiro; Mori, Masaki; Miyazaki, Fumio

    In general endoscopic surgery, the surgeon operates the instruments and a camera assistant operates the endoscope. Because of a shortage of physicians, there is demand for a transition to endoscopic solo surgery. To meet this demand, automatic endoscope positioning systems, which operate the endoscope in place of the camera assistant during surgery, have been actively researched both at home and abroad. Most research on automatic endoscope positioning has tried to build a model of the camera assistant's endoscopic operation. However, no system yet matches the performance of a camera assistant, largely because an accurate model of the camera assistant's endoscopic operation is so difficult to construct. We propose a bio-inspired automatic endoscope positioning algorithm, which is a non-model-based approach, and develop a system that implements the proposed algorithm. We assess the surgeon's procedure through gallbladder removal simulations using the automatic endoscope positioning system, compare an algorithm from previous research with the proposed algorithm, and validate the effectiveness of the proposed method against the surgeon's procedure.

  19. An automatic registration method based on runway detection

    NASA Astrophysics Data System (ADS)

    Zhang, Xiuqiong; Yu, Li; Huang, Guo

    2014-04-01

    Seeing the runway distinctly is a crucial condition during approach and landing. One enhanced vision method is fusion of infrared and visible images in an Enhanced Vision System (EVS). Image registration plays a very important role in image fusion, so an automatic image registration method based on accurate runway detection is proposed. First, the runway is detected in the infrared and visible images using the discrete wavelet transform (DWT). Then, a fitting triangle is constructed from the edges of the runway. The corresponding feature points, extracted from the midpoints of the edges and the centroid of the triangle, are used to compute the transform parameters. The registration results are more accurate and efficient than those of registration based on mutual information. The method is robust and computationally light, so it can be applied in real-time systems.
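
    Once the fitting triangle has been constructed in both images, the registration step reduces to estimating a transform from the three corresponding feature points. A compact sketch with OpenCV, in which the point coordinates and file names are placeholders:

```python
import numpy as np
import cv2

# Corresponding points in the infrared and visible images:
# two runway-edge midpoints plus the centroid of the fitting triangle (placeholder values)
pts_ir  = np.float32([[102, 240], [415, 236], [258, 180]])
pts_vis = np.float32([[110, 252], [423, 249], [266, 191]])

# Affine transform mapping the infrared frame onto the visible frame
M = cv2.getAffineTransform(pts_ir, pts_vis)

ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)      # placeholder file names
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
ir_registered = cv2.warpAffine(ir, M, (vis.shape[1], vis.shape[0]))

# The registered infrared image can now be fused with the visible image
fused = cv2.addWeighted(vis, 0.5, ir_registered, 0.5, 0)
```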

  20. An automatic invisible axion in the SUSY preon model

    NASA Astrophysics Data System (ADS)

    Babu, K. S.; Choi, Kiwoon; Pati, J. C.; Zhang, X.

    1994-08-01

    It is shown that the recently proposed preon model which provides a unified origin of the diverse mass scales and an explanation of family replication as well as of inter-family mass-hierarchy, naturally possesses a Peccei-Quinn (PQ) symmetry whose spontaneous breaking leads to an automatic invisible axion. Existence of the PQ-symmetry is simply a consequence of supersymmetry and the requirement of minimality in the field-content and interactions, which proposes that the lagrangian should possess only those terms which are dictated by the gauge principle and no others. In addition to the axion, the model also generates two superlight Goldstone bosons and their superpartners all of which are cosmologically safe.

  1. Automatic generation of matrix element derivatives for tight binding models

    NASA Astrophysics Data System (ADS)

    Elena, Alin M.; Meister, Matthias

    2005-10-01

    Tight binding (TB) models are one approach to the quantum mechanical many-particle problem. An important role in TB models is played by the hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to simulations of the dynamics of a system, derivatives of the matrix elements can also appear. In this work we give general expressions for the first and second derivatives of such matrix elements. Implemented in a tight binding computer program, such as DINAMO, they obviate the need to type all the required derivatives of all occurring matrix elements by hand.
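
    A toy example of generating such derivatives automatically rather than typing them by hand: a distance-dependent hopping element is written symbolically with SymPy, and its first and second derivatives with respect to an atomic coordinate are produced and compiled to a numerical function. The exponential radial form is an illustrative assumption, not the parametrisation used in DINAMO.

```python
import sympy as sp

# Relative position of the two atoms and a simple radial hopping parametrisation
x, y, z = sp.symbols("x y z", real=True)
V0, r0 = sp.symbols("V0 r0", positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
l = x / r                               # direction cosine along x

# Example matrix element: s-p_x hopping ~ l * V_sp(r), with V_sp(r) = V0*exp(-r/r0)
H_spx = l * V0 * sp.exp(-r / r0)

dH_dx = sp.simplify(sp.diff(H_spx, x))        # first derivative (enters forces)
d2H_dx2 = sp.simplify(sp.diff(H_spx, x, 2))   # second derivative (e.g. for phonons)

# Compile the symbolic result into a fast numerical callable for a TB/MD code
f1 = sp.lambdify((x, y, z, V0, r0), dH_dx, "numpy")
print(f1(1.0, 0.5, 0.2, 1.0, 1.0))
```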

  2. Regular algorithm for the automatic refinement of the spectral characteristics of acoustic finite element models

    NASA Astrophysics Data System (ADS)

    Suvorov, A. S.; Sokov, E. M.; V'yushkina, I. A.

    2016-09-01

    A new method is presented for the automatic refinement of finite element models of complex mechanical-acoustic systems using the results of experimental studies. The method is based on control of the spectral characteristics via selection of the optimal distribution of adjustments to the stiffness of a finite element mesh. The results of testing the method are given to show the possibility of its use to significantly increase the simulation accuracy of vibration characteristics of bodies with arbitrary spatial configuration.

  3. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  4. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    NASA Astrophysics Data System (ADS)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way to solve this problem is integrated modernization of the power engineering objects of mechanical engineering companies, aimed at increasing the efficiency of energy supply control and improving the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we built a mathematical model of the data exchange process between the measuring transformers and a universal SCADA system. The modeling results show that the equipment meets the Standard's requirements and that using a universal SCADA system for these purposes is preferable and economically reasonable. In the modeling the authors used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  5. Weighted ensemble based automatic detection of exudates in fundus photographs.

    PubMed

    Prentasic, Pavle; Loncaric, Sven

    2014-01-01

    Diabetic retinopathy (DR) is a visual complication of diabetes, which has become one of the leading causes of preventable blindness in the world. Exudate detection is an important problem in automatic screening systems for detection of diabetic retinopathy using color fundus photographs. In this paper, we present a method for detection of exudates in color fundus photographs, which combines several preprocessing and candidate extraction algorithms to increase the exudate detection accuracy. The first stage of the method consists of an ensemble of several exudate candidate extraction algorithms. In the learning phase, simulated annealing is used to determine weights for combining the results of the ensemble candidate extraction algorithms. The second stage of the method uses a machine learning-based classification for detection of exudate regions. The experimental validation was performed using the DRiDB color fundus image set. The validation has demonstrated that the proposed method achieved higher accuracy in comparison to state-of-the-art methods.

  6. Spike Detection Based on Normalized Correlation with Automatic Template Generation

    PubMed Central

    Hwang, Wen-Jyi; Wang, Szu-Huai; Hsu, Ya-Tzu

    2014-01-01

    A novel feedback-based spike detection algorithm for noisy spike trains is presented in this paper. It uses the information extracted from the results of spike classification to enhance spike detection. The algorithm performs template matching for spike detection with a normalized correlator. The detected spikes are then sorted by the OSort algorithm. The mean of the spikes of each cluster produced by the OSort algorithm is used as the template of the normalized correlator for subsequent detection. The automatic generation and updating of templates enhances the robustness of the spike detection to input trains with various spike waveforms and noise levels. Experimental results show that the proposed algorithm, operating in conjunction with OSort, is an efficient design for attaining high detection and classification accuracy for spike sorting. PMID:24960082
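
    The feedback loop (detect spikes with a normalized correlator, then refresh the template from the detected waveforms) can be sketched in a few lines of NumPy. The clustering step is reduced here to a single running-mean template rather than OSort, purely for illustration, and the detection threshold is an assumed value.

```python
import numpy as np

def normalized_correlation(signal, template):
    """Sliding normalized correlation of a template against a 1-D signal."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    out = np.zeros(len(signal) - n + 1)
    for i in range(len(out)):
        w = signal[i:i + n]
        out[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / n
    return out

def detect_and_update(signal, template, threshold=0.8):
    """Detect spikes, then refresh the template from the mean of the detected waveforms."""
    corr = normalized_correlation(signal, template)
    idx = np.where(corr > threshold)[0]
    waveforms = [signal[i:i + len(template)] for i in idx]
    if waveforms:                                   # feedback: template <- cluster mean
        template = np.mean(waveforms, axis=0)
    return idx, template
```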

  7. Towards Automatic Semantic Labelling of 3D City Models

    NASA Astrophysics Data System (ADS)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that semantically labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for these regions that either represent the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that function as a start in a region growing algorithm, to create regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties that imply the next steps in the research.

  8. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  9. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design, i.e., difficult, complex, yet redundant effort. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  10. Automatic leukocyte nucleus segmentation by intuitionistic fuzzy divergence based thresholding.

    PubMed

    Jati, Arindam; Singh, Garima; Mukherjee, Rashmi; Ghosh, Madhumala; Konar, Amit; Chakraborty, Chandan; Nagar, Atulya K

    2014-03-01

    The paper proposes a robust approach to automatic segmentation of leukocyte's nucleus from microscopic blood smear images under normal as well as noisy environment by employing a new exponential intuitionistic fuzzy divergence based thresholding technique. The algorithm minimizes the divergence between the actual image and the ideally thresholded image to search for the final threshold. A new divergence formula based on exponential intuitionistic fuzzy entropy has been proposed. Further, to increase its noise handling capacity, a neighborhood-based membership function for the image pixels has been designed. The proposed scheme has been applied on 110 normal and 54 leukemia (chronic myelogenous leukemia) affected blood samples. The nucleus segmentation results have been validated by three expert hematologists. The algorithm achieves an average segmentation accuracy of 98.52% in noise-free environment. It beats the competitor algorithms in terms of several other metrics. The proposed scheme with neighborhood based membership function outperforms the competitor algorithms in terms of segmentation accuracy under noisy environment. It achieves 93.90% and 94.93% accuracies for Speckle and Gaussian noises, respectively. The average area under the ROC curves comes out to be 0.9514 in noisy conditions, which proves the robustness of the proposed algorithm.

  11. Enhancing Automaticity through Task-Based Language Learning

    ERIC Educational Resources Information Center

    De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena

    2007-01-01

    In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a more…

  12. Search-matching algorithm for acoustics-based automatic sniper localization

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    Most modern automatic sniper localization systems are based on the acoustic emissions produced by gunfire events. In order to estimate the spatial coordinates of the sniper location, these systems measure the time delays of arrival of the acoustic shock wave fronts at a microphone array. In more advanced systems, model-based estimation of the nonlinear distortion parameters of the N-waves is used to estimate the projectile trajectory and calibre. In this work we address the sniper localization problem using a model-based search-matching approach. The automatic sniper localization algorithm works by searching for the acoustic model of ballistic shock waves which best matches the measured data. For this purpose, we implement a previously released acoustic model of ballistic shock waves. The sniper location, the projectile trajectory and calibre, and the muzzle velocity are regarded as the input variables of this model. A search algorithm is implemented to find the combination of input variables that minimizes a fitness function defined as the distance between measured and simulated data. In this way, the sniper location, the projectile trajectory and calibre, and the muzzle velocity can be found. In order to evaluate the performance of the algorithm, we conduct computer-based experiments using simulated gunfire event data calculated at the nodes of a virtual distributed sensor network. Preliminary simulation results are quite promising, showing fast convergence of the algorithm and good localization accuracy.
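
    The search-matching idea, finding the model inputs whose simulated acoustic observations best fit the measurements, can be sketched with a global optimizer. The forward model below is a crude spherical-wave time-of-arrival stand-in, not the ballistic shock-wave model used in the paper, and the sensor layout and measurements are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

C = 343.0                                  # speed of sound, m/s
sensors = np.array([[0, 0, 1], [30, 0, 1], [0, 30, 1], [30, 30, 1]], dtype=float)

def simulated_arrivals(source):
    """Crude forward model: spherical muzzle-blast arrival times at each sensor."""
    return np.linalg.norm(sensors - source, axis=1) / C

true_source = np.array([120.0, 85.0, 10.0])
measured = simulated_arrivals(true_source) + np.random.normal(0, 1e-4, len(sensors))
measured -= measured[0]                    # only relative arrival times are observable

def fitness(x):
    sim = simulated_arrivals(np.asarray(x))
    sim -= sim[0]
    return np.sum((sim - measured) ** 2)   # distance between simulated and measured data

result = differential_evolution(fitness, bounds=[(-200, 200), (-200, 200), (0, 50)])
print("estimated sniper location:", result.x)
```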

  13. Automatic optic disc segmentation based on image brightness and contrast

    NASA Astrophysics Data System (ADS)

    Lu, Shijian; Liu, Jiang; Lim, Joo Hwee; Zhang, Zhuo; Tan, Ngan Meng; Wong, Wing Kee; Li, Huiqi; Wong, Tien Yin

    2010-03-01

    Untreated glaucoma leads to permanent damage of the optic nerve and resultant visual field loss, which can progress to blindness. As glaucoma often produces additional pathological cupping of the optic disc (OD), the cup-to-disc ratio is one measure that is widely used for glaucoma diagnosis. This paper presents an OD localization method that automatically segments the OD and so can be applied to cup-to-disc-ratio-based glaucoma diagnosis. The proposed OD segmentation method is based on the observations that the OD is normally much brighter and at the same time has smoother texture characteristics compared with other regions within retinal images. Given a retinal image, we first capture the OD's smooth texture characteristic with a contrast image that is constructed from the local maximum and minimum pixel lightness within a small neighborhood window. The centre of the OD can then be determined from the density of the candidate OD pixels, which are detected as the retinal image pixels of lowest contrast. After that, an OD region is approximately determined by a pair of morphological operations, and the OD boundary is finally determined by an ellipse fitted to the convex hull of the detected OD region. Experiments over 71 retinal images of different qualities show that the OD region overlap reaches up to 90.37% between the OD boundary ellipses determined by our proposed method and those manually plotted by an ophthalmologist.
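
    The contrast-image construction and the final ellipse fit can be sketched as follows; the window size, percentile thresholds and structuring elements are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np
import cv2
from scipy import ndimage as ndi

def optic_disc_ellipse(gray, win=15, bright_pct=98, smooth_pct=20):
    """Rough OD boundary as an ellipse fitted to the low-contrast bright region."""
    # Contrast image from local max/min lightness in a small neighbourhood window
    local_max = ndi.maximum_filter(gray.astype(float), size=win)
    local_min = ndi.minimum_filter(gray.astype(float), size=win)
    contrast = (local_max - local_min) / (local_max + local_min + 1e-6)

    # Candidate OD pixels: bright and, among those, of lowest contrast
    bright = gray > np.percentile(gray, bright_pct)
    smooth = contrast < np.percentile(contrast[bright], smooth_pct)
    cand = (bright & smooth).astype(np.uint8)

    # Morphological clean-up, then fit an ellipse to the convex hull of the region
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    cand = cv2.morphologyEx(cand, cv2.MORPH_CLOSE, kernel)
    cand = cv2.morphologyEx(cand, cv2.MORPH_OPEN, kernel)
    pts = cv2.findNonZero(cand)
    hull = cv2.convexHull(pts)
    return cv2.fitEllipse(hull)            # ((cx, cy), (major, minor), angle)
```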

  14. Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models

    NASA Astrophysics Data System (ADS)

    Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.

    2014-12-01

    Modelling and understanding a large hydrosystem is a complex process that depends on regional and local processes. A nested interface concept has been implemented in the hydrosystem modelling platform for a large alluvial plain model (300 km2) that is part of an 11000 km2 multi-layer aquifer system included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (Mass balance, Unsaturated Zone, River and Groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets, from the regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale (a dense past piezometric snapshot), it permits the calibration and homogenization of model parameters across scales. The stepwise procedure starts with the optimisation of the water mass balance parameters at the regional scale using a conceptual 7-parameter bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their decomposition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based on a Chapman filter. Then, the model is run at the regional scale to provide recharge estimates and regional fluxes to the local groundwater model. Another inversion method is then used to determine the local hydrodynamic parameters. This procedure uses an initial kriged transmissivity field which is successively updated until the simulated hydraulic head distribution equals a reference one obtained by kriging. Then, the local parameters are upscaled to the regional model by a renormalisation procedure. This multi-scale automatic calibration procedure enhances the representation of both local and regional processes. Indeed, it permits a better description of local heterogeneities and of the associated processes, which are transposed into the regional model, improving the overall performances

  15. Automatic annotation of protein function based on family identification.

    PubMed

    Abascal, Federico; Valencia, Alfonso

    2003-11-15

    Although genomes are being sequenced at an impressive rate, the information generated tells us little about protein function, which is slow to characterize by traditional methods. Automatic protein function annotation based on computational methods has alleviated this imbalance. The most powerful current approach for inferring the function of new proteins is by studying the annotations of their homologues, since their common origin is assumed to be reflected in their structure and function. Unfortunately, as proteins evolve they acquire new functions, so annotation based on homology must be carried out in the context of orthologues or subfamilies. Evolution adds new complications through domain shuffling: homology (or orthology) frequently corresponds to domains rather than complete proteins. Moreover, the function of a protein may be seen as the result of combining the functions of its domains. Additionally, automatic annotation has to deal with problems related to the annotations in the databases: errors (which are likely to be propagated), inconsistencies, or different degrees of function specification. We describe a method that addresses these difficulties for the annotation of protein function. Sequence relationships are detected and measured to obtain a map of the sequence space, which is searched for differentiated groups of proteins (similar to islands on the map), which are expected to have a common function and correspond to groups of orthologues or subfamilies. This mapmaking is done by applying a clustering algorithm based on Normalized cuts in graphs. The domain problem is addressed in a simple way: pairwise local alignments are analyzed to determine the extent to which they cover the entire sequence lengths of the two proteins. This analysis determines both what homologues are preferred for functional inheritance and the level of confidence of the annotation. To alleviate the problems associated with database annotations, the information on all the

  16. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    SciTech Connect

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters; they are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the contrast of 2D RT images to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to select the optimal parameters automatically by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
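
    A simplified version of the parameter-selection idea, maximizing the entropy of the processed image over the CLAHE clip limit and the high-pass weight: a coarse grid search replaces the interior-point optimizer described above, scikit-image's CLAHE implementation stands in for the authors' code, and the parameter grids are assumed values.

```python
import numpy as np
from itertools import product
from scipy.ndimage import gaussian_filter
from skimage import exposure, measure, img_as_float

def auto_enhance(image):
    """Pick the CLAHE clip limit and high-pass weight that maximize output entropy."""
    img = img_as_float(image)
    best, best_entropy = None, -np.inf
    for clip, hp_weight in product([0.005, 0.01, 0.02, 0.04], [0.0, 0.3, 0.6]):
        # High-pass filtering by subtracting a Gaussian-smoothed copy
        highpass = np.clip(img - hp_weight * gaussian_filter(img, sigma=5), 0, 1)
        # Contrast-limited adaptive histogram equalization
        out = exposure.equalize_adapthist(highpass, clip_limit=clip)
        h = measure.shannon_entropy(out)
        if h > best_entropy:
            best, best_entropy = out, h
    return best
```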

  17. GEM System: automatic prototyping of cell-wide metabolic pathway models from genomes

    PubMed Central

    Arakawa, Kazuharu; Yamada, Yohei; Shinoda, Kosaku; Nakayama, Yoichi; Tomita, Masaru

    2006-01-01

    Background Successful realization of a "systems biology" approach to analyzing cells is a grand challenge for our understanding of life. However, current modeling approaches to cell simulation are labor-intensive, manual affairs, and therefore constitute a major bottleneck in the evolution of computational cell biology. Results We developed the Genome-based Modeling (GEM) System for the purpose of automatically prototyping simulation models of cell-wide metabolic pathways from genome sequences and other public biological information. Models generated by the GEM System include an entire Escherichia coli metabolism model comprising 968 reactions of 1195 metabolites, achieving 100% coverage when compared with the KEGG database, 92.38% with the EcoCyc database, and 95.06% with iJR904 genome-scale model. Conclusion The GEM System prototypes qualitative models to reduce the labor-intensive tasks required for systems biology research. Models of over 90 bacterial genomes are available at our web site. PMID:16553966

  18. Automatic Creation of Structural Models from Point Cloud Data: the Case of Masonry Structures

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; Conde-Carnero, B.; González-Jorge, H.; Arias, P.; Caamaño, J. C.

    2015-08-01

    One of the fields where 3D modelling has an important role is the application of such 3D models to structural engineering purposes. The literature shows intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance for masonry structures, where geometry plays a key role. In the work presented in this paper, color data (from the intensity attribute) are used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed and successful results were found. Geometric accuracy and point cloud resolution are constrained by the scanning instruments, giving accuracy levels of a few millimetres in the case of static instruments and a few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low-quality images, because acceptable segmentation results were found in cases where blocks could not be visually segmented.
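
    The improved marker-controlled watershed itself is not reproduced here, but the basic marker-controlled segmentation it builds on looks roughly like the scikit-image sketch below, applied to a 2.5D intensity raster derived from the point cloud. The rasterisation step is assumed to have been done already, and all parameters are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_blocks(intensity_raster, min_marker_distance=10):
    """Label masonry blocks in a 2.5D intensity image via marker-controlled watershed."""
    # Gradient image: block interiors are smooth, mortar joints give high gradients
    gradient = sobel(intensity_raster)

    # Markers: local maxima of the smoothed intensity, ideally one seed per block
    smoothed = ndi.gaussian_filter(intensity_raster, sigma=2)
    peaks = peak_local_max(smoothed, min_distance=min_marker_distance)
    markers = np.zeros(intensity_raster.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Watershed flooding of the gradient image from the markers
    return watershed(gradient, markers)
```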

  19. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline.

    PubMed

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases.
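
    The closed cubic spline representation itself is straightforward to reproduce: given ordered boundary points of a nucleus, SciPy can fit a periodic (closed) cubic B-spline and resample a smooth boundary. This sketch covers only the curve-representation step, not the full segmentation method.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def closed_cubic_spline(boundary_points, n_samples=200, smoothing=0.0):
    """Fit a closed (periodic) cubic spline through ordered 2-D boundary points."""
    x, y = boundary_points[:, 0], boundary_points[:, 1]
    # per=1 makes the spline periodic, i.e. a closed curve; k=3 gives a cubic
    tck, _ = splprep([x, y], s=smoothing, per=1, k=3)
    u = np.linspace(0, 1, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])

# Example: smooth a noisy, roughly circular nucleus outline
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
noisy = np.column_stack([np.cos(theta), np.sin(theta)]) + np.random.normal(0, 0.03, (40, 2))
smooth_boundary = closed_cubic_spline(noisy, smoothing=0.1)
```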

  1. An Automatic Method for Nucleus Boundary Segmentation Based on a Closed Cubic Spline

    PubMed Central

    Feng, Zhao; Li, Anan; Gong, Hui; Luo, Qingming

    2016-01-01

    The recognition of brain nuclei is the basis for localizing brain functions. Traditional histological research, represented by atlas illustration, achieves the goal of nucleus boundary recognition by manual delineation, but it has become increasingly difficult to extend this handmade method to delineating brain regions and nuclei from large datasets acquired by the recently developed single-cell-resolution imaging techniques for the whole brain. Here, we propose a method based on a closed cubic spline (CCS), which can automatically segment the boundaries of nuclei that differ to a relatively high degree in cell density from the surrounding areas and has been validated on model images and Nissl-stained microimages of mouse brain. It may even be extended to the segmentation of target outlines on MRI or CT images. The proposed method for the automatic extraction of nucleus boundaries would greatly accelerate the illustration of high-resolution brain atlases. PMID:27378903

  2. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.

    PubMed

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
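
    "AP-Cluster" is not spelled out in the abstract; assuming it refers to affinity-propagation clustering, the representative-fingerprint step can be sketched with scikit-learn as follows, with synthetic RSSI fingerprints standing in for the crowd-sourced data.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Crowd-sourced fingerprints: one row per sample, one column per Wi-Fi access point (RSSI, dBm)
rng = np.random.default_rng(0)
fingerprints = rng.normal(loc=-70, scale=8, size=(300, 12))

# Cluster the fingerprints; the exemplar of each cluster is its representative fingerprint
ap = AffinityPropagation(random_state=0).fit(fingerprints)
representatives = ap.cluster_centers_          # exemplar fingerprints
labels = ap.labels_

print(f"{len(representatives)} representative fingerprints "
      f"from {len(fingerprints)} crowd-sourced samples")
# Each representative is then anchored to a reference position (e.g. a detected door)
# to build the radio-map.
```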

  3. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623

  4. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing.

    PubMed

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-04-09

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users' smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established.

  5. Automatic target validation based on neuroscientific literature mining for tractography.

    PubMed

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature (full-text articles and abstracts), so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by the text-mining models, in both rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and the literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large number of publications and abstracts. We believe this tool will help the neuroscience community to facilitate connectivity studies of particular brain regions. The text-mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.

  6. Automatic target validation based on neuroscientific literature mining for tractography

    PubMed Central

    Vasques, Xavier; Richardet, Renaud; Hill, Sean L.; Slater, David; Chappelier, Jean-Cedric; Pralong, Etienne; Bloch, Jocelyne; Draganski, Bogdan; Cif, Laura

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature (full-text articles and abstracts), so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by the text-mining models, in both rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and the literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large number of publications and abstracts. We believe this tool will help the neuroscience community to facilitate connectivity studies of particular brain regions. The text-mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.
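
    The recall and precision figures quoted above are simple set comparisons between the text-mined target list and the curated one; the sketch below reproduces that computation with hypothetical structure names only.

      # Sketch: recall and precision of text-mined targets against a
      # manually curated literature review (target names are hypothetical).
      curated = {"thalamus", "substantia nigra", "putamen", "amygdala"}
      text_mined = {"thalamus", "substantia nigra", "putamen",
                    "claustrum", "insula"}

      true_pos = curated & text_mined
      recall = len(true_pos) / len(curated)
      precision = len(true_pos) / len(text_mined)
      print(f"recall={recall:.2f}, precision={precision:.2f}")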

  7. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element method. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, so the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can improve accuracy and efficiency for groundwater flow models.
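
    The paper's scheme relies on a dG a posteriori estimate with a stability factor obtained from a dual problem; the sketch below only illustrates the generic idea of adapting the time step to a local error estimate, using backward Euler with step halving/doubling on a scalar decay problem, not the actual estimator.

      # Sketch: error-controlled adaptive time stepping (backward Euler with
      # step halving/doubling) on du/dt = -k*u. The paper's scheme instead
      # uses a dG a posteriori error estimate with a stability factor.
      def be_step(u, dt, k):
          return u / (1.0 + k * dt)          # backward Euler for u' = -k u

      def adaptive_integrate(u0, t_end, k, tol=1e-4, dt=0.1):
          t, u = 0.0, u0
          while t < t_end:
              dt = min(dt, t_end - t)
              u_big = be_step(u, dt, k)                           # one full step
              u_half = be_step(be_step(u, dt / 2, k), dt / 2, k)  # two half steps
              err = abs(u_big - u_half)                           # local error estimate
              if err <= tol:
                  t, u = t + dt, u_half
                  dt *= 1.5                                       # accept, grow step
              else:
                  dt *= 0.5                                       # reject, shrink step
          return u

      print(adaptive_integrate(1.0, 5.0, k=2.0))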

  8. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    SciTech Connect

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous and, furthermore, are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework, we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.
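
    A deliberately simplified caricature of the strength-based account, not the Cohen et al. network: each pathway accumulates evidence at a rate set by its training-derived strength, and the more-practised word pathway interferes with colour naming far more than the reverse. All numbers are illustrative.

      # Sketch: strength-based automaticity caricature of the Stroop asymmetry.
      # Response time = cycles needed to accumulate evidence to threshold.
      def cycles_to_threshold(strength, conflict=0.0, threshold=1.0):
          evidence, cycles = 0.0, 0
          while evidence < threshold:
              evidence += strength - conflict   # conflicting pathway slows accumulation
              cycles += 1
          return cycles

      word_strength, color_strength = 0.10, 0.04   # word reading trained far more
      print("color naming, neutral    :", cycles_to_threshold(color_strength))
      print("color naming, incongruent:", cycles_to_threshold(color_strength, conflict=0.02))
      print("word reading, incongruent:", cycles_to_threshold(word_strength, conflict=0.02))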

  9. Neural Signatures of Controlled and Automatic Retrieval Processes in Memory-based Decision-making.

    PubMed

    Khader, Patrick H; Pachur, Thorsten; Weber, Lilian A E; Jost, Kerstin

    2016-01-01

    Decision-making often requires retrieval from memory. Drawing on the neural ACT-R theory [Anderson, J. R., Fincham, J. M., Qin, Y., & Stocco, A. A central circuit of the mind. Trends in Cognitive Sciences, 12, 136-143, 2008] and other neural models of memory, we delineated the neural signatures of two fundamental retrieval aspects during decision-making: automatic and controlled activation of memory representations. To disentangle these processes, we combined a paradigm developed to examine neural correlates of selective and sequential memory retrieval in decision-making with a manipulation of associative fan (i.e., the decision options were associated with one, two, or three attributes). The results show that both the automatic activation of all attributes associated with a decision option and the controlled sequential retrieval of specific attributes can be traced in material-specific brain areas. Moreover, the two facets of memory retrieval were associated with distinct activation patterns within the frontoparietal network: The dorsolateral prefrontal cortex was found to reflect increasing retrieval effort during both automatic and controlled activation of attributes. In contrast, the superior parietal cortex only responded to controlled retrieval, arguably reflecting the sequential updating of attribute information in working memory. This dissociation in activation pattern is consistent with ACT-R and constitutes an important step toward a neural model of the retrieval dynamics involved in memory-based decision-making.

  10. The MSP430-based control system for automatic ELISA tester

    NASA Astrophysics Data System (ADS)

    Zhao, Xinghua; Zhu, Lianqing; Dong, Mingli; Lin, Ting; Niu, Shouwei

    2006-11-01

    This paper introduces the scheme of a control system for a fully automatic ELISA (Enzyme-Linked Immunosorbent Assay) tester. The tester is designed to realize the movement and positioning of the robotic arms and the pipettors and to perform functions such as pumping, reading, washing and incubating. It is based on the MSP430 flash chip, a 16-bit MCU manufactured by TI, with very low power consumption and powerful functions. This chip is adopted in all devices of the workstation to run the control program, to store the relevant parameters and data, and to drive stepper motors. Motors, sensors, valves and fans are connected to the MCUs as peripherals. A personal computer (PC) communicates with the instrument through an interface board. The relevant hardware circuits are provided. Two programs are developed: one runs on the PC and handles the user's operations on assay options and results; the other runs on the MCUs, initializes the system, and waits for commands to drive the mechanisms. Through various examinations, this control system has proved to be reliable, efficient and flexible.

  11. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is a key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
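
    A crude illustration of the storyboard-detection idea using off-the-shelf OpenCV primitives (Canny edges, contour bounding boxes, and a top-to-bottom, right-to-left sort); the paper's own edge-segment chaining and border-line selection are considerably more elaborate, and the file name below is hypothetical.

      # Sketch: crude comic-panel detection with OpenCV 4 (Canny + contours),
      # followed by a simple reading-order sort. Only a stand-in for the
      # paper's edge-segment chaining and border-line selection.
      import cv2

      def detect_panels(image_path, min_area_ratio=0.02):
          img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          edges = cv2.Canny(img, 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          page_area = img.shape[0] * img.shape[1]
          boxes = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              if w * h > min_area_ratio * page_area:   # keep panel-sized boxes only
                  boxes.append((x, y, w, h))
          # Japanese-style reading order: top-to-bottom, then right-to-left.
          return sorted(boxes, key=lambda b: (b[1], -b[0]))

      # panels = detect_panels("comic_page.png")  # hypothetical input file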

  12. Acoustical model of small calibre ballistic shock waves in air for automatic sniper localization applications

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    The phenomenon of ballistic shock wave emission by a small calibre projectile at supersonic speed is quite relevant in automatic sniper localization applications. When available, ballistic shock wave analysis makes it possible to estimate the main ballistic features of a gunfire event. The propagation of ballistic shock waves in air is a process which mainly involves nonlinear distortion, or steepening, and atmospheric absorption. Current ballistic shock wave propagation models used in automatic sniper localization systems only consider nonlinear distortion effects. This means that only the rates of change of shock peak pressure and N-wave duration with distance are considered in the determination of the miss distance. In this paper we present an improved acoustical model of small calibre ballistic shock wave propagation in air, intended for use in acoustics-based automatic sniper localization applications. In our approach we have considered nonlinear distortion, but we have additionally introduced the effects of atmospheric sound absorption. Atmospheric absorption is implemented in the time domain in order to get faster calculation times than those computed in the frequency domain. Furthermore, we take advantage of the fact that atmospheric absorption plays a fundamental role in the rise times of the shocks, and introduce the rate of change of the rise time with distance as a third parameter to be used in the determination of the miss distance. This leads to a more accurate and robust estimation of the miss distance and, consequently, of the projectile trajectory and the spatial coordinates of the gunshot origin.

  13. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    NASA Astrophysics Data System (ADS)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.
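
    One sub-step of such feature-based registration, matching two point sets by mutual nearest neighbours, can be sketched with a KD-tree; the KNN-graph and Voronoi-based refinements in the paper go well beyond this, and the point sets below are synthetic.

      # Sketch: mutual nearest-neighbour matching between two feature point
      # sets, a simplified stand-in for one matching stage of the pipeline.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(1)
      pts_optical = rng.uniform(0, 100, size=(50, 2))
      pts_sar = pts_optical + rng.normal(0, 0.5, size=pts_optical.shape)  # noisy copy

      tree_o, tree_s = cKDTree(pts_optical), cKDTree(pts_sar)
      _, o_to_s = tree_s.query(pts_optical)   # nearest SAR point for each optical point
      _, s_to_o = tree_o.query(pts_sar)       # nearest optical point for each SAR point

      matches = [(i, j) for i, j in enumerate(o_to_s) if s_to_o[j] == i]
      print(f"{len(matches)} mutual matches out of {len(pts_optical)} points")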

  14. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
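
    The final regression step can be sketched generically: fit a linear model from automatically measured girths and lengths to a DXA-derived fat value. The data and coefficients below are synthetic, not the study's.

      # Sketch: regress a DXA-style body fat value on automatically extracted
      # girths/lengths. Synthetic data; no real coefficients implied.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      n = 80
      waist_girth = rng.normal(95, 12, n)       # cm
      hip_girth = rng.normal(102, 10, n)        # cm
      trunk_length = rng.normal(55, 4, n)       # cm
      body_fat = (0.4 * waist_girth + 0.2 * hip_girth - 0.3 * trunk_length
                  + rng.normal(0, 3, n))        # synthetic "DXA" target

      X = np.column_stack([waist_girth, hip_girth, trunk_length])
      model = LinearRegression().fit(X, body_fat)
      print("R^2 on training data:", round(model.score(X, body_fat), 3))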

  15. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system, and multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) requires the full appearance information of the face or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  16. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    NASA Astrophysics Data System (ADS)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques based on Agilent's IC-CAP modeling package are presented to extract the parameters of our explicit compact model. The model is developed from a surface potential model and coded in Verilog-A. It has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulation of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy between the simulation results and the available experimental data, yielding more accurate parameter values and reliable circuit simulation. The behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.
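
    Outside of IC-CAP, the underlying extraction idea, minimizing the discrepancy between simulated and measured characteristics, can be sketched with a generic least-squares fit. The transfer-curve expression below is a deliberately simplified toy, not the surface-potential compact model of the paper.

      # Sketch: generic parameter extraction by least squares. The I-V model
      # here is a toy square-law expression used only to show the fitting loop.
      import numpy as np
      from scipy.optimize import least_squares

      def drain_current(vg, vth, k):
          return k * np.clip(vg - vth, 0.0, None) ** 2

      vg = np.linspace(0.0, 1.2, 25)
      measured = (drain_current(vg, vth=0.35, k=2e-4)
                  + np.random.default_rng(3).normal(0, 1e-6, vg.size))

      def residuals(p):
          vth, k = p
          return drain_current(vg, vth, k) - measured

      fit = least_squares(residuals, x0=[0.5, 1e-4])
      print("extracted vth, k:", fit.x)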

  17. Automatic building of a web-like structure based on thermoplastic adhesive.

    PubMed

    Leach, Derek; Wang, Liyu; Reusser, Dorothea; Iida, Fumiya

    2014-09-01

    Animals build structures to extend their control over certain aspects of the environment; e.g., orb-weaver spiders build webs to capture prey. Inspired by this behaviour, we attempt to develop robotics technology that allows a robot to automatically build structures that help it accomplish certain tasks. In this paper we show automatic building of a web-like structure with a robot arm based on thermoplastic adhesive (TPA) material. The material properties of TPA, such as elasticity, adhesiveness, and low melting temperature, make it possible for a robot to form threads across an open space by an extrusion-drawing process and then combine several of these threads into a web-like structure. The problems addressed here are discovering which parameters determine the thickness of a thread and determining how web-like structures may be used for certain tasks. We first present a model for the extrusion and drawing of TPA threads which also includes the temperature-dependent material properties. The model verification results show that the increasing relative surface area of the TPA thread as it is drawn thinner increases the heat loss of the thread, and that by controlling how quickly the thread is drawn, a range of diameters from 0.2 to 0.75 mm can be achieved. We then present a method based on a generalized nonlinear finite element truss model. The model was validated and could predict the deformation of various web-like structures when payloads are added. Finally, we demonstrate automatic building of a web-like structure for payload bearing.
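
    The qualitative relation between drawing speed and thread diameter can be illustrated with a bare volume-conservation estimate; the paper's model additionally accounts for temperature-dependent material properties and heat loss, which this sketch ignores, and all numbers below are hypothetical.

      # Sketch: thread diameter from volume conservation alone,
      # d_thread = d_nozzle * sqrt(v_extrude / v_draw).
      # Thermal effects (important in the paper) are ignored here.
      import math

      def thread_diameter(d_nozzle_mm, v_extrude, v_draw):
          return d_nozzle_mm * math.sqrt(v_extrude / v_draw)

      d_nozzle = 1.0          # mm, hypothetical nozzle diameter
      v_extrude = 2.0         # mm/s, hypothetical extrusion speed
      for v_draw in (5.0, 10.0, 20.0, 50.0):
          print(f"draw at {v_draw:5.1f} mm/s -> "
                f"{thread_diameter(d_nozzle, v_extrude, v_draw):.2f} mm thread")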

  18. Multi-objective automatic calibration of hydrodynamic models utilizing inundation maps and gauge data

    NASA Astrophysics Data System (ADS)

    Dung, N. V.; Merz, B.; Bárdossy, A.; Thang, T. D.; Apel, H.

    2011-04-01

    Automatic and multi-objective calibration of hydrodynamic models is, compared to other disciplines such as hydrology, still underdeveloped. This has mainly two reasons: the lack of appropriate data and the large computational demand in terms of CPU time. Both aspects are aggravated in large-scale applications. However, there are recent developments that improve the situation on both the data and the computing side. Remote sensing, especially radar-based techniques, has proved to provide highly valuable information on flood extents and, where high-precision DEMs are available, also on spatially distributed inundation depths. On the computing side, the use of parallelization techniques has brought significant performance gains. In the presented study we build on these developments by calibrating a large-scale 1-dimensional hydrodynamic model of the whole Mekong Delta downstream of Kratie in Cambodia: we combined in-situ data from a network of river gauging stations, i.e. data with high temporal but low spatial resolution, with a series of inundation maps derived from ENVISAT Advanced Synthetic Aperture Radar (ASAR) satellite images, i.e. data with low temporal but high spatial resolution, in a multi-objective automatic calibration process. It is shown that an automatic, multi-objective calibration of hydrodynamic models, even of the complexity and scale of a model for the Mekong Delta, is possible. Furthermore, the calibration process revealed deficiencies in the model structure, i.e. the representation of the dike system in Vietnam, which would have been difficult to detect by a standard manual calibration procedure.
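
    The two kinds of objectives combined in such a calibration, fit to gauged water-level time series and fit to observed inundation extent, can be sketched as plain functions; the arrays below are synthetic stand-ins, not Mekong data, and the actual study optimizes these objectives jointly rather than just evaluating them.

      # Sketch: the two objective types of a multi-objective calibration,
      # (1) RMSE against gauged water levels and (2) an F-score against a
      # binary inundation map.
      import numpy as np

      def gauge_rmse(simulated_levels, observed_levels):
          return float(np.sqrt(np.mean((simulated_levels - observed_levels) ** 2)))

      def flood_extent_fscore(simulated_mask, observed_mask):
          tp = np.logical_and(simulated_mask, observed_mask).sum()
          fp = np.logical_and(simulated_mask, ~observed_mask).sum()
          fn = np.logical_and(~simulated_mask, observed_mask).sum()
          return 2 * tp / (2 * tp + fp + fn)

      rng = np.random.default_rng(4)
      obs_levels = rng.uniform(1, 5, 100)
      sim_levels = obs_levels + rng.normal(0, 0.2, 100)
      obs_mask = rng.random((50, 50)) > 0.6
      sim_mask = obs_mask ^ (rng.random((50, 50)) > 0.95)   # mostly agreeing mask

      print("gauge RMSE:", round(gauge_rmse(sim_levels, obs_levels), 3))
      print("extent F-score:", round(flood_extent_fscore(sim_mask, obs_mask), 3))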

  19. Fully automatic vertebra detection in x-ray images based on multi-class SVM

    NASA Astrophysics Data System (ADS)

    Lecron, Fabian; Benjelloun, Mohammed; Mahmoudi, Saïd

    2012-02-01

    Automatically detecting vertebral bodies in X-ray images is a very complex task, especially because of the noise and low contrast inherent in this imaging modality. Therefore, the contributions in the literature mainly address only two imaging modalities: Computed Tomography (CT) and Magnetic Resonance (MR). Few works are dedicated to conventional X-ray radiography, and those mostly propose semi-automatic methods. However, vertebra detection is a key step in many medical applications such as vertebra segmentation, vertebral morphometry, etc. In this work, we develop a fully automatic approach to vertebra detection based on a learning method. The idea is to detect a vertebra by its anterior corners without human intervention. To this end, the points of interest in the radiograph are first detected by an edge polygonal approximation. Then, a SIFT descriptor is used to train an SVM model, so that each point of interest can be classified to determine whether it belongs to a vertebra or not. Our approach has been assessed on the detection of 250 cervical vertebræ on radiographs. The results show a very high precision, with a corner detection rate of 90.4% and a vertebra detection rate from 81.6% to 86.5%.

  20. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photographs of the patient's mouth. From a set of matched singular points between two photographs and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion produced by a specialist.

  1. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  2. Profiling School Shooters: Automatic Text-Based Analysis.

    PubMed

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  3. ModelMage: a tool for automatic model generation, selection and management.

    PubMed

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates the management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to the data and, when data are available, provides a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine, so all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122
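
    The core idea, enumerating candidate models obtainable by leaving user-selected components out of a master model, can be sketched independently of SBML and COPASI; the reaction names and directives below are hypothetical and the sketch is not the ModelMage tool itself.

      # Sketch: enumerate candidate models by leaving out user-selected
      # optional reactions from a master model.
      from itertools import combinations

      master_reactions = {"R1_synthesis", "R2_degradation",
                          "R3_feedback", "R4_transport"}      # hypothetical names
      optional = {"R3_feedback", "R4_transport"}               # user directives

      candidates = []
      for k in range(len(optional) + 1):
          for removed in combinations(sorted(optional), k):
              candidates.append(sorted(master_reactions - set(removed)))

      for i, model in enumerate(candidates):
          print(f"candidate {i}: {model}")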

  4. Automatic segmentation of breast MR images through a Markov random field statistical model.

    PubMed

    Ribes, S; Didierlaurent, D; Decoster, N; Gonneau, E; Risser, L; Feillel, V; Caselles, O

    2014-10-01

    An algorithm dedicated to the automatic segmentation of breast magnetic resonance images is presented in this paper. Our approach is based on a pipeline that includes a denoising step and statistical segmentation. The noise removal preprocessing relies on an anisotropic diffusion scheme, whereas the statistical segmentation is conducted through a Markov random field model. The continuous updating of all parameters governing the diffusion process enables automatic denoising, and the partial volume effect is also addressed during the labeling step. To assess the relevance of the approach, the Jaccard similarity coefficient was computed. Experiments were conducted on synthetic data and on breast magnetic resonance images extracted from a high-risk population. The relevance of the approach for the dataset is highlighted, and we demonstrate accuracy superior to that of traditional clustering algorithms. The results emphasize the benefits of both denoising guided by the input data and the inclusion of spatial dependency through a Markov random field. For example, the Jaccard coefficient for the clinical data was increased by 114%, 109%, and 140% with respect to a K-means algorithm for the adipose, glandular, and muscle-and-skin components, respectively. Moreover, the agreement between the manual segmentations provided by an experienced radiologist and the automatic segmentations performed with this algorithm was good, with Jaccard coefficients equal to 0.769, 0.756, and 0.694 for the above-mentioned classes.

  5. Model development for automatic guidance of a VTOL aircraft to a small aviation ship

    NASA Technical Reports Server (NTRS)

    Goka, T.; Sorensen, J. A.; Schmidt, S. F.; Paulk, C. H., Jr.

    1980-01-01

    This paper describes a detailed mathematical model which has been assembled to study automatic approach and landing guidance concepts to bring a VTOL aircraft onto a small aviation ship. The model is used to formulate system simulations which in turn are used to evaluate different guidance concepts. Ship motion (Sea State 5), wind-over-deck turbulence, MLS-based navigation, implicit model following flight control, lift fan V/STOL aircraft, ship and aircraft instrumentation errors, various steering laws, and appropriate environmental and human factor constraints are included in the model. Results are given to demonstrate use of the model and simulation to evaluate performance of the flight system and to choose appropriate guidance techniques for further cockpit simulator study.

  6. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    NASA Astrophysics Data System (ADS)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard IC-CAP package is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data makes it possible to obtain the model parameter values and the correlations/relations between parameters. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. A similar behavior is observed for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and the available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.

  7. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data that is able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates it as a Markov Random Field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in a few minutes and the second in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all experiments performed.

  8. Thesaurus-Based Automatic Indexing: A Study of Indexing Failure.

    ERIC Educational Resources Information Center

    Caplan, Priscilla Louise

    This study examines automatic indexing performed with a manually constructed thesaurus on a document collection of titles and abstracts of library science master's papers. Errors are identified when the meaning of a posted descriptor, as identified by context in the thesaurus, does not match that of the passage of text which occasioned the…

  9. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  10. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of methods of analysis, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  11. Automatic extraction of mandibular bone geometry for anatomy-based synthetization of radiographs.

    PubMed

    Antila, Kari; Lilja, Mikko; Kalke, Martti; Lötjönen, Jyrki

    2008-01-01

    We present an automatic method for segmenting Cone-Beam Computerized Tomography (CBCT) volumes and synthetizing orthopantomographic, anatomically aligned views of the mandibular bone. The model-based segmentation method was developed with the characteristics of dental CBCT in mind: severe metal artefacts, relatively high noise, and high variability of the mandibular bone shape. First, we applied the segmentation method to delineate the bone. Second, we aligned a model resembling the geometry of orthopantomographic imaging to the segmented surface. Third, we estimated the tooth orientations based on the local shape of the segmented surface. These results were used in determining the geometry of the synthetized radiograph. Segmentation yielded excellent results: on 14 samples we reached a mean distance of 0.57 ± 0.16 mm from the hand-drawn reference. The estimation of tooth orientations was accurate, with an error of 0.65 ± 8.0 degrees. An example of these results used in synthetizing panoramic radiographs is presented.

  12. Automatic script identification from images using cluster-based templates

    SciTech Connect

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
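
    The classification step, comparing scaled document symbols to per-script cluster-centroid templates and choosing the best-matching script, can be sketched with plain numpy; the templates below are random stand-ins rather than learned centroids, and only three script names are used for brevity.

      # Sketch: nearest-centroid script identification. Each script is
      # represented by cluster-centroid templates (random stand-ins here);
      # a document is assigned to the script whose templates best match
      # its symbols on average.
      import numpy as np

      rng = np.random.default_rng(5)
      templates = {script: rng.random((20, 16 * 16))          # 20 templates per script
                   for script in ("Roman", "Cyrillic", "Greek")}

      def best_script(symbols):
          scores = {}
          for script, temps in templates.items():
              # distance of each symbol to its closest template, averaged
              d = np.linalg.norm(symbols[:, None, :] - temps[None, :, :], axis=2)
              scores[script] = d.min(axis=1).mean()
          return min(scores, key=scores.get)

      doc_symbols = templates["Cyrillic"][:5] + rng.normal(0, 0.05, (5, 256))
      print(best_script(doc_symbols))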

  13. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis.

  14. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813
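
    The human-machine correlation step can be sketched generically: train a Support Vector Regression model from acoustic/prosodic features to perceptual ratings and score it with Pearson and Spearman correlations. The feature matrix and ratings below are synthetic stand-ins, not the study's data.

      # Sketch: SVR from acoustic/prosodic features to a perceptual
      # roughness-style rating, scored with Pearson r and Spearman rho.
      import numpy as np
      from scipy.stats import pearsonr, spearmanr
      from sklearn.svm import SVR

      rng = np.random.default_rng(6)
      n_speakers, n_features = 58, 7           # e.g. 6 prosodic features + CFx
      X = rng.normal(size=(n_speakers, n_features))
      ratings = X @ rng.normal(size=n_features) + rng.normal(0, 1.0, n_speakers)

      model = SVR(kernel="rbf", C=1.0).fit(X[:40], ratings[:40])
      predicted = model.predict(X[40:])
      print("Pearson r  :", round(pearsonr(predicted, ratings[40:])[0], 2))
      print("Spearman rho:", round(spearmanr(predicted, ratings[40:])[0], 2))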

  15. Study of burn scar extraction automatically based on level set method using remote sensing data.

    PubMed

    Liu, Yang; Dai, Qin; Liu, Jianbo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the advantages of the different features in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve, obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model.

  16. Study of burn scar extraction automatically based on level set method using remote sensing data.

    PubMed

    Liu, Yang; Dai, Qin; Liu, Jianbo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). The method exploits the advantages of the different features in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve, obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563

  17. Automatic vertebral identification using surface-based registration

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.

  18. Automatic Parallelization Using OpenMP Based on STL Semantics

    SciTech Connect

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  19. Automatic versus manual model differentiation to compute sensitivities and solve non-linear inverse problems

    NASA Astrophysics Data System (ADS)

    Elizondo, D.; Cappelaere, B.; Faure, Ch.

    2002-04-01

    Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, that produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually-produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared, in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the test of Taylor, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually-produced derivatives (MD) sometimes appear to be slightly biased, which is likely due to the fact that a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
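
    The test of Taylor referred to above checks that derivative code cancels the first-order term of a finite expansion, leaving a remainder that shrinks quadratically in the step size. A self-contained version for a small hand-differentiated function is shown below; it is an illustration of the test itself, not of Odyssée or the groundwater model.

      # Sketch: the Taylor test used to verify derivative code. If g is the
      # exact gradient of f, then |f(x + h*v) - f(x) - h*g(x).v| should shrink
      # like h^2 as h -> 0 (second-order remainder).
      import numpy as np

      def f(x):
          return np.sum(np.sin(x) ** 2)

      def grad_f(x):                       # derivative code under test
          return 2.0 * np.sin(x) * np.cos(x)

      rng = np.random.default_rng(7)
      x, v = rng.normal(size=5), rng.normal(size=5)
      for h in (1e-1, 1e-2, 1e-3, 1e-4):
          remainder = abs(f(x + h * v) - f(x) - h * np.dot(grad_f(x), v))
          print(f"h={h:.0e}  remainder={remainder:.3e}")   # should drop ~100x per row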

  20. Automatic Generation of Building Models with Levels of Detail 1-3

    NASA Astrophysics Data System (ADS)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  1. Study of variant design of SML-based coordinate measuring machines automatic measurement plan

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Wang, Boxiong; Wang, Junying; Chen, Huacheng; Luo, Xiuzhi

    2006-11-01

    Automatic creation of measurement plans is the trend in Coordinate Measuring Machine (CMM) measurement technology. Based on the Pro/CMM module of the Pro/E software, the idea of automatically generating the main DMIS (Dimensional Measuring Interface Standard) file of the measurement plan is described. To conveniently satisfy the special measurement requirements of different customers, a method for variant design of the DMIS file based on SML (Tabular Layouts of Article Characteristics) and the main DMIS file is proposed.

  2. An image-based automatic mesh generation and numerical simulation for a population-based analysis of aerosol delivery in the human lungs

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2013-11-01

    The authors propose a method to automatically generate three-dimensional subject-specific airway geometries and meshes for computational fluid dynamics (CFD) studies of aerosol delivery in the human lungs. The proposed method automatically expands computed tomography (CT)-based airway skeleton to generate the centerline (CL)-based model, and then fits it to the CT-segmented geometry to generate the hybrid CL-CT-based model. To produce a turbulent laryngeal jet known to affect aerosol transport, we developed a physiologically-consistent laryngeal model that can be attached to the trachea of the above models. We used Gmsh to automatically generate the mesh for the above models. To assess the quality of the models, we compared the regional aerosol distributions in a human lung predicted by the hybrid model and the manually generated CT-based model. The aerosol distribution predicted by the hybrid model was consistent with the prediction by the CT-based model. We applied the hybrid model to 8 healthy and 16 severe asthmatic subjects, and average geometric error was 3.8% of the branch radius. The proposed method can be potentially applied to the branch-by-branch analyses of a large population of healthy and diseased lungs. NIH Grants R01-HL-094315 and S10-RR-022421, CT data provided by SARP, and computer time provided by XSEDE.

  3. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    PubMed

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was previously not included in the optimization dataset. The presented results demonstrate that patient's medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required. PMID:23936315

  4. Electroporation-Based Treatment Planning for Deep-Seated Tumors Based on Automatic Liver Segmentation of MRI Images

    PubMed Central

    Pavliha, Denis; Mušič, Maja M.; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was previously not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required. PMID:23936315

  5. Template-based automatic extraction of the joint space of foot bones from CT scan

    NASA Astrophysics Data System (ADS)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values caused by noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying a graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region containing only two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint is set by initial seeds which are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).
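    The abstract reduces joint-space extraction to binary graph-cut segmentation with seed-based hard constraints. The sketch below illustrates that general idea using the PyMaxflow package; the library choice, the unary/pairwise weights and the seed masks are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import maxflow  # PyMaxflow; an assumed implementation choice, not specified by the authors

def graph_cut_binary(image, fg_seeds, bg_seeds, smoothness=50.0):
    """Binary MRF segmentation via s-t graph cut.
    `image` is a 2D float array scaled to [0, 1]; `fg_seeds`/`bg_seeds` are
    boolean masks used as hard constraints (e.g. generated by thresholding
    and morphological operations inside the template-defined ROI)."""
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(image.shape)
    # Pairwise (smoothness) term: a constant penalty for label changes between neighbours.
    g.add_grid_edges(node_ids, smoothness)
    # Unary (data) term: bright pixels get a large capacity to the source terminal,
    # so they tend to end up on the source ("bone") side of the cut.
    g.add_grid_tedges(node_ids, image * 100.0, (1.0 - image) * 100.0)
    # Hard constraints: effectively infinite capacities pin the seed pixels to a terminal.
    inf = 1e9
    g.add_grid_tedges(node_ids, fg_seeds.astype(float) * inf, bg_seeds.astype(float) * inf)
    g.maxflow()
    # get_grid_segments() is True on the sink side (background here),
    # so invert it to obtain the foreground (bone) mask.
    return ~g.get_grid_segments(node_ids)
```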

  6. Design of underwater robot lines based on a hybrid automatic optimization strategy

    NASA Astrophysics Data System (ADS)

    Lyu, Wenjing; Luo, Weilin

    2014-09-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG 6.0, GAMBIT 2.4.6 and FLUENT 12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, minimal resistance is taken as the optimization goal, the wetted surface area as the constraint condition, and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used. Analysis of the simulation results shows that the platform is efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.
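    The particle swarm optimizer at the core of the platform follows the standard velocity/position update rule. The following is a minimal, self-contained sketch of that loop with a stand-in analytic objective, since the real objective (CFD-computed resistance subject to a wetted-surface-area constraint) requires the commercial tool chain; all coefficients are illustrative assumptions.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer.
    `bounds` is an array of shape (dim, 2) with lower/upper limits for each
    design variable (e.g. fore-body length, maximum body radius)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Stand-in objective: in the paper this would be the CFD-computed resistance.
best_x, best_f = pso_minimize(lambda p: np.sum((p - 1.0) ** 2),
                              bounds=np.array([[0.0, 5.0]] * 3))
```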

  7. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image therefore becomes a significant task before lesion classification, which in previous works was usually done with manual delineation of the tumor boundaries. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification, and a disk expansion algorithm is then applied to automatically detect lesion boundaries. A set of sub-images, including smooth and irregular boundaries in tumor objects and those in speckle-noised background, is trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object-oriented or a background-oriented pixel. This enhanced binary image can highlight the object and suppress the speckle noise, and it can be regarded as a degraded paint character (DPC) image containing closure noise, which is well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method has been successfully demonstrated in our previous works. Boundary detection of ultrasonic breast lesions can thus be treated as equivalent to the removal of speckle noise. By applying the disk expansion method to the binary image, we obtain a significant radius-based image where the radius for each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used to search the complete breast lesion region, and thus the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to
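    As a rough illustration of the training step, the sketch below fits a linear SVM on flattened sub-image patches drawn from tumor and speckle-background regions and then classifies a pixel neighbourhood; the patch size, raw-intensity features and synthetic data are assumptions, not the features used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def extract_patches(image, centers, half=4):
    """Flatten (2*half+1)^2 neighbourhoods around the given (row, col) centers."""
    feats = []
    for r, c in centers:
        feats.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
    return np.asarray(feats)

# Hypothetical training data: patches from a darker "lesion" (label 1) and
# speckle-noised background (label 0), selected from training sub-images.
rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.15, (256, 256))
image[100:160, 100:160] -= 0.3
tumour_centers = [(r, c) for r in range(110, 150, 5) for c in range(110, 150, 5)]
bg_centers = [(r, c) for r in range(10, 60, 5) for c in range(10, 60, 5)]
X = np.vstack([extract_patches(image, tumour_centers),
               extract_patches(image, bg_centers)])
y = np.array([1] * len(tumour_centers) + [0] * len(bg_centers))

clf = LinearSVC(C=1.0).fit(X, y)

# Classify a new pixel neighbourhood into object (1) or background (0).
label = clf.predict(extract_patches(image, [(130, 130)]))[0]
```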

  8. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
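    Two pieces of the pipeline are easy to illustrate: superpixel generation and the Dice similarity coefficient used for evaluation. The sketch assumes SLIC superpixels from scikit-image (version 0.19 or later) as a stand-in, since the abstract does not name the superpixel algorithm, and uses synthetic masks.

```python
import numpy as np
from skimage.segmentation import slic

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two boolean masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Superpixel decomposition of a (smoothed) DCE-MRI slice; the number of
# segments and compactness are illustrative values only.
slice_ = np.random.rand(128, 128)
superpixels = slic(slice_, n_segments=200, compactness=0.1, channel_axis=None)

# After superpixel-wise classification and graph-cuts refinement, the final
# tumour mask would be scored against the manual ground truth, e.g.:
pred = slice_ > 0.7
truth = slice_ > 0.65
print(dice_coefficient(pred, truth))
```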

  9. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of a flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor-based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated into an open-source, end-to-end application called CCSeg, with both a command-line and a Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed excellent reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.
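    The core shape representation, complex Fourier descriptors of a closed 2D contour, can be sketched with a plain FFT; the full Active Shape Model, PCA shape space and optimization strategy of the paper are omitted, and the coefficient count and example contour are illustrative only.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Complex Fourier descriptors of a closed 2D contour.
    `contour` is an (N, 2) array of (x, y) points sampled along the boundary;
    only the lowest-frequency coefficients are kept."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    keep = np.zeros_like(coeffs)
    keep[:n_coeffs // 2] = coeffs[:n_coeffs // 2]
    keep[-(n_coeffs // 2):] = coeffs[-(n_coeffs // 2):]
    return keep

def reconstruct(coeffs):
    """Rebuild a smoothed contour from the (truncated) descriptor array."""
    z = np.fft.ifft(coeffs) * len(coeffs)
    return np.column_stack([z.real, z.imag])

# Example: an ellipse-like contour described and smoothed by 16 coefficients.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.column_stack([30 * np.cos(t), 15 * np.sin(t)])
smooth = reconstruct(fourier_descriptors(contour, n_coeffs=16))
```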

  10. Automatic event detection based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Doubravová, Jana; Wiszniowski, Jan; Horálek, Josef

    2015-04-01

    The proposed algorithm was developed for Webnet, a local seismic network in West Bohemia. The Webnet network was built to monitor the West Bohemia/Vogtland swarm area. During earthquake swarms there is a large number of events which must be evaluated automatically to get a quick estimate of the current earthquake activity. Our focus is to obtain good automatic results prior to precise manual processing. With automatic data processing we may also reach a lower completeness magnitude. The first step of automatic seismic data processing is the detection of events. Good detection performance requires a low number of false detections as well as a high number of correctly detected events. We used a single-layer recurrent neural network (SLRNN) trained on manual detections from swarms in West Bohemia in past years. As inputs of the SLRNN we use the STA/LTA of a half-octave filter bank fed by the vertical and horizontal components of seismograms. All stations were trained together to obtain the same network with the same neuron weights. We tried several architectures - different numbers of neurons - and different starting points for training. Networks giving the best results on the training set are not necessarily optimal for unknown waveforms; therefore we test each network on a test set from a different swarm (but still with similar characteristics, i.e. location, focal mechanisms, magnitude range). We also apply a coincidence verification for each event: the number of false detections can be lowered by rejecting events seen on one station only and declaring an event only when two or more stations in the network coincide. In further work we would like to retrain the network for each station individually, so that each station will have its own set of coefficients (neural weights). We would also like to apply this method to data from the Reykjanet network located on the Reykjanes peninsula, Iceland. As soon as we have a reliable detection, we can proceed to
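    The STA/LTA features fed to the network can be sketched directly; the half-octave filter bank, the recurrent network itself and the coincidence logic are omitted, and the window lengths and trigger threshold below are illustrative assumptions.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Classic STA/LTA ratio of the squared signal.
    `sta_len` and `lta_len` are window lengths in samples (LTA > STA)."""
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len
    # Align the two moving averages at the end of each window.
    n = min(len(sta), len(lta))
    sta, lta = sta[-n:], lta[-n:]
    ratio = np.zeros(n)
    np.divide(sta, lta, out=ratio, where=lta > 0)
    return ratio

# Synthetic example: noise with an "event" burst; a detection would be
# declared where the ratio exceeds a threshold on two or more stations.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 5000)
trace[3000:3200] += rng.normal(0, 8, 200)
cf = sta_lta(trace, sta_len=50, lta_len=500)
trigger = cf > 4.0
```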

  11. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  12. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model is developed that employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.

  13. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentation was performed on the fused images with both the proposed method and manual delineation by an experienced radiation oncologist. Segmentation results obtained with the two methods were similar, and the Dice similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as tumors extending into the chest wall or mediastinum.

  14. One-Day Offset between Simulated and Observed Daily Hydrographs: An Exploration of the Issue in Automatic Model Calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, M.; Leon, L.; Yang, W.

    2014-12-01

    The literature of hydrologic modelling shows that in daily simulation of the rainfall-runoff relationship, the simulated hydrograph response to some rainfall events happens one day earlier than the observed one. This one-day offset issue results in significant residuals between the simulated and observed hydrographs and adversely impacts the model performance metrics that are based on the aggregation of daily residuals. Based on the analysis of sub-daily rainfall and runoff data sets in this study, the one-day offset issue appears to be inevitable when the same time interval, e.g. the calendar day, is used to measure daily rainfall and runoff data sets. This is an error introduced through data aggregation and needs to be properly addressed before calculating the model performance metrics. Otherwise, the metrics would not represent the modelling quality and could mislead the automatic calibration of the model. In this study, an algorithm is developed to scan the simulated hydrograph against the observed one, automatically detect all one-day offset incidents and shift the simulated hydrograph of those incidents one day forward before calculating the performance metrics. This algorithm is employed in the automatic calibration of the Soil and Water Assessment Tool that is set up for the Rouge River watershed in Southern Ontario, Canada. Results show that with the proposed algorithm, the automatic calibration to maximize the daily Nash-Sutcliffe (NS) identifies a solution that accurately estimates the magnitude of peak flow rates and the shape of rising and falling limbs of the observed hydrographs. But, without the proposed algorithm, the same automatic calibration finds a solution that systematically underestimates the peak flow rates in order to perfectly match the timing of simulated and observed peak flows.
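    A crude sketch of the idea, not the authors' algorithm: detect simulated peaks that fall exactly one day before an observed peak, shift them forward, and compare the Nash-Sutcliffe (NS) efficiency before and after the correction. The peak-matching rule here is a simplified assumption.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed daily flows."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def shift_one_day_offsets(sim, obs):
    """Simplified stand-in for the paper's offset-scanning algorithm: wherever a
    simulated peak lands exactly one day before an observed peak, push that
    simulated peak one day forward before computing the metrics."""
    sim = np.asarray(sim, dtype=float).copy()
    obs = np.asarray(obs, dtype=float)
    for t in range(1, len(sim) - 1):
        sim_peak = sim[t] > sim[t - 1] and sim[t] > sim[t + 1]
        obs_peak_next = obs[t + 1] > obs[t] and (t + 2 >= len(obs) or obs[t + 1] > obs[t + 2])
        if sim_peak and obs_peak_next:
            sim[t + 1], sim[t] = sim[t], sim[t - 1]   # push the simulated peak one day forward
    return sim

# NS computed with and without the offset correction on a toy hydrograph
# whose simulated peaks arrive one day early:
obs = np.array([1, 1, 2, 9, 3, 1, 1, 2, 8, 3, 1], dtype=float)
sim = np.array([1, 1, 9, 2, 3, 1, 1, 8, 2, 3, 1], dtype=float)
print(nash_sutcliffe(sim, obs), nash_sutcliffe(shift_one_day_offsets(sim, obs), obs))
```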

  15. A probabilistic union model with automatic order selection for noisy speech recognition.

    PubMed

    Jancovic, P; Ming, J

    2001-09-01

    A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations, for selecting the bands unaffected by noise. A new method for this purpose, i.e., the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge about the band position and statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results have shown that the union model equipped with the new algorithm can achieve a recognition performance similar to that achieved when the number of noisy bands is known. The results show a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of the algorithm for selecting the order based on state duration may also be applied to other sub-band combination methods.

  16. Wireless Sensor Network-Based Greenhouse Environment Monitoring and Automatic Control System for Dew Condensation Prevention

    PubMed Central

    Park, Dae-Heon; Park, Jang-Woo

    2011-01-01

    Dew condensation on the leaf surface of greenhouse crops can promote diseases caused by fungi and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing the collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula to calculate the dew point on the leaves, the system is designed to prevent dew condensation on the crop surface, an important element in the prevention of disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control. PMID:22163813
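    The control logic can be sketched as follows. The abstract cites the Barenbrug formula for the leaf dew point but does not give it, so the sketch substitutes the widely used Magnus approximation as a clearly labelled stand-in; the risk margin is likewise an assumption.

```python
import math

def dew_point_magnus(temp_c, rel_humidity):
    """Dew point (deg C) from air temperature and relative humidity (0-100 %),
    using the Magnus approximation. NOTE: the paper uses the Barenbrug formula;
    Magnus is a stand-in since the abstract does not give it."""
    a, b = 17.27, 237.7
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_risk(leaf_temp_c, air_temp_c, rel_humidity, margin_c=0.5):
    """Flag dew-condensation risk when the leaf surface temperature drops to
    within `margin_c` of the dew point; the relay nodes would then drive
    heating or ventilation actuators."""
    return leaf_temp_c <= dew_point_magnus(air_temp_c, rel_humidity) + margin_c

# Example: 18 deg C air at 90 % RH gives a dew point near 16.3 deg C,
# so a leaf at 16.5 deg C would trigger the actuators.
print(dew_point_magnus(18.0, 90.0), condensation_risk(16.5, 18.0, 90.0))
```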

  17. Evaluation of Automatic Atlas-Based Lymph Node Segmentation for Head-and-Neck Cancer

    SciTech Connect

    Stapleford, Liza J.; Lawson, Joshua D.; Perkins, Charles; Edelman, Scott; Davis, Lawrence

    2010-07-01

    Purpose: To evaluate if automatic atlas-based lymph node segmentation (LNS) improves efficiency and decreases inter-observer variability while maintaining accuracy. Methods and Materials: Five physicians with head-and-neck IMRT experience used computed tomography (CT) data from 5 patients to create bilateral neck clinical target volumes covering specified nodal levels. A second contour set was automatically generated using a commercially available atlas. Physicians modified the automatic contours to make them acceptable for treatment planning. To assess contour variability, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to take collections of contours and calculate a probabilistic estimate of the 'true' segmentation. Differences between the manual, automatic, and automatic-modified (AM) contours were analyzed using multiple metrics. Results: Compared with the 'true' segmentation created from manual contours, the automatic contours had a high degree of accuracy, with sensitivity, Dice similarity coefficient, and mean/max surface disagreement values comparable to the average manual contour (86%, 76%, 3.3/17.4 mm automatic vs. 73%, 79%, 2.8/17 mm manual). The AM group was more consistent than the manual group for multiple metrics, most notably reducing the range of contour volume (106-430 mL manual vs. 176-347 mL AM) and percent false positivity (1-37% manual vs. 1-7% AM). Average contouring time savings with the automatic segmentation was 11.5 min per patient, a 35% reduction. Conclusions: Using the STAPLE algorithm to generate 'true' contours from multiple physician contours, we demonstrated that, in comparison with manual segmentation, atlas-based automatic LNS for head-and-neck cancer is accurate, efficient, and reduces interobserver variability.
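    STAPLE estimates the 'true' segmentation probabilistically from the raters' contours. As a simplified illustration only, the sketch below replaces STAPLE with a majority-vote consensus and scores an automatic contour against it with the sensitivity metric mentioned in the abstract; all masks are synthetic.

```python
import numpy as np

def majority_vote(masks):
    """Consensus of several raters' binary masks: a voxel is 'true' when more
    than half of the raters include it (a simplified stand-in for STAPLE,
    which instead estimates rater performance probabilistically)."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2.0)

def sensitivity(auto_mask, consensus):
    """Fraction of consensus voxels recovered by the automatic contour."""
    consensus = consensus.astype(bool)
    return np.logical_and(auto_mask.astype(bool), consensus).sum() / consensus.sum()

# Hypothetical example with five "physician" masks and one automatic contour.
rng = np.random.default_rng(2)
truth = np.zeros((64, 64), dtype=bool); truth[20:45, 20:45] = True
raters = [truth ^ (rng.random(truth.shape) < 0.02) for _ in range(5)]  # small disagreements
auto = np.zeros_like(truth); auto[22:44, 21:46] = True
print(sensitivity(auto, majority_vote(raters)))
```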

  18. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is used to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to variable sleep data in hospitals. The developed automatic determination technique based on expert knowledge from visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.

  19. Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Shen, Yuzhong

    2011-03-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially for networks that exist in the real world. A real road network contains various elements such as road segments, road intersections and traffic interchanges. Among these, traffic interchanges present the greatest modeling challenges due to their complexity and the lack of height (vertical position) information in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting, and they have the advantages of arbitrary levels of detail with reduced memory usage. Finally a set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.

  20. Multi-objective automatic calibration of hydrodynamic models utilizing inundation maps and gauge data

    NASA Astrophysics Data System (ADS)

    Dung, N. V.; Merz, B.; Bárdossy, A.; Thang, T. D.; Apel, H.

    2010-12-01

    Calibration of hydrodynamic models is - compared to other disciplines such as hydrology - still underdeveloped. This has two main reasons: the lack of appropriate data and the large computational demand in terms of CPU time. Both aspects are aggravated in large-scale applications. However, there are recent developments that improve the situation on both the data and the computing side. Remote sensing, especially radar-based techniques, has proved to provide highly valuable information on flood extents and, where high-precision DEMs are available, also on spatially distributed inundation depths. On the computing side, the use of parallelization techniques has brought significant performance gains. In the presented study we build on these developments by calibrating a large-scale 1-D hydrodynamic model of the whole Mekong Delta downstream of Kratie in Cambodia: we combined in-situ data from a network of river gauging stations, i.e. data with high temporal but low spatial resolution, with a series of inundation maps derived from ENVISAT Advanced Synthetic Aperture Radar (ASAR) satellite images, i.e. data with low temporal but high spatial resolution, in a multi-objective automatic calibration process. It is shown that an automatic, multi-objective calibration of hydrodynamic models, even of a model as complex and large-scale as that of the Mekong Delta, is possible. Furthermore, the calibration process revealed deficiencies in the model structure, i.e. in the representation of the dike system in Vietnam, which would have been difficult to detect by a standard manual calibration procedure.

  1. Automatic prone to supine haustral fold matching in CT colonography using a Markov random field model.

    PubMed

    Hampshire, Thomas; Roth, Holger; Hu, Mingxing; Boone, Darren; Slabaugh, Greg; Punwani, Shonit; Halligan, Steve; Hawkes, David

    2011-01-01

    CT colonography is routinely performed with the patient prone and supine to differentiate fixed colonic pathology from mobile faecal residue. We propose a novel method to automatically establish correspondence. Haustral folds are detected using a graph cut method applied to a surface curvature-based metric, where image patches are generated using endoluminal CT colonography surface rendering. The intensity difference between image pairs, along with additional neighbourhood information to enforce geometric constraints, are used with a Markov Random Field (MRF) model to estimate the fold labelling assignment. The method achieved fold matching accuracy of 83.1% and 88.5% with and without local colonic collapse. Moreover, it improves an existing surface-based registration algorithm, decreasing mean registration error from 9.7mm to 7.7mm in cases exhibiting collapse.

  2. Automatic identification of sources and trajectories of atmospheric Saharan dust aerosols with Latent Gaussian Models

    NASA Astrophysics Data System (ADS)

    Garbe, Christoph; Bachl, Fabian

    2013-04-01

    Dust transported from the Sahara across the ocean has a high impact on radiation fluxes and marine nutrient cycles. Significant progress has been made in characterising Saharan dust properties (Formenti et al., 2011) and its radiative effects through the 'SAharan Mineral dUst experiMent' (SAMUM) (Ansmann et al., 2011). While the models simulating Saharan dust transport processes have been considerably improved in recent years, it is still an open question which meteorological processes and surface characteristics are mainly responsible for dust transported to the Sub-Tropical Atlantic (Schepanski et al., 2009; Tegen et al., 2012). Currently, there exists a large discrepancy between modelled dust emission events and those observed from satellites. In this contribution we present an approach for classifying and tracking dust plumes based on a Bayesian hierarchical model. Recent developments in computational statistics known as Integrated Nested Laplace Approximations (INLA) have paved the way for efficient inference in a respective subclass, the Generalized Linear Model (GLM) (Rue et al., 2009). We present the results of our approach based on data from the SEVIRI instrument on board the Meteosat Second Generation (MSG) satellite. We demonstrate the accuracy of automatically detecting sources of dust and aerosol concentrations in the atmosphere. The trajectories of aerosols are also computed very efficiently. In our framework, we automatically identify optimal parameters for the computation of atmospheric aerosol motion. The applicability of our approach to a wide range of conditions will be discussed, as well as the ground truthing of our results and future directions in this field of research.

  3. Wavelet-based learning vector quantization for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.; Mirelli, Vincent

    1996-06-01

    An automatic target recognition classifier is constructed that uses a set of dedicated vector quantizers (VQs). The background pixels in each input image are properly clipped out by a set of aspect windows. The extracted target area for each aspect window is then enlarged to a fixed size, after which a wavelet decomposition splits the enlarged extraction into several subbands. A dedicated VQ codebook is generated for each subband of a particular target class at a specific range of aspects. Thus, each codebook consists of a set of feature templates that are iteratively adapted to represent a particular subband of a given target class at a specific range of aspects. These templates are then further trained by a modified learning vector quantization (LVQ) algorithm that enhances their discriminatory characteristics. A recognition rate of 69.0 percent is achieved on a highly cluttered test set.

  4. Image-based mobile service: automatic text extraction and translation

    NASA Astrophysics Data System (ADS)

    Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.

    2010-01-01

    We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.

  5. Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.

    2012-12-01

    Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on comparison of short-term and long-term average ratios (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that applies, consecutively, Kurtosis-derived characteristic functions (CFs) and eigenvalue decompositions to 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the Kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth and dip) calculated from eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR). A pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean bottom seismometers) in Vanuatu that ran for 10 months from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7. We made a total of 1601 picks, 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64
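    The kurtosis-derived characteristic function itself is straightforward to sketch: compute the kurtosis of the signal in a sliding window and look for a sharp increase. The window length, single-band treatment and gradient-based pick refinement below are illustrative assumptions; the multi-band, multi-window averaging and polarization steps of the paper are omitted.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, window):
    """Kurtosis-derived characteristic function: kurtosis of the signal inside
    a sliding window ending at each sample. A jump away from near-Gaussian
    values (kurtosis ~ 0 with the Fisher definition) marks a phase onset."""
    trace = np.asarray(trace, dtype=float)
    cf = np.zeros(len(trace))
    for i in range(window, len(trace)):
        cf[i] = kurtosis(trace[i - window:i])
    return cf

# Synthetic trace with an impulsive arrival; the pick is refined by taking
# the point of steepest positive CF gradient (approximate sample index).
rng = np.random.default_rng(3)
trace = rng.normal(0, 1, 4000)
trace[2500:2600] += rng.normal(0, 6, 100) * np.exp(-np.arange(100) / 30.0)
cf = kurtosis_cf(trace, window=200)
pick = int(np.argmax(np.diff(cf)))
```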

  6. Automatic Detection and Boundary Extraction of Lunar Craters Based on LOLA DEM Data

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ling, ZongCheng; Zhang, Jiang; Wu, ZhongChen

    2015-07-01

    Impact-induced circular structures, known as craters, are the most obvious geographic and geomorphic features on the Moon. Studies of lunar craters' patterns and spatial distributions play an important role in understanding the geologic processes of the Moon. In this paper, we propose a method based on digital elevation model (DEM) data from the Lunar Orbiter Laser Altimeter to detect lunar craters automatically. Firstly, the DEM data of the study areas are converted to a series of spatial fields having different scales, in which all overlapping depressions are detected in order (larger depressions first, then smaller ones). Then, each depression's true boundary is calculated by Fourier expansion and its shape parameters are computed. Finally, we recognize craters from training sets manually and build a binary decision tree to automatically classify the identified depressions into craters and non-craters. In addition, our crater-detection method can provide a fast and reliable evaluation of the ages of lunar geologic units, which is of great significance in lunar stratigraphy studies as well as global geologic mapping.

  7. Patch-based label fusion for automatic multi-atlas-based prostate segmentation in MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images, which utilizes a patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on anatomical signatures. This segmentation technique was validated with a clinical study of 13 patients and its accuracy was assessed using the physicians' manual segmentations (gold standard). Dice volumetric overlap was used to quantify the difference between the automatic and manual segmentations. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy with manual segmentations.
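    A single-voxel sketch of nonlocal patch-based label fusion: atlas patches are compared to the target patch and their centre labels are combined with similarity weights. The Gaussian weighting kernel, patch size and synthetic data are assumptions; atlas selection and registration are omitted.

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Fuse atlas labels for one voxel using patch-similarity weights.
    `atlas_patches` is (n_atlases, patch_size); `atlas_labels` holds the
    corresponding centre-voxel labels (0/1); `h` controls the decay of the
    similarity kernel (an assumed Gaussian weighting)."""
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * atlas_patches.shape[1]))
    score = np.sum(w * atlas_labels) / np.sum(w)
    return int(score >= 0.5)

# Hypothetical example: 5 atlases, 3x3x3 patches flattened to length 27;
# two atlases differ strongly from the target and get little weight.
rng = np.random.default_rng(4)
target = rng.normal(0.6, 0.05, 27)
atlases = target + rng.normal(0, [[0.02], [0.02], [0.3], [0.3], [0.02]], (5, 27))
labels = np.array([1, 1, 0, 0, 1])          # prostate vs. background at the patch centres
print(patch_label_fusion(target, atlases, labels))
```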

  8. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations; thereby, providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  9. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by AN Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher

  10. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. Like in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data but still common standards defining correct geometric modeling are not precise enough to define a sound base for data validation of 3D city models. Although the workflow for 3D city models is well-established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  11. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline the techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  12. Detection and classification of football players with automatic generation of models

    NASA Astrophysics Data System (ADS)

    Gómez, Jorge R.; Jaraba, Elias Herrero; Montañés, Miguel Angel; Contreras, Francisco Martínez; Uruñuela, Carlos Orrite

    2010-01-01

    We focus on the automatic detection and classification of players in a football match. Our approach is not based on any a priori knowledge of the outfits, but on the assumption that the two main uniforms detected correspond to the two football teams. The algorithm is designed to be able to operate in real time, once it has been trained, and is able to detect partially occluded players and update the color of the kits to cope with some gradual illumination changes through time. Our method, evaluated from real sequences, gave better detection and classification results than those obtained by a system using a manual selection of samples to compute a Gaussian mixture model.
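    The two-team assumption maps naturally onto a two-component colour mixture model. The sketch below fits a Gaussian mixture to the mean kit colours of detected player blobs and assigns team labels; the data are synthetic, and the re-fitting hint is only an assumption about how gradual illumination changes could be tracked, not the paper's exact update rule.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical mean RGB colours of detected player blobs (values in [0, 1]).
rng = np.random.default_rng(5)
team_a = rng.normal([0.8, 0.1, 0.1], 0.05, (30, 3))   # reddish kits
team_b = rng.normal([0.1, 0.1, 0.8], 0.05, (30, 3))   # bluish kits
colors = np.vstack([team_a, team_b])

# Two-component mixture: the two dominant kit colours are assumed to be the
# two teams, with no a priori knowledge of the outfits.
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(colors)
team_labels = gmm.predict(colors)

# The component means could be updated over time to follow gradual
# illumination changes, e.g. by periodically re-fitting on recent detections.
new_detection = np.array([[0.75, 0.15, 0.12]])
print(gmm.predict(new_detection))
```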

  13. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    NASA Astrophysics Data System (ADS)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage objects with complex architecture. After generating the 3D model of the object, it is difficult to perform exact texture mapping for the real object. We aim to create seamless texture maps for a virtual heritage model of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions kept as uniform as possible. After preprocessing, images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image, and an optimal correspondence between mesh patches and sections of the acquired images is built. Finally, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  14. Multiple-reason decision making based on automatic processing.

    PubMed

    Glöckner, Andreas; Betsch, Tilmann

    2008-09-01

    It has been repeatedly shown that in decisions under time constraints, individuals predominantly use noncompensatory strategies rather than complex compensatory ones. The authors argue that these findings might be due not to limitations of cognitive capacity but instead to limitations of information search imposed by the commonly used experimental tool Mouselab (J. W. Payne, J. R. Bettman, & E. J. Johnson, 1988). The authors tested this assumption in 3 experiments. In the 1st experiment, information was openly presented, whereas in the 2nd experiment, the standard Mouselab program was used under different time limits. The results indicate that individuals are able to compute weighted additive decision strategies extremely quickly if information search is not restricted by the experimental procedure. In a 3rd experiment, these results were replicated using more complex decision tasks, and the major alternative explanations that individuals use more complex heuristics or that they merely encode the constellation of cues were ruled out. In sum, the findings challenge the fundaments of bounded rationality and highlight the importance of automatic processes in decision making.

  15. Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Snarska, M.; Krzych, J.

    2006-11-01

    Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio, arising from lack of information, uncertainty and incomplete knowledge of reality, which forbids a perfect prediction of future price changes. Despite its many advantages, this tool is neither well known nor widely used among investors on the Warsaw Stock Exchange. The main reason for abandoning this method is its high level of complexity and the immense calculations involved. The aim of this paper is to introduce an automatic decision-making system which allows a single investor to use the complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by such a high amount of statistical uncertainty that it can be seen as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so the future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator {http://gra.onet.pl}. The result of the simulation is an 18% gain, compared with a respective 10% loss of the Warsaw Stock Exchange main index WIG.
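    The RMT step can be sketched as eigenvalue clipping of the empirical correlation matrix at the Marchenko-Pastur upper edge; the exact cleaning recipe used by the authors is not given in the abstract, so this is an illustrative variant on synthetic returns.

```python
import numpy as np

def clean_correlation(returns):
    """RMT-based noise filtering of an empirical correlation matrix.
    Eigenvalues below the Marchenko-Pastur upper edge are treated as noise
    and replaced by their average; the diagonal is then reset to 1."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    q = T / N
    lambda_max = (1.0 + 1.0 / np.sqrt(q)) ** 2      # MP upper edge for unit variance
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lambda_max
    vals = vals.copy()
    vals[noise] = vals[noise].mean()                 # flatten the noise band
    cleaned = (vecs * vals) @ vecs.T                 # V diag(vals) V^T
    np.fill_diagonal(cleaned, 1.0)
    return cleaned

# Synthetic daily returns for 50 assets over 500 days; the cleaned matrix
# would then feed a Markowitz-type portfolio optimization.
rng = np.random.default_rng(6)
returns = rng.normal(0, 0.01, (500, 50))
cleaned = clean_correlation(returns)
```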

  16. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnostic and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient, mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was of 0.93 for both regions.

  17. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    PubMed

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using the data mining approach. The AutoMealRecord data were examined to determine if it could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) with dietary preference was assessed with multiple linear regression analyses, and in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord System. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool.

  18. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    PubMed

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using the data mining approach. The AutoMealRecord data were examined to determine if it could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) with dietary preference was assessed with multiple linear regression analyses, and in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord System. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool. PMID:20534329

  19. Sensitivity based segmentation and identification in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Absher, R.

    1984-03-01

    This research program continued an investigation of sensitivity analysis, and its use in the segmentation and identification of the phonetic units of speech, that was initiated during the 1982 Summer Faculty Research Program. The elements of the sensitivity matrix, which express the relative change in each pole of the speech model to a relative change in each coefficient of the characteristic equation, were evaluated for an expanded set of data which consisted of six vowels contained in single words spoken in a simple carrier phrase by five males with differing dialects. The objectives were to evaluate the sensitivity matrix, interpret its changes during the production of the vowels, and to evaluate inter-speaker variations. It was determined that the sensitivity analysis (1) serves to segment the vowel interval, (2) provides a measure of when a vowel is on target, and (3) should provide sufficient information to identify each particular vowel. Based on the results presented, sensitivity analysis should result in more accurate segmentation and identification of phonemes and should provide a practicable framework for incorporation of acoustic-phonetic variance as well as time and talker normalization.

  20. A Telesurveillance System With Automatic Electrocardiogram Interpretation Based on Support Vector Machine and Rule-Based Processing

    PubMed Central

    Lin, Ching-Miao; Lai, Feipei; Ho, Yi-Lwun; Hung, Chi-Sheng

    2015-01-01

    Background Telehealth care is a global trend affecting clinical practice around the world. To mitigate the workload of health professionals and provide ubiquitous health care, a comprehensive surveillance system with value-added services based on information technologies must be established. Objective We conducted this study to describe our proposed telesurveillance system designed for monitoring and classifying electrocardiogram (ECG) signals and to evaluate the performance of ECG classification. Methods We established a telesurveillance system with an automatic ECG interpretation mechanism. The system included: (1) automatic ECG signal transmission via telecommunication, (2) ECG signal processing, including noise elimination, peak estimation, and feature extraction, (3) automatic ECG interpretation based on the support vector machine (SVM) classifier and rule-based processing, and (4) display of ECG signals and their analyzed results. We analyzed 213,420 ECG signals that were diagnosed by cardiologists as the gold standard to verify the classification performance. Results In the clinical ECG database from the Telehealth Center of the National Taiwan University Hospital (NTUH), the experimental results showed that the ECG classifier yielded a specificity value of 96.66% for normal rhythm detection, a sensitivity value of 98.50% for disease recognition, and an accuracy value of 81.17% for noise detection. For the detection performance of specific diseases, the recognition model mainly generated sensitivity values of 92.70% for atrial fibrillation, 89.10% for pacemaker rhythm, 88.60% for atrial premature contraction, 72.98% for T-wave inversion, 62.21% for atrial flutter, and 62.57% for first-degree atrioventricular block. Conclusions Through connected telehealth care devices, the telesurveillance system, and the automatic ECG interpretation system, this mechanism was intentionally designed for continuous decision-making support and is reliable enough to reduce the
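
    The abstract describes the interpretation pipeline only at a high level, so the sketch below shows one plausible way to combine an SVM classifier with a rule-based pre-check (here, a noise-rejection rule on a signal-quality feature). The feature set, the toy labels, and the quality threshold are assumptions for illustration, not the NTUH system.

```python
# Minimal sketch (not the NTUH system): an SVM over ECG-derived features combined with a
# simple rule-based override. Feature names and the noise-rejection rule are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# columns: mean RR interval (s), RR std (s), QRS width (s), signal-quality index
X = rng.normal(loc=[0.8, 0.05, 0.09, 0.9], scale=[0.2, 0.04, 0.02, 0.1], size=(500, 4))
y = (X[:, 1] > 0.07).astype(int)          # toy label: 1 = "arrhythmic", 0 = "normal"

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

def interpret(features):
    """Rule-based step first (noise rejection), then the SVM decision."""
    if features[3] < 0.5:                 # poor signal quality -> report noise, skip SVM
        return "noise"
    return "disease" if clf.predict([features])[0] == 1 else "normal"

print(interpret([0.8, 0.10, 0.09, 0.95]), interpret([0.8, 0.02, 0.09, 0.3]))
```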

  1. Speech Recognition-based and Automaticity Programs to Help Students with Severe Reading and Spelling Problems

    ERIC Educational Resources Information Center

    Higgins, Eleanor L.; Raskind, Marshall H.

    2004-01-01

    This study was conducted to assess the effectiveness of two programs developed by the Frostig Center Research Department to improve the reading and spelling of students with learning disabilities (LD): a computer Speech Recognition-based Program (SRBP) and a computer and text-based Automaticity Program (AP). Twenty-eight LD students with reading…

  2. BioASF: a framework for automatically generating executable pathway models specified in BioPAX

    PubMed Central

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K. Anton; Abeln, Sanne; Heringa, Jaap

    2016-01-01

    Motivation: Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. Results: To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. Availability and Implementation: The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF. Contact: j.heringa@vu.nl PMID:27307645

  3. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    PubMed

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three modeling approaches to item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential for meeting increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed. PMID:26933110

  4. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, the ratio of edge pixels to nonedge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is applied to merge points that satisfy a predefined threshold and are assumed to denote the same vehicle. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method achieved 86% overall correctness and 83% completeness.

  5. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
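
    As a rough illustration of the selection step (not the published ATLAAS implementation), the sketch below trains one regression tree per segmentation method to predict its Dice score from the three image features named above, then picks the method with the highest predicted score. The training data are random placeholders.

```python
# Sketch of the ATLAAS idea under stated assumptions: one regression tree per PET-AS method
# predicts its Dice score (DSC) from image features (volume, peak-to-background SUV ratio,
# a texture metric); the method with the highest predicted DSC is selected.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n_train, n_methods = 100, 9
features = rng.uniform([1, 2, 0], [50, 20, 1], size=(n_train, 3))   # volume, SUV ratio, texture
dsc = rng.uniform(0.5, 0.95, size=(n_train, n_methods))             # known-truth DSC per method

trees = [DecisionTreeRegressor(max_depth=4).fit(features, dsc[:, m])
         for m in range(n_methods)]

def select_method(x):
    predicted = np.array([t.predict([x])[0] for t in trees])
    return int(np.argmax(predicted)), predicted.max()

best, score = select_method([12.0, 6.5, 0.4])
print(f"predicted best PET-AS method: #{best} (expected DSC {score:.2f})")
```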

  6. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  7. A VxD-based automatic blending system using multithreaded programming.

    PubMed

    Wang, L; Jiang, X; Chen, Y; Tan, K C

    2004-01-01

    This paper discusses the object-oriented software design for an automatic blending system. By combining the advantages of a programmable logic controller (PLC) and an industrial control PC (ICPC), an automatic blending control system is developed for a chemical plant. The system structure and multithread-based communication approach are first presented in this paper. The overall software design issues, such as system requirements and functionalities, are then discussed in detail. Furthermore, by replacing the conventional dynamic link library (DLL) with virtual X device drivers (VxD's), a practical and cost-effective solution is provided to improve the robustness of the Windows platform-based automatic blending system in small- and medium-sized plants.

  8. Automatic fitting of spiking neuron models to electrophysiological recordings.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
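
    As a much-reduced sketch of the fitting problem (and not the Brian-based GPU library itself), the code below simulates a leaky integrate-and-fire neuron driven by an injected current and grid-searches two of its parameters; a crude spike-count criterion stands in for the coincidence-based measures used in practice, and all constants are invented.

```python
# Hedged sketch: fit a leaky integrate-and-fire (LIF) model to a target spike train by
# brute-force grid search. Parameter values and the matching criterion are illustrative only.
import numpy as np

def lif_spike_times(current, dt, tau, R, v_thresh, v_reset=0.0):
    v, spikes = 0.0, []
    for k, i_in in enumerate(current):
        v += dt / tau * (-v + R * i_in)       # membrane update
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

dt = 1e-4
t = np.arange(0, 1, dt)
current = 1.5e-9 * (1 + 0.3 * np.sin(2 * np.pi * 5 * t))          # injected current (A)
target = lif_spike_times(current, dt, tau=20e-3, R=1e8, v_thresh=0.1)

best = min(((tau, R) for tau in np.linspace(5e-3, 40e-3, 8)
                     for R in np.linspace(5e7, 2e8, 8)),
           key=lambda p: abs(len(lif_spike_times(current, dt, p[0], p[1], 0.1)) - len(target)))
print("best (tau, R):", best, "| target spike count:", len(target))
```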

  9. Automatic atlas-based volume estimation of human brain regions from MR images

    SciTech Connect

    Andreasen, N.C.; Rajarethinam, R.; Cizadlo, T.; Arndt, S.

    1996-01-01

    MRI offers many opportunities for noninvasive in vivo measurement of structure-function relationships in the human brain. Although automated methods are now available for whole-brain measurements, an efficient and valid automatic method for volume estimation of subregions such as the frontal or temporal lobes is still needed. We adapted the Talairach atlas to the study of brain subregions. We supplemented the atlas with additional boxes to include the cerebellum. We assigned all the boxes to 1 of 12 regions of interest (ROIs) (frontal, parietal, temporal, and occipital lobes, cerebellum, and subcortical regions on right and left sides of the brain). Using T1-weighted MR scans collected with an SPGR sequence (slice thickness = 1.5 mm), we manually traced these ROIs and produced volume estimates. We then transformed the scans into Talairach space and compared the volumes produced by the two methods ("traced" versus "automatic"). The traced measurements were considered to be the "gold standard" against which the automatic measurements were compared. The automatic method was found to produce measurements that were nearly identical to the traced method. We compared absolute measurements of volume produced by the two methods, as well as the sensitivity and specificity of the automatic method. We also compared the measurements of cerebral blood flow obtained through [15O]H2O PET studies in a sample of nine subjects. Absolute measurements of volume produced by the two methods were very similar, and the sensitivity and specificity of the automatic method were found to be high for all regions. The flow values were also found to be very similar by both methods. The automatic atlas-based method for measuring the volume of brain subregions produces results that are similar to manual techniques. 39 refs., 4 figs., 3 tabs.
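
    Once a scan is in atlas space, the volume-estimation step itself reduces to counting labelled voxels; the sketch below (with a synthetic label array and assumed voxel size) illustrates that computation.

```python
# Illustrative sketch of atlas-based volume estimation: each region's volume is the voxel
# count under the region label times the voxel volume. The label array here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
atlas_labels = rng.integers(0, 13, size=(128, 128, 90))   # 0 = background, 1..12 = ROIs
voxel_volume_mm3 = 1.0 * 1.0 * 1.5                        # e.g. 1 x 1 x 1.5 mm voxels

roi_names = {1: "right frontal", 2: "left frontal", 5: "cerebellum"}  # subset, for display
for label, name in roi_names.items():
    volume_ml = np.count_nonzero(atlas_labels == label) * voxel_volume_mm3 / 1000.0
    print(f"{name}: {volume_ml:.1f} ml")
```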

  10. Vertebral shape: automatic measurement with dynamically sequenced active appearance models.

    PubMed

    Roberts, M G; Cootes, T F; Adams, J E

    2005-01-01

    The shape and appearance of vertebrae on lateral dual x-ray absorptiometry (DXA) scans were statistically modelled. The spine was modelled by a sequence of overlapping triplets of vertebrae, using Active Appearance Models (AAMs). To automate vertebral morphometry, the sequence of trained models was matched to previously unseen scans. The dataset includes a significant number of pathologies. A new dynamic ordering algorithm was assessed for the model fitting sequence, using the best quality of fit achieved by multiple sub-model candidates. The accuracy of the search was improved by dynamically imposing the best quality candidate first. The results confirm the feasibility of substantially automating vertebral morphometry measurements even with fractures or noisy images.

  11. Automatic system for brain MRI analysis using a novel combination of fuzzy rule-based and automatic clustering techniques

    NASA Astrophysics Data System (ADS)

    Hillman, Gilbert R.; Chang, Chih-Wei; Ying, Hao; Kent, T. A.; Yen, John

    1995-05-01

    Analysis of magnetic resonance images (MRI) of the brain permits the identification and measurement of brain compartments. These compartments include normal subdivisions of brain tissue, such as gray matter, white matter and specific structures, and also include pathologic lesions associated with stroke or viral infection. A fuzzy system has been developed to analyze images of animal and human brain, segmenting the images into physiologically meaningful regions for display and measurement. This image segmentation system consists of two stages which include a fuzzy rule-based system and fuzzy c-means algorithm (FCM). The first stage of this system is a fuzzy rule-based system which classifies most pixels in MR images into several known classes and one `unclassified' group, which fails to fit the predetermined rules. In the second stage, this system uses the result of the first stage as initial estimates for the properties of the compartments and applies FCM to classify all the previously unclassified pixels. The initial prototypes are estimated by using the averages of the previously classified pixels. The combined processes constitute a fast, accurate and robust image segmentation system. This method can be applied to many clinical image segmentation problems. While the rule-based portion of the system allows specialized knowledge about the images to be incorporated, the FCM allows the resolution of ambiguities that result from noise and artifacts in the image data. The volumes and locations of the compartments can easily be measured and reported quantitatively once they are identified. It is easy to adapt this approach to new imaging problems, by introducing a new set of fuzzy rules and adjusting the number of expected compartments. However, for the purpose of building a practical fully automatic system, a rule learning mechanism may be necessary to improve the efficiency of modification of the fuzzy rules.
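
    As a compact illustration of the second stage, the snippet below implements the standard fuzzy c-means updates on synthetic 1-D pixel intensities; in the system described above, the prototypes would instead be initialised from the averages of the pixels already labelled by the rule-based stage.

```python
# A compact fuzzy c-means (FCM) loop in NumPy. Intensities and the random initialisation
# are placeholders; the rule-based first stage of the system is not reproduced here.
import numpy as np

def fcm(x, c, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12        # distances to prototypes
        u = 1.0 / (d ** (2 / (m - 1)))                           # unnormalised memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u

pixels = np.concatenate([np.random.normal(mu, 5, 400) for mu in (30, 90, 150)])
centers, memberships = fcm(pixels, c=3)
print("prototype intensities:", np.sort(centers).round(1))
```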

  12. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
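
    A minimal sketch of the classic dead-end elimination criterion underlying this family of methods is given below (the actual Fitmunk energy function, conformer libraries and electron-density term are not reproduced): rotamer r at a position is pruned when some competitor t is provably better in every context.

```python
# Illustrative DEE sketch: prune rotamer r at a position if some competitor t satisfies
#   E(r) + sum_j min_s E(r, j_s)  >  E(t) + sum_j max_s E(t, j_s).
# Energies here are random placeholders, not a crystallographic energy function.
import numpy as np

rng = np.random.default_rng(4)
n_pos, n_rot = 5, 6
E_self = rng.normal(size=(n_pos, n_rot))                 # self energies E(i_r)
E_pair = rng.normal(size=(n_pos, n_rot, n_pos, n_rot))   # pairwise energies E(i_r, j_s)

def dee_prune(pos):
    alive = np.ones(n_rot, dtype=bool)
    others = [j for j in range(n_pos) if j != pos]
    lo = lambda r: sum(E_pair[pos, r, j].min() for j in others)   # best case for r
    hi = lambda t: sum(E_pair[pos, t, j].max() for j in others)   # worst case for t
    for r in range(n_rot):
        for t in range(n_rot):
            if t != r and E_self[pos, r] + lo(r) > E_self[pos, t] + hi(t):
                alive[r] = False
                break
    return alive

print("surviving rotamers at position 0:", np.flatnonzero(dee_prune(0)))
```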

  13. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  14. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  15. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
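
    The kind of model ARM is meant to specify can be illustrated with a two-unit standby system: a small generator matrix whose absorbing state is system failure, solved with a matrix exponential. The failure and repair rates below are assumed values, not taken from the paper.

```python
# Hedged sketch of a Markov reliability model: two-unit standby system with failure rate
# lam and repair rate mu; state 2 (system failed) is absorbing, so 1 - P(state 2) is R(t).
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1                 # assumed per-hour failure and repair rates
# states: 0 = both units good, 1 = one failed (standby active), 2 = system failed
Q = np.array([[-lam,         lam,  0.0],
              [  mu, -(lam + mu),  lam],
              [ 0.0,         0.0,  0.0]])

p0 = np.array([1.0, 0.0, 0.0])
for t in (100.0, 1000.0, 10000.0):
    p_t = p0 @ expm(Q * t)
    print(f"t = {t:7.0f} h  reliability = {1 - p_t[2]:.6f}")
```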

  16. The feasibility of atlas-based automatic segmentation of MRI for H&N radiotherapy planning.

    PubMed

    Wardman, Kieran; Prestwich, Robin J D; Gooding, Mark J; Speight, Richard J

    2016-01-01

    Atlas-based autosegmentation is an established tool for segmenting structures for CT-planned head and neck radiotherapy. MRI is being increasingly integrated into the planning process. The aim of this study is to assess the feasibility of MRI-based, atlas-based autosegmentation for organs at risk (OAR) and lymph node levels, and to compare the segmentation accuracy with CT-based autosegmentation. Fourteen patients with locally advanced head and neck cancer in a prospective imaging study underwent a T1-weighted MRI and a PET-CT (with dedicated contrast-enhanced CT) in an immobilization mask. Organs at risk (orbits, parotids, brainstem, and spinal cord) and the left level II lymph node region were manually delineated on the CT and MRI separately. A 'leave one out' approach was used to automatically segment structures onto the remaining images separately for CT and MRI. Contour comparison was performed using multiple positional metrics: Dice index, mean distance to conformity (MDC), sensitivity index (Se Idx), and inclusion index (Incl Idx). Automatic segmentation using MRI of orbits, parotids, brainstem, and lymph node level was acceptable with a Dice coefficient of 0.73-0.91, MDC 2.0-5.1 mm, Se Idx 0.64-0.93, Incl Idx 0.76-0.93. Segmentation of the spinal cord was poor (Dice coefficient 0.37). The process of automatic segmentation was significantly better on MRI compared to CT for orbits, parotid glands, brainstem, and left lymph node level II by multiple positional metrics; spinal cord segmentation based on MRI was inferior compared with CT. Accurate atlas-based automatic segmentation of OAR and lymph node levels is feasible using T1-MRI; segmentation of the spinal cord was found to be poor. Comparison with CT-based automatic segmentation suggests that the process is equally as, or more accurate, using MRI. These results support further translation of MRI-based segmentation methodology into clinical practice. PMID:27455480
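
    The headline overlap metric used above is the Dice similarity coefficient; the short sketch below computes it for a pair of synthetic binary masks.

```python
# Dice similarity coefficient between an automatic and a manual binary segmentation mask.
# The masks here are synthetic; 5% of voxels are flipped to mimic disagreement.
import numpy as np

def dice(auto_mask, manual_mask):
    auto_mask, manual_mask = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

rng = np.random.default_rng(5)
manual = rng.random((64, 64, 32)) > 0.7
auto = manual.copy()
auto[rng.random(auto.shape) < 0.05] ^= True
print(f"Dice coefficient: {dice(auto, manual):.3f}")
```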

  17. A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques

    NASA Astrophysics Data System (ADS)

    Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk

    While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown word candidates is far smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to their rank and correctness when the candidates of an unknown word are considered as one group. A number of experiments were conducted on a large Thai medical text to evaluate the performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected and 97.26±0.26% when the top-ten candidates are considered, improvements of 8.45% and 6.79%, respectively, over the conventional record-based naïve Bayes classifier and the vanilla version. Applying only the best features yields 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, improvements of 3.97% and 9.78% over naïve Bayes and the vanilla version. Finally, an error analysis is given.

  18. Automatic generation of fuzzy rules for the sensor-based navigation of a mobile robot

    SciTech Connect

    Pin, F.G.; Watanabe, Y.

    1994-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  19. Sensor-based navigation of a mobile robot using automatically constructed fuzzy rules

    SciTech Connect

    Watanabe, Y.; Pin, F.G.

    1993-10-01

    A system for automatic generation of fuzzy rules is proposed which is based on a new approach, called "Fuzzy Behaviorist," and on its associated formalism for rule base development in behavior-based robot control systems. The automated generator of fuzzy rules automatically constructs the set of rules and the associated membership functions that implement reasoning schemes that have been expressed in qualitative terms. The system also checks for completeness of the rule base and independence and/or redundancy of the rules to ensure that the requirements of the formalism are satisfied. Examples of the automatic generation of fuzzy rules for cases involving suppression and/or inhibition of fuzzy behaviors are given and discussed. Experimental results obtained with the automated fuzzy rule generator applied to the domain of sensor-based navigation in a priori unknown environments using one of our autonomous test-bed robots are then presented and discussed to illustrate the feasibility of large-scale automatic fuzzy rule generation using our proposed "Fuzzy Behaviorist" approach.

  20. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  1. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    ERIC Educational Resources Information Center

    Perez-Marin, Diana; Pascual-Nieto, Ismael

    2010-01-01

    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  2. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  3. Automatic calibration of space based manipulators and mechanisms

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1988-01-01

    Four tasks in manipulator kinematic calibration are summarized. Calibration of a seven degree of freedom manipulator was simulated. A calibration model is presented that can be applied on a closed-loop robot. It is an expansion of open-loop kinematic calibration algorithms subject to constraints. A closed-loop robot with a five-bar linkage transmission was tested. Results show that the algorithm converges within a few iterations. The concept of model differences is formalized. Differences are categorized as structural and numerical, with emphasis on the structural. The work demonstrates that geometric manipulators can be visualized as points in a vector space with the dimension of the space depending solely on the number and type of manipulator joint. Visualizing parameters in a kinematic model as the coordinates locating the manipulator in vector space enables a standard evaluation of the models. Key results include a derivation of the maximum number of parameters necessary for models, a formal discussion on the inclusion of extra parameters, and a method to predetermine a minimum model structure for a kinematic manipulator. A technique is presented that enables single point sensors to gather sufficient information to complete a calibration.

  4. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
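
    As a minimal sketch of the background-subtraction idea (not the Zone-Distribution scheme itself, which the abstract does not detail), the snippet below maintains a running-average background that is only updated where no motion is detected, which limits ghosting from stopped vehicles.

```python
# Hedged sketch: running-average background update with a motion mask. The threshold,
# learning rate and synthetic frames are illustrative assumptions.
import numpy as np

def update_background(background, frame, alpha=0.02, motion_thresh=25):
    diff = np.abs(frame.astype(float) - background)
    motion_mask = diff > motion_thresh                    # foreground (vehicle) pixels
    updated = background.copy()
    updated[~motion_mask] = ((1 - alpha) * background[~motion_mask]
                             + alpha * frame[~motion_mask])
    return updated, motion_mask

background = np.full((240, 320), 120.0)                   # grey road surface
frame = background + np.random.normal(0, 2, background.shape)
frame[100:140, 150:200] = 40                              # dark vehicle enters the scene
background, mask = update_background(background, frame)
print("foreground pixels detected:", int(mask.sum()))
```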

  5. FishCam - A semi-automatic video-based monitoring system of fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2016-04-01

    One of the main objectives of the Water Framework Directive is to preserve and restore the continuum of river networks. Regarding vertebrate migration, fish passes are a widely used measure to overcome anthropogenic constructions. The functionality of this measure needs to be verified by monitoring. In this study we propose a newly developed monitoring system, named FishCam, to observe fish migration especially in fish passes without contact and without imposing stress on fish. To avoid time- and cost-consuming field work for fish pass monitoring, this project aims to develop a semi-automatic monitoring system that enables continuous observation of fish migration. The system consists of a detection tunnel and a high-resolution camera, which is mainly based on the technology of security cameras. If changes in the image, e.g. by migrating fish or drifting particles, are detected by a motion sensor, the camera system starts recording and continues until no further motion is detectable. An ongoing key challenge in this project is the development of robust software, which counts, measures and classifies the passing fish. To achieve this goal, many different computer vision tasks and classification steps have to be combined. Moving objects have to be detected and separated from the static part of the image, objects have to be tracked throughout the entire video and fish have to be separated from non-fish objects (e.g. foliage and woody debris, shadows and light reflections). Subsequently, the length of all detected fish needs to be determined and fish should be classified into species. The classification of objects into fish and non-fish is realized through ensembles of state-of-the-art classifiers on a single image per object. The choice of the best image for classification is implemented through a newly developed "fish benchmark" value. This value compares the actual shape of the object with a schematic model of side-specific fish. To enable an automatization of the

  6. Template-based automatic breast segmentation on MRI by excluding the chest region

    SciTech Connect

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The

  7. Automatic hearing loss detection system based on auditory brainstem response

    NASA Astrophysics Data System (ADS)

    Aldonate, J.; Mercuri, C.; Reta, J.; Biurrun, J.; Bonell, C.; Gentiletti, G.; Escobar, S.; Acevedo, R.

    2007-11-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  8. Automatic generation of a metamodel from an existing knowledge base to assist the development of a new analogous knowledge base.

    PubMed

    Bouaud, J; Séroussi, B

    2002-01-01

    Knowledge acquisition is a key step in the development of knowledge-based systems, and methods have been proposed to help elicit a domain-specific task model from a generic task model. We explored how an existing validated knowledge base (KB) represented by a decision tree could be automatically processed to infer a higher-level domain-specific task model. OncoDoc is a guideline-based decision support system applied to breast cancer therapy. Assuming task identity and ontological proximity between the breast and lung cancer domains, generalization of the breast cancer KB should allow a metamodel to be built to serve as a guide for the elaboration of a new specific KB on lung cancer. Two types of parametrized generalization methods, based on tree structure simplification and ontological abstraction, were used. We defined a similarity distance and a generalization coefficient to select the best metamodel, identified as the most generalized metamodel closest to the original decision tree. PMID:12463788

  9. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  10. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses Multilayer Perceptron Neural Networks to analyze the supernovae spectra. Experiments were performed using different pre-processing schemes and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained, indicating the viability of using this method in places that have no specialist available or that require automatic analysis.

  11. A Zipfian Model of an Automatic Bibliographic System: An Application to MEDLINE.

    ERIC Educational Resources Information Center

    Fedorowicz, Jane

    1982-01-01

    Derives the underlying structure of the Zipf distribution, with emphasis on its application to word frequencies in the inverted files of automatic bibliographic systems, and applies the Zipfian model to the National Library of Medicine's MEDLINE database. An appendix on the Zipfian mean and 12 references are included. (Author/JL)
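
    The Zipfian rank-frequency relation referred to above models the frequency of the r-th most common index term as f(r) = C / r^s; the sketch below estimates the exponent from term counts by a least-squares fit in log-log space. The counts are synthetic.

```python
# Sketch of fitting a Zipf rank-frequency model to term counts (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
true_s, n_terms = 1.1, 2000
counts = np.sort((1e6 / np.arange(1, n_terms + 1) ** true_s)
                 * rng.lognormal(0, 0.1, n_terms))[::-1]   # noisy Zipf-like counts

ranks = np.arange(1, n_terms + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"estimated Zipf exponent s = {-slope:.2f}, C = {np.exp(intercept):.0f}")
```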

  12. Automatic Summarization of MEDLINE Citations for Evidence–Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
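
    The mean average precision (MAP) metric named above can be computed as shown below; the ranked intervention lists and reference sets here are invented for illustration.

```python
# Mean average precision over topics: each ranked list is scored against a reference set.
def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

def mean_average_precision(runs):
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [
    (["metformin", "insulin", "aspirin"], {"metformin", "insulin"}),
    (["statin", "aspirin", "warfarin"], {"aspirin"}),
]
print(f"MAP = {mean_average_precision(runs):.3f}")
```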

  13. Adaptive finite element program for automatic modeling of thermal processes during laser-tissue interaction

    NASA Astrophysics Data System (ADS)

    Yakunin, Alexander N.; Scherbakov, Yury N.

    1994-02-01

    The absence of satisfactory criteria for choosing discrete model parameters during computer modeling of the thermal processes of laser-biotissue interaction is a primary obstacle to assessing the accuracy of the numerical results obtained. An approach realizing a new concept of direct, automatic, adaptive grid construction is suggested. The program provides high calculation accuracy and is simple to use in practice, so that a physician can prescribe treatment without the assistance of a specialist in mathematical modeling.

  14. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  15. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  16. Retrieval, Automaticity, Vocabulary Elaboration, Orthography (RAVE-O): A Comprehensive, Fluency-based Reading Intervention Program.

    ERIC Educational Resources Information Center

    Wolf, Maryanne; Miller, Lynne; Donnelly, Katharine

    2000-01-01

    The RAVE-O (Retrieval, Automaticity, Vocabulary Elaboration, Orthography) program is an experimental, fluency-based approach to reading intervention that is designed to accompany a phonological analysis program for children with developmental reading disabilities. The goals, theoretical principles, and applied activities of the RAVE-O curriculum…

  17. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food / coffee, banks/ATM etc. and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll-free DID (Direct Inward Dialing) number using a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth’s surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI), and CCXML for controlling the call components. We also provide an efficient algorithm for parsing Atom feeds which provide data to the system. Moreover, we describe a cost-effective way for providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user coordinates and therefore efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the number and other information such as daily price of gas, motel etc. automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update listings in the database automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily

  18. Automatic generation of skeletal mechanisms for ignition combustion based on level of importance analysis

    SciTech Connect

    Loevaas, Terese

    2009-07-15

    A level of importance (LOI) selection parameter is employed in order to identify species with generally low importance to the overall accuracy of a chemical model. This enables elimination of the minor reaction paths in which these species are involved. The generation of such skeletal mechanisms is performed automatically in a pre-processing step ranking species according to their level of importance. This selection criterion is a combined parameter based on a time scale and sensitivity analysis, identifying both short-lived species and species with respect to which the observable of interest has low sensitivity. In this work a careful element flux analysis demonstrates that such species do not interact in major reaction paths. Employing the LOI procedure replaces the previous method of identifying redundant species through a two-step procedure involving a reaction flow analysis followed by a sensitivity analysis. The flux analysis is performed using DARS©, a digital analysis tool modelling reactive systems. Simplified chemical models are generated based on a detailed ethylene mechanism involving 111 species and 784 reactions (1566 forward and backward reactions) proposed by Wang et al. Eliminating species from detailed mechanisms introduces errors in the predicted combustion parameters. In the present work these errors are systematically studied for a wide range of conditions, including temperature, pressure and mixtures. Results show that the accuracy of simplified models is particularly lowered when the initial temperatures are close to the transition between low- and high-temperature chemistry. A speed-up factor of 5 is observed when using a simplified model containing only 27% of the original species and 19% of the original reactions. (author)
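
    As a loose illustration of the ranking idea only (the exact LOI definition used in the paper is not reproduced), one simple form of such a combined score multiplies a normalised species lifetime by a normalised sensitivity; species with the lowest scores become candidates for elimination. The species list and numbers below are placeholders, not the ethylene mechanism data.

```python
# Hedged sketch of a combined lifetime/sensitivity ranking for species elimination.
import numpy as np

species = ["C2H4", "OH", "HO2", "CH2CHO", "C4H6"]
lifetime = np.array([1e-2, 1e-6, 1e-4, 1e-7, 1e-3])       # s, characteristic time scales (assumed)
sensitivity = np.array([0.9, 0.7, 0.3, 0.01, 0.05])       # |d(ignition delay)/d species| (assumed)

loi = (lifetime / lifetime.max()) * (sensitivity / sensitivity.max())
order = np.argsort(loi)
print("elimination candidates (lowest score first):", [species[i] for i in order])
```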

  19. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because the point set registration is optimized globally, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to the current best results obtained with the semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (only requires a gross value of the marker diameter) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction.

  20. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the usability of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  1. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    NASA Astrophysics Data System (ADS)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    also during the strong motion phase. This approach helps to overcome the limitations of techniques based on the simple Fourier transform, which provide good results when the response of the monitored system is stationary but fail when the system exhibits non-stationary behaviour. The main advantage of the proposed approach for Structural Health Monitoring is the simplicity of interpreting the nonlinear variations of the fundamental frequency. The proposed methodology has been tested on numerical models of reinforced concrete structures designed for gravity loads only, with and without infill panels. In order to verify the effectiveness of the proposed approach for the automatic evaluation of the fundamental frequency over time, the results of an experimental campaign of shaking table tests conducted at the seismic laboratory of the University of Basilicata (SISLAB) have been used. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''. References Ditommaso, R., Mucciarelli, M., Ponzo, F.C. (2012) Analysis of non-stationary structural systems by using a band-variable filter. Bulletin of Earthquake Engineering. DOI: 10.1007/s10518-012-9338-y.
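
    As an illustration of the underlying idea of tracking the fundamental frequency over time (rather than with a single Fourier transform), the sketch below estimates the dominant frequency in short overlapping windows with an STFT. It is not the band-variable filter of the cited work; the window length and the synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import stft

def fundamental_frequency_track(accel, fs, window_s=2.0):
    """Estimate the dominant frequency of a structural response in short,
    overlapping windows, so its variation over time can be inspected."""
    nperseg = int(window_s * fs)
    f, t, Z = stft(accel, fs=fs, nperseg=nperseg)
    dominant = f[np.argmax(np.abs(Z), axis=0)]   # peak frequency per window
    return t, dominant

# toy usage: a signal whose frequency drops from 2.0 Hz to 1.5 Hz mid-record,
# mimicking stiffness degradation during strong motion
fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * np.where(t < 30, 2.0, 1.5) * t)
times, freqs = fundamental_frequency_track(x, fs)
print(freqs[:3], freqs[-3:])   # roughly 2.0 Hz at the start, 1.5 Hz at the end
```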

  2. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    NASA Astrophysics Data System (ADS)

    Juromskiy, V. M.

    2016-09-01

    A mathematical model of the electric drive of a high-speed separation device is developed in the Simulink (MATLAB) environment for modeling dynamic systems. The model is focused on the study of automatic control systems for the power factor (cos φ) of an actuator, which compensate the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.

  3. Automatic Adjustment of Wide-Base Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  4. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published results obtained manually by an expert.
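
    Since PSO is the optimisation engine named above, a minimal generic PSO minimiser is sketched below; it is not the authors' PSO-Snake hybrid, and all parameter values (inertia, acceleration constants, swarm size) are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation of f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))      # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy usage: recover the minimum of a shifted sphere function
best, val = pso_minimize(lambda p: ((p - 1.5) ** 2).sum(), [(-5, 5), (-5, 5)])
print(best, val)
```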

  5. A new approach for automatic sleep scoring: Combining Taguchi based complex-valued neural network and complex wavelet transform.

    PubMed

    Peker, Musa

    2016-06-01

    Automatic classification of sleep stages is one of the most important methods used for diagnostic procedures in psychiatry and neurology. This method, which has been developed by sleep specialists, is a time-consuming and difficult process. Generally, electroencephalogram (EEG) signals are used in sleep scoring. In this study, a new complex classifier-based approach is presented for automatic sleep scoring using EEG signals. In this context, complex-valued methods were utilized in the feature selection and classification stages. In the feature selection stage, features of EEG data were extracted with the help of a dual tree complex wavelet transform (DTCWT). In the next phase, five statistical features were obtained. These features are classified using complex-valued neural network (CVANN) algorithm. The Taguchi method was used in order to determine the effective parameter values in this CVANN. The aim was to develop a stable model involving parameter optimization. Different statistical parameters were utilized in the evaluation phase. Also, results were obtained in terms of two different sleep standards. In the study in which a 2nd level DTCWT and CVANN hybrid model was used, 93.84% accuracy rate was obtained according to the Rechtschaffen & Kales (R&K) standard, while a 95.42% accuracy rate was obtained according to the American Academy of Sleep Medicine (AASM) standard. Complex-valued classifiers were found to be promising in terms of the automatic sleep scoring and EEG data. PMID:26787511
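
    The feature-extraction step (a handful of statistics per wavelet subband) can be illustrated generically. The sketch below computes five common statistics from the magnitudes of complex subband coefficients; the particular statistics and the random stand-in coefficients are assumptions, not necessarily the paper's exact feature set or DTCWT output (any DTCWT implementation could supply the subbands).

```python
import numpy as np
from scipy.stats import skew, kurtosis

def subband_features(coeffs):
    """Five simple statistics of one (complex) wavelet subband.
    The choice of statistics is illustrative, not the paper's exact set."""
    mag = np.abs(np.asarray(coeffs))
    return np.array([mag.mean(),
                     mag.std(),
                     skew(mag),
                     kurtosis(mag),
                     np.sum(mag ** 2) / mag.size])       # mean energy

def epoch_feature_vector(subbands):
    """Concatenate per-subband statistics into one feature vector per EEG epoch."""
    return np.concatenate([subband_features(c) for c in subbands])

# toy usage with random "subbands" standing in for DTCWT detail coefficients
rng = np.random.default_rng(0)
fake_subbands = [rng.normal(size=512) + 1j * rng.normal(size=512) for _ in range(2)]
print(epoch_feature_vector(fake_subbands).shape)   # (10,)
```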

  6. A new approach for automatic sleep scoring: Combining Taguchi based complex-valued neural network and complex wavelet transform.

    PubMed

    Peker, Musa

    2016-06-01

    Automatic classification of sleep stages is one of the most important methods used for diagnostic procedures in psychiatry and neurology. This method, which has been developed by sleep specialists, is a time-consuming and difficult process. Generally, electroencephalogram (EEG) signals are used in sleep scoring. In this study, a new complex classifier-based approach is presented for automatic sleep scoring using EEG signals. In this context, complex-valued methods were utilized in the feature selection and classification stages. In the feature selection stage, features of EEG data were extracted with the help of a dual tree complex wavelet transform (DTCWT). In the next phase, five statistical features were obtained. These features are classified using complex-valued neural network (CVANN) algorithm. The Taguchi method was used in order to determine the effective parameter values in this CVANN. The aim was to develop a stable model involving parameter optimization. Different statistical parameters were utilized in the evaluation phase. Also, results were obtained in terms of two different sleep standards. In the study in which a 2nd level DTCWT and CVANN hybrid model was used, 93.84% accuracy rate was obtained according to the Rechtschaffen & Kales (R&K) standard, while a 95.42% accuracy rate was obtained according to the American Academy of Sleep Medicine (AASM) standard. Complex-valued classifiers were found to be promising in terms of the automatic sleep scoring and EEG data.

  7. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  8. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to the anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounds and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebra, the circular model is extended along the z-axis to a cylindrical model that takes into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. The experiments show that the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebra.
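
    The circular model fitting at the core of the tracking step can be illustrated with a standard algebraic least-squares (Kasa) circle fit to boundary points of a vessel cross-section; this is a generic sketch, not the authors' pipeline.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then convert to centre/radius."""
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return (cx, cy), r

# toy usage: noisy samples of a circle centred at (10, -3) with radius 4
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 100)
pts = np.column_stack([10 + 4 * np.cos(t), -3 + 4 * np.sin(t)])
pts += rng.normal(scale=0.05, size=pts.shape)
print(fit_circle(pts))
```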

  9. Calibration of the Hydrological Simulation Program Fortran (HSPF) model using automatic calibration and geographical information systems

    NASA Astrophysics Data System (ADS)

    Al-Abed, N. A.; Whiteley, H. R.

    2002-11-01

    Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran model, is a major challenge. This paper describes calibration procedures for the water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km². Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m³ s⁻¹ and the average measured discharge at Brantford was 70 m³ s⁻¹. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.

  10. Automatic key frame selection using a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Campisi, Patrizio; Longari, Andrea; Neri, Alessandro

    1999-10-01

    In a multimedia framework, digital image sequences (videos) are by far the most demanding as far as storage, search, browsing and retrieval requirements are concerned. In order to reduce the computational burden associated with video browsing and retrieval, a video sequence is usually decomposed into several scenes (shots), and each of them is characterized by means of some key frames. The proper selection of these key frames, i.e. the most representative frames in the scene, is of paramount importance for computational efficiency. In this contribution a novel key frame extraction technique based on wavelet analysis is presented. Experimental results show the capability of the proposed algorithm to select key frames that properly summarize the shot.
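
    A minimal sketch of the general idea follows, under the assumption that each frame is summarised by its coarse wavelet approximation (PyWavelets) and that key frames are taken where this summary changes most; the wavelet, decomposition level and selection rule are illustrative, not the authors' algorithm.

```python
import numpy as np
import pywt

def frame_signature(gray_frame, wavelet="haar", level=2):
    """Coarse wavelet approximation of a grayscale frame, flattened."""
    coeffs = pywt.wavedec2(gray_frame, wavelet, level=level)
    return coeffs[0].ravel()                  # approximation subband only

def select_key_frames(frames, n_key=3):
    """Keep frame 0 plus the frames where the wavelet signature changes most
    between consecutive frames (a simple shot-summarisation heuristic)."""
    sigs = np.array([frame_signature(f) for f in frames])
    diffs = np.linalg.norm(np.diff(sigs, axis=0), axis=1)   # change per step
    idx = np.argsort(diffs)[::-1][:max(0, n_key - 1)] + 1
    return sorted({0, *idx.tolist()})

# toy usage with synthetic frames: a dark scene that brightens halfway through
frames = [np.full((64, 64), 10.0) for _ in range(10)] + \
         [np.full((64, 64), 200.0) for _ in range(10)]
print(select_key_frames(frames, n_key=2))     # expect [0, 10]
```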

  11. Applications of hydrologic information automatically extracted from digital elevation models

    USGS Publications Warehouse

    Jenson, S.K.

    1991-01-01

    Digital elevation models (DEMs) can be used to derive a wealth of information about the morphology of a land surface. Traditional raster analysis methods can be used to derive slope, aspect, and shaded relief information; recently developed computer programs can be used to delineate depressions, overland flow paths, and watershed boundaries. These methods were used to delineate watershed boundaries for a geochemical stream sediment survey, to compare the results of extracting slope and flow paths from DEMs of varying resolutions, and to examine the geomorphology of a Martian DEM.
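
    Overland-flow-path delineation of this kind usually starts from a D8 flow-direction grid, in which each cell drains to its steepest downslope neighbour. The sketch below is a plain, unoptimised illustration of that step, not the USGS programs discussed in the paper.

```python
import numpy as np

# D8 neighbour offsets (row, col); the index into this list is the direction code
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def d8_flow_direction(dem, cellsize=1.0):
    """Index (0-7) of the steepest downslope neighbour for each interior cell;
    -1 marks border cells and cells with no downslope neighbour (pits/flats)."""
    dem = np.asarray(dem, float)
    nrow, ncol = dem.shape
    fdir = np.full((nrow, ncol), -1, dtype=int)
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            best, best_slope = -1, 0.0
            for k, (di, dj) in enumerate(OFFSETS):
                dist = cellsize * (np.sqrt(2.0) if di and dj else 1.0)
                slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                if slope > best_slope:
                    best, best_slope = k, slope
            fdir[i, j] = best
    return fdir

# toy usage: a plane dipping to the east, so interior cells drain east (code 2)
dem = np.tile(np.arange(5, 0, -1, dtype=float), (5, 1))
print(d8_flow_direction(dem))
```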

  12. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between different stations. To make scanning measurement intelligent and rapid, in this paper we develop a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld laser scanner without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by a least-squares common-points transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain the point cloud data in a unified coordinate system. There are three major contributions in this paper. First, a laser scanner based on binocular vision is designed with two cameras and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, coded markers are introduced to solve the data matching problem, and a random coding method is proposed; compared with other coding methods, markers of this kind are simple to match and avoid shading the object. Finally, a recognition method for the coded markers is proposed that uses distance recognition and is more efficient. The method presented here can be used widely in measurements of objects from small to very large, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method can realize the dynamic measurement of handheld laser scanners.
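
    The least-squares common-points transformation used for scanner orientation corresponds to the standard closed-form rigid-body solution (rotation plus translation) from matched 3-D control points. A generic SVD-based sketch follows, with synthetic points; it is not the authors' code.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    computed from matched 3-D control points (Kabsch/SVD solution)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy usage: recover a known rotation about z and a translation
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
src = np.random.default_rng(2).random((6, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```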

  13. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of various planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  14. Feedback linearization based computer controlled medication design for automatic treatment of parturient paresis of cows.

    PubMed

    Padhi, Radhakant

    2006-10-01

    Based on an existing model of calcium homeostasis (dynamics) and drawing on the feedback linearization philosophy of nonlinear control theory, two control design (medication) strategies are presented for the automatic treatment of parturient paresis (milk fever) in cows. An important advantage of the new approach is that it results in a simple and straightforward method and eliminates the need for the significantly more complex neural-network-based nonlinear optimal control technique proposed by the author earlier. As an added advantage, unlike the neural network technique, the new approach leads to a closed-form solution for the nonlinear controller. Moreover, global asymptotic stability of the closed-loop system is always guaranteed. Besides the theoretical justifications, the resulting controllers (medication strategies) are also validated through numerical simulation studies of the nonlinear system. Moreover, a numerical study of the robustness of the algorithms with respect to parametric uncertainty showed that the optimal control formulation is a better option than the dynamic inversion formulation.
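
    The feedback-linearization recipe relied on here can be shown on a generic scalar plant: for x' = f(x) + g(x)u one chooses u = (v - f(x))/g(x) with a linear virtual control v. The plant below is a made-up illustration, not the published calcium homeostasis model.

```python
import numpy as np

# generic nonlinear scalar plant x' = f(x) + g(x) * u (illustrative only)
f = lambda x: -0.5 * x + 0.2 * np.sin(x)
g = lambda x: 1.0 + 0.1 * x ** 2          # never zero, so division is safe

def feedback_linearizing_control(x, x_ref, k=2.0):
    """Cancel the nonlinearity and impose linear error dynamics e' = -k e,
    where e = x - x_ref."""
    v = -k * (x - x_ref)                   # desired linear closed-loop behaviour
    return (v - f(x)) / g(x)               # control input that achieves x' = v

# simulate with forward Euler: the state should settle at the reference
x, x_ref, dt = 5.0, 1.0, 0.01
for _ in range(1000):
    u = feedback_linearizing_control(x, x_ref)
    x += dt * (f(x) + g(x) * u)
print(round(x, 3))                         # ~1.0
```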

  15. Automatic construction of rule-based ICD-9-CM coding systems

    PubMed Central

    Farkas, Richárd; Szarvas, György

    2008-01-01

    Background In this paper we focus on the problem of automatically constructing ICD-9-CM coding systems for radiology reports. ICD-9-CM codes are used for billing purposes by health institutes and are assigned to clinical records manually following clinical treatment. Since this labeling task requires expert knowledge in the field of medicine, the process itself is costly and is prone to errors as human annotators have to consider thousands of possible codes when assigning the right ICD-9-CM labels to a document. In this study we use the datasets made available for training and testing automated ICD-9-CM coding systems by the organisers of an International Challenge on Classifying Clinical Free Text Using Natural Language Processing in spring 2007. The challenge itself was dominated by entirely or partly rule-based systems that solve the coding task using a set of hand crafted expert rules. Since the feasibility of the construction of such systems for thousands of ICD codes is indeed questionable, we decided to examine the problem of automatically constructing similar rule sets that turned out to achieve a remarkable accuracy in the shared task challenge. Results Our results are very promising in the sense that we managed to achieve comparable results with purely hand-crafted ICD-9-CM classifiers. Our best model got a 90.26% F measure on the training dataset and an 88.93% F measure on the challenge test dataset, using the micro-averaged Fβ=1 measure, the official evaluation metric of the International Challenge on Classifying Clinical Free Text Using Natural Language Processing. This result would have placed second in the challenge, with a hand-crafted system achieving slightly better results. Conclusions Our results demonstrate that hand-crafted systems – which proved to be successful in ICD-9-CM coding – can be reproduced by replacing several laborious steps in their construction with machine learning models. These hybrid systems preserve the favourable

  16. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study, Long Period (LP) and Volcano-Tectonic (VT). Experiments show that standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can reduce the false positive rate in event detection by as much as 31% with a true positive accuracy of 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events according to the signal-to-noise ratio criterion recommended by a volcano expert.
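
    A much simpler stand-in for duration modelling is to post-process a decoded state sequence and discard detections shorter than a learned minimum duration. The sketch below shows only that post-processing idea; it is not the duration-explicit HMM described in the paper.

```python
import numpy as np

def enforce_min_duration(states, min_len, background=0):
    """Relabel any run of a non-background state shorter than min_len frames
    back to the background class: a crude stand-in for duration modelling
    inside the HMM itself."""
    states = np.asarray(states).copy()
    n, start = len(states), 0
    while start < n:
        end = start
        while end < n and states[end] == states[start]:
            end += 1
        if states[start] != background and (end - start) < min_len:
            states[start:end] = background
        start = end
    return states

# toy usage: a 2-frame spurious "event" (label 1) is removed, the 5-frame one kept
decoded = [0, 0, 1, 1, 0, 0, 2, 2, 2, 2, 2, 0]
print(enforce_min_duration(decoded, min_len=4))
```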

  17. Automatic generation of computable implementation guides from clinical information models.

    PubMed

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958

  18. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.

  19. Towards an Automatic and Application-Based Eigensolver Selection

    SciTech Connect

    Zhang, Yeliang; Li, Xiaoye S.; Marques, Osni

    2005-09-09

    The computation of eigenvalues and eigenvectors is an important and often time-consuming phase in computer simulations. Recent efforts in the development of eigensolver libraries have given users good algorithms without the need for users to spend much time in programming. Yet, given the variety of numerical algorithms that are available to domain scientists, choosing the ''best'' algorithm suited for a particular application is a daunting task. As simulations become increasingly sophisticated and larger, it becomes infeasible for a user to try out every reasonable algorithm configuration in a timely fashion. Therefore, there is a need for an intelligent engine that can guide the user through the maze of various solvers with various configurations. In this paper, we present a methodology and a software architecture aiming at determining the best solver based on the application type and the matrix properties. We combine a decision tree and an intelligent engine to select a solver and a preconditioner combination for the application submitted by the user. We also discuss how our system interface is implemented with third party numerical libraries. In the case study, we demonstrate the feasibility and usefulness of our system with a simplified linear solving system. Our experiments show that our proposed intelligent engine is quite adept in choosing a suitable algorithm for different applications.
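
    The flavour of such a rule-based selection engine can be sketched with a few matrix-property checks that route to SciPy solvers; the rules, thresholds and solver choices below are illustrative assumptions, not the decision tree of the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import eigh
from scipy.sparse.linalg import eigsh, eigs

def select_eigensolver(A, n_wanted=5):
    """Pick a reasonable eigensolver from simple matrix properties.
    The rules and thresholds here are illustrative only."""
    sparse = sp.issparse(A)
    n = A.shape[0]
    symmetric = (abs(A - A.T) > 1e-12).nnz == 0 if sparse else np.allclose(A, A.T)
    if not sparse and n <= 2000 and symmetric:
        return "dense symmetric (eigh)", lambda: eigh(A, eigvals_only=True)
    if sparse and symmetric:
        return "Lanczos (eigsh)", lambda: eigsh(A, k=n_wanted, return_eigenvectors=False)
    return "Arnoldi (eigs)", lambda: eigs(sp.csr_matrix(A), k=n_wanted,
                                          return_eigenvectors=False)

# toy usage: a sparse symmetric Laplacian should be routed to eigsh
L = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(500, 500), format="csr")
name, solve = select_eigensolver(L)
print(name, np.sort(np.real(solve()))[:3])
```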

  20. Fast and automatic watermark resynchronization based on Zernike moments

    NASA Astrophysics Data System (ADS)

    Kang, Xiangui; Liu, Chunhui; Zeng, Wenjun; Huang, Jiwu; Liu, Congbai

    2007-02-01

    In some applications, such as real-time video, watermark detection needs to be performed in real time. To make image watermarks robust against geometric transformations such as combinations of rotation, scaling, translation and/or cropping (RST), many prior works use an exhaustive search or template matching to find the RST distortion parameters and then reverse the distortion to resynchronize the watermark. These methods typically impose a huge computational burden because the search space is typically multi-dimensional. Other prior works embed watermarks in an RST-invariant domain to meet the real-time requirement, but it can be difficult to construct such a domain. Zernike moments are useful tools in pattern recognition and image watermarking due to their orthogonality and rotation invariance. In this paper, we propose a fast watermark resynchronization method based on Zernike moments, which requires only a search over the scaling factor to combat RST geometric distortion, thus significantly reducing the computational load. We apply the proposed method to circularly symmetric watermarking. According to Plancherel's Theorem and the rotation invariance of Zernike moments, the rotation estimation only requires performing the DFT on the Zernike moment correlation values once. Thus for an RST attack, we can estimate both the rotation angle and the scaling factor by searching over the scaling factor for the overall maximum of the DFT magnitude mentioned above. With the estimated rotation angle and scaling factor, the watermark can be resynchronized. In watermark detection, the normalized correlation between the watermark and the DFT magnitude of the test image is used. Our experimental results demonstrate the advantage of the proposed method. The watermarking scheme is robust to global RST distortion as well as JPEG compression. In particular, the watermark is robust to print-rescanning and
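
    The rotation-invariance property the method builds on is easy to demonstrate: a rotation by an angle alpha multiplies the complex Zernike moment Z_nm by exp(-j*m*alpha), so alpha can be read off a phase difference. The sketch below computes a single low-order moment (n = 2, m = 2) on a unit-disc-mapped image and recovers a rotation; it omits the scaling search and the DFT-based correlation of the full scheme, and the test image is synthetic.

```python
import numpy as np
from scipy.ndimage import rotate

def zernike_moment(img, m=2):
    """Complex Zernike-style moment Z_{2,2} of a grayscale image mapped onto
    the unit disc (radial polynomial R_{2,2}(rho) = rho**2)."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    x = (2 * xx - w + 1) / (w - 1)
    y = (2 * yy - h + 1) / (h - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = rho <= 1.0
    basis = (rho ** 2) * np.exp(-1j * m * theta)
    return np.sum(img[inside] * basis[inside])

def estimate_rotation(ref, rotated, m=2):
    """A rotation by alpha multiplies Z_nm by exp(-1j*m*alpha); read alpha
    off the phase difference of the two moments."""
    return -np.angle(zernike_moment(rotated, m) / zernike_moment(ref, m)) / m

# toy usage: a smooth off-centre blob rotated by 20 degrees
h = w = 129
yy, xx = np.mgrid[:h, :w]
xs, ys = (2 * xx - w + 1) / (w - 1), (2 * yy - h + 1) / (h - 1)
img = np.exp(-((xs - 0.4) ** 2 + ys ** 2) / 0.05)
rot = rotate(img, 20, reshape=False, order=1)
print(abs(np.degrees(estimate_rotation(img, rot))))   # ~20 (sign depends on axes)
```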

  1. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazard monitoring and nuclear treaty verification. However, false and missed detections caused by station noise, together with incorrect classification of arrivals, are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using an SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. We aim to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signals from these events. Moreover, comparing the performance of the support
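
    The final classification step (earthquake versus quarry blast from precomputed waveform features) can be sketched with scikit-learn; the feature values below are synthetic placeholders rather than IMS data, and the kernel and parameters are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# synthetic stand-ins for per-event waveform features (e.g. spectral and
# amplitude ratios); real features would be computed from IMS waveforms
rng = np.random.default_rng(0)
X_eq = rng.normal(loc=[1.0, 0.2], scale=0.3, size=(100, 2))   # "earthquakes"
X_qb = rng.normal(loc=[0.3, 1.0], scale=0.3, size=(100, 2))   # "quarry blasts"
X = np.vstack([X_eq, X_qb])
y = np.array([0] * 100 + [1] * 100)

# RBF-kernel SVM with feature standardisation, scored by cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())
```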

  2. Preservation of memory-based automaticity in reading for older adults.

    PubMed

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research. PMID:26302027

  3. Preservation of memory-based automaticity in reading for older adults.

    PubMed

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research.

  4. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  5. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model.

    PubMed

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-21

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation
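
    The patient-specific PCA shape model can be sketched generically: build modes from corresponding-point training shapes, keep just enough modes to reach a residual tolerance (0.1 cm in the abstract), and deform the mean shape with mode weights. The training shapes below are synthetic, and the gradient-based cost function that drives the weights in the paper is omitted.

```python
import numpy as np

def build_pca_shape_model(training_shapes, tol=0.1):
    """training_shapes: (n_shapes, n_points*3) array of corresponding points.
    Returns the mean shape and the smallest set of PCA modes whose
    reconstruction of the training set has a mean residual below tol."""
    X = np.asarray(training_shapes, float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    for k in range(1, len(s) + 1):
        recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]
        if np.abs(recon - X).mean() < tol:
            break
    return mean, Vt[:k]                        # modes are rows

def shape_from_weights(mean, modes, weights):
    """Deform the mean shape along the retained PCA modes."""
    return mean + np.asarray(weights) @ modes

# toy usage: 6 synthetic "bladder" contours of 50 3-D points each
rng = np.random.default_rng(4)
shapes = rng.random(150) + rng.normal(scale=0.5, size=(6, 150))
mean, modes = build_pca_shape_model(shapes, tol=0.1)
print(modes.shape, shape_from_weights(mean, modes, np.zeros(len(modes))).shape)
```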

  6. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    NASA Astrophysics Data System (ADS)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-01

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation

  7. Automatic classification of intracardiac tumor and thrombi in echocardiography based on sparse representation.

    PubMed

    Guo, Yi; Wang, Yuanyuan; Kong, Dehong; Shu, Xianhong

    2015-03-01

    Identification of intracardiac masses in echocardiograms is one important task in cardiac disease diagnosis. To improve diagnosis accuracy, a novel fully automatic classification method based on the sparse representation is proposed to distinguish intracardiac tumor and thrombi in echocardiography. First, a region of interest is cropped to define the mass area. Then, a unique globally denoising method is employed to remove the speckle and preserve the anatomical structure. Subsequently, the contour of the mass and its connected atrial wall are described by the K-singular value decomposition and a modified active contour model. Finally, the motion, the boundary as well as the texture features are processed by a sparse representation classifier to distinguish two masses. Ninety-seven clinical echocardiogram sequences are collected to assess the effectiveness. Compared with other state-of-the-art classifiers, our proposed method demonstrates the best performance by achieving an accuracy of 96.91%, a sensitivity of 100%, and a specificity of 93.02%. It explicates that our method is capable of classifying intracardiac tumors and thrombi in echocardiography, potentially to assist the cardiologists in the clinical practice.
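
    The sparse-representation classification step can be illustrated in its textbook form: sparse-code the test feature vector over a dictionary whose columns are training samples and assign the class with the smallest reconstruction residual. The solver choice (orthogonal matching pursuit) and the synthetic features below are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, labels, x, n_nonzero=2):
    """Sparse-representation classification: sparse-code x over the training
    dictionary D (columns are training feature vectors) and return the class
    whose selected atoms reconstruct x with the smallest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)                              # solves x ~= D @ coef, few nonzeros
    coef = omp.coef_
    residuals = {c: np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# toy usage: two classes living in different 2-D subspaces of a 20-D feature space
rng = np.random.default_rng(5)
basis0, basis1 = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
D = np.column_stack([basis0 @ rng.normal(size=(2, 10)),
                     basis1 @ rng.normal(size=(2, 10))])
labels = np.array([0] * 10 + [1] * 10)
x = basis1 @ np.array([0.7, 0.3])              # a sample from class 1
print(src_classify(D, labels, x))              # expect 1
```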

  8. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  9. An automatic method for CASP9 free modeling structure prediction assessment

    PubMed Central

    Cong, Qian; Kinch, Lisa N.; Pei, Jimin; Shi, Shuoyong; Grishin, Vyacheslav N.; Li, Wenlin; Grishin, Nick V.

    2011-01-01

    Motivation: Manual inspection has been applied to and is well accepted for assessing critical assessment of protein structure prediction (CASP) free modeling (FM) category predictions over the years. Such manual assessment requires expertise and significant time investment, yet has the problems of being subjective and unable to differentiate models of similar quality. It is beneficial to incorporate the ideas behind manual inspection to an automatic score system, which could provide objective and reproducible assessment of structure models. Results: Inspired by our experience in CASP9 FM category assessment, we developed an automatic superimposition independent method named Quality Control Score (QCS) for structure prediction assessment. QCS captures both global and local structural features, with emphasis on global topology. We applied this method to all FM targets from CASP9, and overall the results showed the best agreement with Manual Inspection Scores among automatic prediction assessment methods previously applied in CASPs, such as Global Distance Test Total Score (GDT_TS) and Contact Score (CS). As one of the important components to guide our assessment of CASP9 FM category predictions, this method correlates well with other scoring methods and yet is able to reveal good-quality models that are missed by GDT_TS. Availability: The script for QCS calculation is available at http://prodata.swmed.edu/QCS/. Contact: grishin@chop.swmed.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994223

  10. Automatic reconstruction of physiological gestures used in a model of birdsong production.

    PubMed

    Boari, Santiago; Perl, Yonatan Sanz; Amador, Ana; Margoliash, Daniel; Mindlin, Gabriel B

    2015-11-01

    Highly coordinated learned behaviors are key to understanding neural processes integrating the body and the environment. Birdsong production is a widely studied example of such behavior in which numerous thoracic muscles control respiratory inspiration and expiration: the muscles of the syrinx control syringeal membrane tension, while upper vocal tract morphology controls resonances that modulate the vocal system output. All these muscles have to be coordinated in precise sequences to generate the elaborate vocalizations that characterize an individual's song. Previously we used a low-dimensional description of the biomechanics of birdsong production to investigate the associated neural codes, an approach that complements traditional spectrographic analysis. The prior study used algorithmic yet manual procedures to model singing behavior. In the present work, we present an automatic procedure to extract low-dimensional motor gestures that could predict vocal behavior. We recorded zebra finch songs and generated synthetic copies automatically, using a biomechanical model for the vocal apparatus and vocal tract. This dynamical model described song as a sequence of physiological parameters the birds control during singing. To validate this procedure, we recorded electrophysiological activity of the telencephalic nucleus HVC. HVC neurons were highly selective to the auditory presentation of the bird's own song (BOS) and gave similar selective responses to the automatically generated synthetic model of song (AUTO). Our results demonstrate meaningful dimensionality reduction in terms of physiological parameters that individual birds could actually control. Furthermore, this methodology can be extended to other vocal systems to study fine motor control. PMID:26378204

  11. Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.

    PubMed

    Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J

    2012-09-01

    Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach.
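
    The first stage (watershed splitting of touching nuclei) plus a crude shape-based screen can be sketched with scikit-image; the solidity test below is only a stand-in for the neural-network selection, and all thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from skimage.segmentation import watershed

def segment_and_screen(binary_mask, min_solidity=0.9):
    """Watershed split of touching nuclei seeded by distance-map peaks, then a
    crude shape screen (solidity) standing in for the ANN-based selection."""
    distance = ndi.distance_transform_edt(binary_mask)
    coords = peak_local_max(distance, min_distance=10, labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_mask)
    keep = [r.label for r in regionprops(labels) if r.solidity >= min_solidity]
    return labels, keep

# toy usage: two overlapping discs are split into two accepted "nuclei"
yy, xx = np.mgrid[:120, :120]
mask = ((yy - 60) ** 2 + (xx - 45) ** 2 < 20 ** 2) | \
       ((yy - 60) ** 2 + (xx - 75) ** 2 < 20 ** 2)
labels, keep = segment_and_screen(mask)
print(labels.max(), keep)
```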

  12. Automatic characterisation of primary, secondary and mixed sludge inflow in terms of the mathematical generalised sludge digester model.

    PubMed

    de Gracia, M; Huete, E; Beltrán, S; Grau, P; Ayesa, E

    2011-01-01

    This paper presents the characterisation procedure of different types of sludge generated in a wastewater treatment plant to be reproduced in a mathematical model of the sludge digestion process. The automatic calibration method used is based on an optimisation problem and uses a set of mathematical equations related to the a priori knowledge of the sludge composition, the experimental measurements applied to the real sludge, and the definition of the model components. In this work, the potential of the characterisation methodology is shown by means of a real example, taking into account that sludge is a very complex matter to characterise and that the models for digestion also have a considerable number of model components. The results obtained suit both the previously reported characteristics of the primary, secondary and mixed sludge, and the experimental measurements specially done for this work. These three types of sludge have been successfully characterised to be used in complex mathematical models. PMID:22097032

  13. The Role of Modeling and Automatic Reinforcement in the Construction of the Passive Voice

    PubMed Central

    Wright, Anhvinh N

    2006-01-01

    Language acquisition has been a contentious topic among linguists, psycholinguists, and behaviorists for decades. Although numerous theories of language acquisition have surfaced, none have sufficiently accounted for the subtleties of the language that children acquire. The present study attempts to explain the role of modeling and automatic reinforcement in the acquisition of the passive voice. Six children, ages 3 to 5, participated in this study. The results indicated that the children began using the passive voice only after the experimenter modeled passive sentences. Furthermore, the usage of the passive voice increased with repeated exposure to the experimenter's verbal behavior. Given that the children were not explicitly reinforced, it is proposed that their behavior was automatically reinforced for using the passive voice. PMID:22477353

  14. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Since most pure face-tracking algorithms find it hard to specify the initial location and scale of a face automatically, this paper proposes a fast and robust method to detect and track faces by combining the AdaBoost and Camshift algorithms. First, the location and scale of the face are determined by the AdaBoost algorithm based on Haar-like features and conveyed to the initial search window automatically. Then, the Camshift algorithm is applied to track the face. Experimental results based on the OpenCV software yield good results, even in special circumstances such as changing illumination and rapid face movement. Besides, by drawing the tracking trajectory of the face movement, some abnormal behaviour events can be analysed.
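
    The described combination is close to the classic OpenCV pattern: a Haar-cascade (AdaBoost) detector initialises the search window, and CamShift then tracks the hue histogram of the face. The sketch below follows that pattern with illustrative thresholds; it assumes opencv-python and a webcam at index 0, and is not the authors' exact implementation.

```python
import cv2
import numpy as np

# OpenCV's pretrained frontal-face Haar cascade plays the role of the AdaBoost detector
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # any video source would do
ok, frame = cap.read()
if not ok:
    raise SystemExit("no video source available")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) == 0:
    raise SystemExit("no face detected in the first frame")
x, y, w, h = [int(v) for v in faces[0]]
track_window = (x, y, w, h)                    # initial CamShift search window

# hue histogram of the detected face region (back-projection model)
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)  # draw the tracked face box
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:           # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```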

  15. An automatic stain removal algorithm of series aerial photograph based on flat-field correction

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Yan, Dongmei; Yang, Yang

    2010-10-01

    Dust on the camera lens leaves dark stains on the image, so calibrating and compensating the intensity of the stained pixels plays an important role in airborne image processing. This article introduces an automatic compensation algorithm for such dark stains, based on the principle of flat-field correction. We produced a whiteboard reference image by aggregating hundreds of images recorded in one flight, using their average pixel values to simulate uniform white-light irradiation. We then constructed a look-up table function based on this whiteboard image to calibrate the stained image. The experimental results show that the proposed procedure removes lens stains effectively and automatically.
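
    A minimal numpy sketch of the flat-field idea described above: hundreds of co-registered frames from one flight are averaged into a whiteboard reference, and a per-pixel gain derived from it compensates the dark stains. The function names, the 8-bit intensity range, and the epsilon guard are illustrative assumptions.

    import numpy as np

    def build_whiteboard(frames):
        """Average many frames from one flight to approximate a uniform white field.

        frames: ndarray of shape (n_images, height, width), grayscale intensities.
        """
        return frames.astype(np.float64).mean(axis=0)

    def flat_field_correct(image, whiteboard, eps=1e-6):
        """Compensate dark lens stains using the whiteboard reference; the gain map
        plays the role of the per-pixel look-up correction mentioned above."""
        gain = whiteboard.mean() / (whiteboard + eps)   # stained pixels get gain > 1
        corrected = image.astype(np.float64) * gain
        return np.clip(corrected, 0, 255).astype(np.uint8)

    # usage (illustrative):
    # whiteboard = build_whiteboard(flight_frames)      # flight_frames: (N, H, W)
    # clean = flat_field_correct(stained_image, whiteboard)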

  16. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light.

    PubMed

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-28

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm⁻² and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications. PMID:27076202

  17. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  18. Automatic Three-Dimensional Measurement of Large-Scale Structure Based on Vision Metrology

    PubMed Central

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods. PMID:24701143

  19. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimation in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form, following the EMS98 scale. The reliability of the automatic intensity estimation is important, as these estimates are now used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable given the increasing number of testimonies received by BCSF; it does, however, allow incoherent answers to be taken into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave medium scores (50 to 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL), and 3) support vector machines (SVMs). The first two are standard methods, while the third is more recent. These methods can be applied because the BCSF database already contains more than 47,000 forms and because their questions and answers are well suited to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to

  20. Automatic illumination compensation device based on a photoelectrochemical biofuel cell driven by visible light

    NASA Astrophysics Data System (ADS)

    Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun

    2016-04-01

    Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm⁻² and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00759g

  1. Suggestion does not de-automatize word reading: evidence from the semantically based Stroop task.

    PubMed

    Augustinova, Maria; Ferrand, Ludovic

    2012-06-01

    Recent studies have shown that the suggestion for participants to construe words as meaningless symbols reduces, or even eliminates, standard Stroop interference in highly suggestible individuals (Raz, Fan, & Posner, 2005; Raz, Kirsch, Pollard, & Nitkin-Kaner, 2006). In these studies, the researchers consequently concluded that this suggestion de-automatizes word reading. The aim of the present study was to closely examine this claim. To this end, highly suggestible individuals completed both standard and semantically based Stroop tasks, either with or without a suggestion to construe the words as meaningless symbols (manipulated in both a between-participants [Exp. 1] and a within-participants [Exp. 2] design). By showing that suggestion substantially reduced standard Stroop interference, these two experiments replicated Raz et al.'s (2006) results. However, in both experiments we also found significant semantically based Stroop effects of similar magnitudes in all suggestion conditions. Taken together, these results indicate that the suggestion to construe words as meaningless symbols does not eliminate, or even reduce, semantic activation (assessed by the semantically based Stroop effect) in highly suggestible individuals, and that such an intervention most likely reduces nonsemantic task-relevant response competition related to the standard Stroop task. In sum, contrary to Raz et al.'s claim, suggestion does not de-automatize or prevent reading (as shown by a significant amount of semantic processing), but rather seems to influence response competition. These results also add to the growing body of evidence showing that semantic activation in the Stroop task is indeed automatic.

  2. [A wavelet-transform-based method for the automatic detection of late-type stars].

    PubMed

    Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work explores possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transform coefficients on the 5th scale consistently shows a unimodal distribution, with the energy largely concentrated in a small neighborhood centered around the unique peak. For the spectra of other celestial bodies, in contrast, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. The proposed method is shown by extensive experiments to be practical and robust.
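
    A hedged sketch of the detection idea above using PyWavelets: a 5-scale wavelet decomposition is applied to the spectrum, and the energy concentration of the frequency spectrum of the coarsest-scale coefficients around its peak is used as the unimodality test. The wavelet family, the choice of the approximation coefficients, the window size, and the threshold are all assumptions, not the paper's exact settings.

    import numpy as np
    import pywt

    def is_late_type(spectrum, wavelet="db4", energy_window=10, threshold=0.6):
        """Flag a spectrum as late-type if the frequency spectrum of its 5th-scale
        wavelet coefficients concentrates most of its energy near a single peak."""
        coeffs = pywt.wavedec(spectrum, wavelet, level=5)
        c5 = coeffs[0]                          # coarsest-scale (approximation) coefficients
        power = np.abs(np.fft.rfft(c5)) ** 2    # frequency spectrum of those coefficients
        peak = int(np.argmax(power))
        lo, hi = max(0, peak - energy_window), peak + energy_window + 1
        concentration = power[lo:hi].sum() / power.sum()
        return concentration > threshold

    # usage (illustrative): flag = is_late_type(flux_array)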

  3. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model for estimating a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated with the desired parameter. Initially, a pre-processing algorithm takes a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs applies k pre-processing algorithms serially: the result of each SPPA is used as input to the next, and so on until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: Normalized band Difference Regression (NDR), Multiple Linear Regression (MLR) and Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, to select the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation not only selects the optimum SPPA combination but also identifies the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory given its low complexity. The MLR method showed severe drawbacks due to noise and collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms differed for each regression method, since each regression method handles the explanatory variables in a different way.
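
    A minimal sketch of the SGA-based selection loop described above, restricted to the PLSR branch. The example SPPAs (SNV, first derivative, smoothing), the GA parameters, and the cross-validated RMS fitness are illustrative assumptions; the paper's exact pre-processing functions and GA settings may differ.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    # Illustrative spectral pre-processing algorithms (SPPAs); X has shape (n_samples, n_bands).
    def identity(X):
        return X

    def snv(X):
        return (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)

    def first_derivative(X):
        return np.gradient(X, axis=1)

    def smooth(X, w=5):
        kernel = np.ones(w) / w
        return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, X)

    SPPAS = [identity, snv, first_derivative, smooth]

    def apply_chain(X, chain):
        for idx in chain:
            X = SPPAS[idx](X)
        return X

    def rms_error(X, y, chain):
        # cross-validated RMS error of a PLSR model fitted on the pre-processed spectra
        scores = cross_val_score(PLSRegression(n_components=5), apply_chain(X, chain), y,
                                 scoring="neg_root_mean_squared_error", cv=5)
        return -scores.mean()

    def simple_ga(X, y, k=3, pop=20, gens=15, seed=0):
        """Search for the k-step SPPA chain minimising the cross-validated RMS error."""
        rng = np.random.default_rng(seed)
        population = rng.integers(0, len(SPPAS), size=(pop, k))
        for _ in range(gens):
            fitness = np.array([rms_error(X, y, ind) for ind in population])
            parents = population[np.argsort(fitness)[: pop // 2]]   # truncation selection
            children = parents.copy()
            rng.shuffle(children)
            cut = rng.integers(1, k)                                 # one-point crossover
            children[:, cut:] = parents[::-1, cut:]
            mutate = rng.random(children.shape) < 0.1                # random mutation
            children[mutate] = rng.integers(0, len(SPPAS), size=int(mutate.sum()))
            population = np.vstack([parents, children])
        best = min(population, key=lambda ind: rms_error(X, y, ind))
        return list(best)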

  4. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    NASA Astrophysics Data System (ADS)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with automatic classification capacity, to analyse large numbers of landslide conditioning factors. This algorithm was developed to overcome the subjectivity of the manual categorization of scale data for landslide conditioning factors, and to produce a rainfall-induced landslide susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective is to use the CHAID method to obtain the best classification fit for each conditioning factor and then to combine it with logistic regression (LR); the LR model is used to find the coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI) and was then used to identify the range of clustered landslide locations. The clustered locations were used as model training data together with 14 landslide conditioning factors, such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between the conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study demonstrates the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping and provides a valuable scientific basis for spatial decision making in planning and urban management studies.
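
    CHAID itself is not available in scikit-learn, so the sketch below uses a shallow CART tree per conditioning factor as a stand-in for the optimal-binning step, one-hot encodes the resulting bins, and fits a logistic regression whose probabilities serve as susceptibility scores; a single AUC on the fitted data stands in for the training/validation evaluation described above. Bin counts and solver settings are assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.metrics import roc_auc_score

    def bin_factor(values, labels, max_bins=5):
        """Bin one conditioning factor with a shallow decision tree (a CART-based
        stand-in for the CHAID categorization step described above)."""
        tree = DecisionTreeClassifier(max_leaf_nodes=max_bins, random_state=0)
        tree.fit(values.reshape(-1, 1), labels)
        return tree.apply(values.reshape(-1, 1))     # leaf index acts as the bin id

    def landslide_susceptibility(X, y):
        """X: (n_samples, n_factors) conditioning factors; y: 0/1 landslide labels."""
        binned = np.column_stack([bin_factor(X[:, j], y) for j in range(X.shape[1])])
        onehot = OneHotEncoder(handle_unknown="ignore").fit(binned)
        lr = LogisticRegression(max_iter=1000).fit(onehot.transform(binned), y)
        prob = lr.predict_proba(onehot.transform(binned))[:, 1]
        return prob, roc_auc_score(y, prob)          # susceptibility scores and AUC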

  5. Automatic stress-relieving music recommendation system based on photoplethysmography-derived heart rate variability analysis.

    PubMed

    Shin, Il-Hyung; Cha, Jaepyeong; Cheon, Gyeong Woo; Lee, Choonghee; Lee, Seung Yup; Yoon, Hyung-Jin; Kim, Hee Chan

    2014-01-01

    This paper presents an automatic stress-relieving music recommendation system (ASMRS) for individual music listeners. The ASMRS uses a portable, wireless photoplethysmography module with a finger-type sensor, and a program that translates heartbeat signals from the sensor to the stress index. The sympathovagal balance index (SVI) was calculated from heart rate variability to assess the user's stress levels while listening to music. Twenty-two healthy volunteers participated in the experiment. The results have shown that the participants' SVI values are highly correlated with their prespecified music preferences. The sensitivity and specificity of the favorable music classification also improved as the number of music repetitions increased to 20 times. Based on the SVI values, the system automatically recommends favorable music lists to relieve stress for individuals. PMID:25571461
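
    A sketch of the HRV step described above: the RR-interval series derived from the photoplethysmogram is resampled to an even grid, its power spectral density is estimated with Welch's method, and the sympathovagal balance index is taken as the LF/HF power ratio. The 4 Hz resampling rate and the band limits (0.04-0.15 Hz and 0.15-0.4 Hz) are conventional choices assumed here, not necessarily the paper's.

    import numpy as np
    from scipy.signal import welch

    def sympathovagal_index(rr_intervals_s, fs=4.0):
        """Estimate the SVI (LF/HF ratio) from beat-to-beat RR intervals in seconds."""
        t_beats = np.cumsum(rr_intervals_s)
        t_even = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)   # even time grid
        rr_even = np.interp(t_even, t_beats, rr_intervals_s)    # resampled RR series
        f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
        lf_band = (f >= 0.04) & (f < 0.15)
        hf_band = (f >= 0.15) & (f < 0.40)
        lf = np.trapz(psd[lf_band], f[lf_band])
        hf = np.trapz(psd[hf_band], f[hf_band])
        return lf / hf

    # usage (illustrative): svi = sympathovagal_index(rr_from_ppg)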

  6. An Automatic and Power Spectra-based Rotate Correcting Algorithm for Microarray Image.

    PubMed

    Deng, Ning; Duan, Huilong

    2005-01-01

    Microarray image analysis, an important aspect of microarray technology, involves a vast amount of data processing, and at present its speed is limited by excessive manual intervention. The geometric structure of a microarray requires that the image be aligned with the scanning vertical orientation during analysis; if the image is rotated or tilted, the analysis result may be incorrect. Although some automatic image analysis algorithms are used for microarrays, few methods have been reported for correcting microarray image rotation. In this paper, an automatic rotation-correction algorithm is presented that addresses the deflection problem of microarray images. The method is based on the image power spectrum. Examined on hundreds of samples of clinical data, the algorithm is shown to achieve high precision. As a result, by adopting this algorithm, full automation of the microarray image analysis procedure can be realized.
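
    One way to realise the power-spectrum idea above, sketched under assumptions: the regular spot grid produces a bright ridge in the centred power spectrum along the grid axes, so the candidate angle whose central line accumulates the most spectral energy is taken as the tilt estimate and undone with an inverse rotation. The search range, step size, and interpolation order are illustrative, not the paper's algorithm.

    import numpy as np
    from scipy import ndimage

    def estimate_tilt(img, angles=np.arange(-10.0, 10.01, 0.1)):
        """Estimate the microarray tilt (degrees) from the image power spectrum."""
        spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2)
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        r = np.arange(1, min(cy, cx))
        best_angle, best_energy = 0.0, -np.inf
        for deg in angles:
            theta = np.deg2rad(deg)
            ys = np.round(cy + r * np.sin(theta)).astype(int)
            xs = np.round(cx + r * np.cos(theta)).astype(int)
            # sum spectrum energy along the line through the centre (both halves)
            energy = spec[ys, xs].sum() + spec[2 * cy - ys, 2 * cx - xs].sum()
            if energy > best_energy:
                best_angle, best_energy = deg, energy
        return best_angle

    def deskew(img):
        return ndimage.rotate(img, -estimate_tilt(img), reshape=False, order=1)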

  7. Automatic stress-relieving music recommendation system based on photoplethysmography-derived heart rate variability analysis.

    PubMed

    Shin, Il-Hyung; Cha, Jaepyeong; Cheon, Gyeong Woo; Lee, Choonghee; Lee, Seung Yup; Yoon, Hyung-Jin; Kim, Hee Chan

    2014-01-01

    This paper presents an automatic stress-relieving music recommendation system (ASMRS) for individual music listeners. The ASMRS uses a portable, wireless photoplethysmography module with a finger-type sensor, and a program that translates heartbeat signals from the sensor to the stress index. The sympathovagal balance index (SVI) was calculated from heart rate variability to assess the user's stress levels while listening to music. Twenty-two healthy volunteers participated in the experiment. The results have shown that the participants' SVI values are highly correlated with their prespecified music preferences. The sensitivity and specificity of the favorable music classification also improved as the number of music repetitions increased to 20 times. Based on the SVI values, the system automatically recommends favorable music lists to relieve stress for individuals.

  8. Automatic extraction of initial moving object based on advanced feature and video analysis

    NASA Astrophysics Data System (ADS)

    Liu, Mao-Ying; Dai, Qiong-Hai; Liu, Xiao-Dong; Er, Gui-Hua

    2005-07-01

    Traditionally, video segmentation extracts objects using low-level features such as color, texture, edge, motion, and optical flow. This paper proposes that the connectivity of object motion is an advanced feature of a moving video object, because it reflects the semantic meaning of the object to some extent, and that it can be fully represented in a cumulated difference image, i.e. the combination of a certain number of interframe difference images. Based on this principle, a novel system is designed to extract the initial moving object automatically. The system includes three key innovations: 1) The system operates on the cumulated difference image, which makes the object more prominent than the background noise. Object extraction is based on the connectivity of object motion, which guarantees the integrity of the extracted object while eliminating large background regions that cannot be removed by conventional change detection methods, for example intense-noise regions and shadow regions that are not tightly connected to the object. 2) Video sequence analysis is performed ahead of video segmentation, and suitable object extraction methods are adopted according to the characteristics of the background noise and object motion. 3) An adaptive threshold is automatically determined on the cumulated difference image after acute noise is removed; with this threshold, most noise can be eliminated while small-motion regions of the object are preserved. Results show that this system can extract objects in different kinds of sequences automatically, promptly and properly, making it well suited to real-time video applications.
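
    A compact sketch of the cumulated-difference idea above, assuming a list of grayscale frames: absolute interframe differences are accumulated, thresholded automatically (Otsu is used here as a stand-in for the paper's adaptive threshold), and the largest connected component is kept as the initial moving object. The number of accumulated differences is an assumption.

    import numpy as np
    import cv2

    def extract_initial_object(frames, num_diffs=20):
        """Return a binary mask of the initial moving object from grayscale frames."""
        frames = [f.astype(np.float32) for f in frames[:num_diffs + 1]]
        cumulated = np.zeros_like(frames[0])
        for prev, curr in zip(frames[:-1], frames[1:]):
            cumulated += np.abs(curr - prev)          # cumulated difference image
        norm = cv2.normalize(cumulated, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # connectivity of motion: keep the largest connected foreground component
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n <= 1:
            return mask
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        return (labels == largest).astype(np.uint8) * 255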

  9. Applications of the automatic change detection for disaster monitoring by the knowledge-based framework

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.

    2012-11-01

    Change detection is a fundamental approach in the utilization of satellite remote sensing imagery, especially in multi-temporal analysis such as extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring, in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts. Several algorithms for automatic change detection have been proposed in the past; however, they still lack robustness across applications, independence from the observing instrument, and accuracy better than manual interpretation, so robust automatic image interpretation is needed. We are developing a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. Results obtained by applying the framework, without any modification or tuning, to two types of disasters are shown in this paper.

  10. The algorithm and implementation of EMCCD automatic gain adjustment based on fixed gray level

    NASA Astrophysics Data System (ADS)

    Luo, Le; Chen, Qian; He, Wei-Ji; Lu, Zhen-Xi

    2015-10-01

    Image quality and resolution are degraded if the multiplication gain of an EMCCD imaging system is too low or too high. This paper presents an algorithm for automatic EMCCD gain adjustment based on a fixed gray level. The algorithm takes the average brightness of the image as a measure of image quality: it calculates the multiplication-gain adjustment from the average brightness of the current frame, the two gray-level thresholds, and the system exposure function, so that the next frame achieves the ideal brightness value. On the basis of this algorithm, a multiplication-gain adjustment control circuit and a multiplication-gain resistor look-up table were built, and automatic gain adjustment is achieved by changing the resistance of the digital potentiometer in the circuit. Experimental results show that image quality is effectively improved by the proposed algorithm: after adjustment, the image brightness is moderate, the contrast is enhanced, and the details are clearer.
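
    A minimal sketch of the fixed-gray-level control loop described above. The target level, the two gray-level thresholds, and the gain limits are illustrative assumptions; in the actual system the returned gain would be mapped to a resistance through the multiplication-gain resistor look-up table.

    def adjust_em_gain(mean_brightness, current_gain,
                       target=128.0, low=100.0, high=156.0,
                       gain_min=1.0, gain_max=1000.0):
        """If the average brightness of the current frame falls outside the two
        gray-level thresholds, scale the multiplication gain so the next frame
        approaches the target gray level."""
        if low <= mean_brightness <= high:
            return current_gain                      # brightness already acceptable
        new_gain = current_gain * target / max(mean_brightness, 1.0)
        return min(max(new_gain, gain_min), gain_max)

    # usage (illustrative):
    # next_gain = adjust_em_gain(frame.mean(), em_gain)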

  11. Automatic identification of shallow landslides based on Worldview2 remote sensing images

    NASA Astrophysics Data System (ADS)

    Ma, Hai-Rong; Cheng, Xinwen; Chen, Lianjun; Zhang, Haitao; Xiong, Hongwei

    2016-01-01

    Automatic identification of landslides from remote sensing images is important for investigating disasters and producing hazard maps. We propose a method to detect shallow landslides automatically using Worldview2 images. Features such as high soil brightness and low vegetation coverage help identify shallow landslides in remote sensing images; therefore, soil brightness and a vegetation index were chosen as landslide remote sensing indexes. The back scarp of a landslide can form dark shadow areas on the landslide mass, affecting the accuracy of landslide extraction, so a shadow index was also chosen to eliminate this effect. The first principal component (PC1) contained more than 90% of the image information and was therefore selected as an additional index. The four selected indexes were used to synthesize a new image in which information on shallow landslides is enhanced while other background information is suppressed. PC1 was then extracted from the new synthetic image, and an automatic threshold segmentation algorithm was used to segment the image and obtain candidate landslide areas. Based on landslide features such as slope, shape, and area, non-landslide areas were eliminated. Finally, four experimental sites were used to verify the feasibility of the developed method.

  12. Automatic segmentation and classification of gestational sac based on mean sac diameter using medical ultrasound image

    NASA Astrophysics Data System (ADS)

    Khazendar, Shan; Farren, Jessica; Al-Assam, Hisham; Sayasneh, Ahmed; Du, Hongbo; Bourne, Tom; Jassim, Sabah A.

    2014-05-01

    Ultrasound is an effective multipurpose imaging modality that has been widely used for monitoring and diagnosing early pregnancy events. Technology developments coupled with wide public acceptance have made ultrasound an ideal tool for better understanding and diagnosis of early pregnancy. The first measurable signs of an early pregnancy are the geometric characteristics of the Gestational Sac (GS). Currently, the size of the GS is estimated manually from ultrasound images. The manual measurement involves multiple subjective decisions, in which dimensions are taken in three planes to establish what is known as the Mean Sac Diameter (MSD). The manual measurement results in inter- and intra-observer variations, which may lead to difficulties in diagnosis. This paper proposes a fully automated diagnosis solution to accurately identify miscarriage cases in the first trimester of pregnancy based on automatic quantification of the MSD. Our study shows a strong positive correlation between the manual and automatic MSD estimations. Our experimental results, based on a dataset of 68 ultrasound images, illustrate the effectiveness of the proposed scheme in identifying early miscarriage cases, with classification accuracies comparable to those of domain experts when a K-nearest-neighbor classifier is applied to the automatically estimated MSDs.
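
    A small sketch of the quantification and classification steps described above: the MSD is taken as the mean of three orthogonal sac diameters, and a K-nearest-neighbor classifier is fitted on automatically estimated MSDs. The variable names and k value are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def mean_sac_diameter(d1_mm, d2_mm, d3_mm):
        """MSD as the mean of three orthogonal gestational-sac diameters (mm)."""
        return (d1_mm + d2_mm + d3_mm) / 3.0

    def fit_msd_classifier(msd_train, labels, k=5):
        """Fit a kNN classifier on 1D MSD values; labels: 1 = miscarriage, 0 = viable."""
        X = np.asarray(msd_train, dtype=float).reshape(-1, 1)
        return KNeighborsClassifier(n_neighbors=k).fit(X, labels)

    # usage (illustrative):
    # prediction = fit_msd_classifier(msd_train, labels).predict([[msd_new]])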

  13. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    PubMed

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) for improving the temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of a standard plane block, which cannot accurately describe the acoustic properties at different spatial positions; the performance of sparse deconvolution therefore deteriorates due to the deviations in the reference signals. Moreover, manual measurement of reference signals is inconvenient for automatic ultrasonic NDT. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations are alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution becomes achievable by decomposing the whole B-scan image into small regions of interest (ROI), which significantly reduces the image dimensionality. Since the computation time of the proposed method has a power dependence on the signal length, the computational efficiency is improved significantly with this strategy. The performance of the proposed method is demonstrated using immersion measurements of scattering targets and a steel block with side-drilled holes. The results verify that the proposed method maintains the vertical-resolution enhancement and noise-suppression capabilities in different scenarios. PMID:26773787
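
    A sketch of the sparse deconvolution step for a single A-scan, assuming the reference echo has already been estimated as described above: the reference is expanded into a convolution matrix and a sparse reflectivity sequence is recovered with an L1-regularised fit. The use of scikit-learn's Lasso and the regularisation weight are assumptions, not the paper's exact solver.

    import numpy as np
    from scipy.linalg import toeplitz
    from sklearn.linear_model import Lasso

    def sparse_deconvolve(a_scan, reference, alpha=0.05):
        """Recover a sparse reflectivity sequence x such that reference * x ≈ a_scan."""
        n = len(a_scan) - len(reference) + 1          # reflectivity length (full convolution)
        col = np.r_[reference, np.zeros(n - 1)]
        row = np.zeros(n)
        row[0] = reference[0]
        A = toeplitz(col, row)                        # A @ x == np.convolve(reference, x)
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(A, a_scan)
        return model.coef_                            # sparse reflectivity estimate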

  14. Automatic lesion tracking for a PET/CT based computer aided cancer therapy monitoring system

    NASA Astrophysics Data System (ADS)

    Opfer, Roland; Brenner, Winfried; Carlsen, Ingwer; Renisch, Steffen; Sabczynski, Jörg; Wiemker, Rafael

    2008-03-01

    Response assessment of cancer therapy is a crucial component of more effective, patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem: it can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We have validated our method on data from 7 patients, each of whom underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. The automatic detection of the corresponding lesions resulted in SUV measurements that are nearly identical to the manually measured SUVs: across 38 measured maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.

  15. Automaticity in acute ischemia: Bifurcation analysis of a human ventricular model

    NASA Astrophysics Data System (ADS)

    Bouchard, Sylvain; Jacquemet, Vincent; Vinet, Alain

    2011-01-01

    Acute ischemia (restriction in blood supply to part of the heart as a result of myocardial infarction) induces major changes in the electrophysiological properties of the ventricular tissue. Extracellular potassium concentration ([K+]o) increases in the ischemic zone, leading to an elevation of the resting membrane potential that creates an "injury current" (IS) between the infarcted and the healthy zone. In addition, the lack of oxygen impairs the metabolic activity of the myocytes and decreases ATP production, thereby affecting ATP-sensitive potassium channels (IKatp). Frequent complications of myocardial infarction are tachycardia, fibrillation, and sudden cardiac death, but the mechanisms underlying their initiation are still debated. One hypothesis is that these arrhythmias may be triggered by abnormal automaticity. We investigated the effect of ischemia on myocyte automaticity by performing a comprehensive bifurcation analysis (fixed points, cycles, and their stability) of a human ventricular myocyte model [K. H. W. J. ten Tusscher and A. V. Panfilov, Am. J. Physiol. Heart Circ. Physiol. 291, H1088 (2006); doi:10.1152/ajpheart.00109.2006] as a function of three ischemia-relevant parameters: [K+]o, IS, and IKatp. In this single-cell model, we found that automatic activity was possible only in the presence of an injury current. Changes in [K+]o and IKatp significantly altered the bifurcation structure of IS, including the occurrence of early afterdepolarizations. The results provide a sound basis for studying higher-dimensional tissue structures representing an ischemic heart.

  16. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical-dialog-based Windows XX application driven by menu, mouse and keyboard. On the one hand, it was a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions consisted in the following: as the peak model, both an analytical function and a graphical curve could be used; the peak search algorithm was able to recognize not only Gauss peaks but also peaks with an irregular form, and both narrow peaks (2-4 channels) and broad ones (50-100 channels); and the regularization technique in the fitting guaranteed a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY. Program Title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: no. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: Program VACTIV is intended for precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  17. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.

    2013-05-01

    The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical-dialog-based Windows XX application driven by menu, mouse and keyboard. On the one hand, it was a conversion of the existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions consisted in the following: as the peak model, both an analytical function and a graphical curve could be used; the peak search algorithm was able to recognize not only Gauss peaks but also peaks with an irregular form, and both narrow peaks (2-4 channels) and broad ones (50-100 channels); and the regularization technique in the fitting guaranteed a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY. Program Title: VACTIV. Catalogue identifier: ABAC_v2_0. Licensing provisions: no. Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB. Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0. Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27. Does the new version supersede the previous version?: Yes. Nature of problem: Program VACTIV is intended for precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search

  18. Hydrocarbon ignition: Automatic generation of reaction mechanisms and applications to modeling of engine knock

    SciTech Connect

    Chevalier, C.; Warnatz, J. (Inst. fuer Technische Verbrennung); Pitz, W.J.; Westbrook, C.K.

    1992-02-01

    A computational technique is described which automatically develops detailed chemical kinetic reaction mechanisms for large aliphatic hydrocarbon fuel molecules. This formulation uses the LISP language to apply general rules which identify the chemical species produced, the reactions between these species, and the elementary reaction rates for each reaction step. Reaction mechanisms for cetane (n-hexadecane) and most alkane fuels C7 and smaller are developed using this automatic technique, and detailed sensitivity analyses for n-heptane and cetane are described. These reaction mechanisms are then applied to calculation of knock tendencies in internal combustion engines. The model is used to study the influence of fuel molecule size and structure on knock tendency, to examine knocking properties of fuel mixtures, and to determine the mechanisms by which pro-knock and anti-knock additives change knock properties.

  19. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    PubMed

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about the subject-specific anatomy, which remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas volume image to the subject's volume, establishing a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.

  20. Investment appraisal of automatic milking and conventional milking technologies in a pasture-based dairy system.

    PubMed

    Shortall, J; Shalloo, L; Foley, C; Sleator, R D; O'Brien, B

    2016-09-01

    The successful integration of automatic milking (AM) systems and grazing has resulted in AM becoming a feasible alternative to conventional milking (CM) in pasture-based systems. The objective of this study was to identify the profitability of AM in a pasture-based system, relative to CM herringbone parlors with 2 different levels of automation, across 2 farm sizes, over a 10-yr period following initial investment. The scenarios which were evaluated were (1) a medium farm milking 70 cows twice daily, with 1 AM unit, a 12-unit CM medium-specification (MS) parlor and a 12-unit CM high-specification (HS) parlor, and (2) a large farm milking 140 cows twice daily with 2 AM units, a 20-unit CM MS parlor and a 20-unit CM HS parlor. A stochastic whole-farm budgetary simulation model combined capital investment costs and annual labor and maintenance costs for each investment scenario, with each scenario evaluated using multiple financial metrics, such as annual net profit, annual net cash flow, total discounted net profitability, total discounted net cash flow, and return on investment. The capital required for each investment was financed from borrowings at an interest rate of 5% and repaid over 10-yr, whereas milking equipment and building infrastructure were depreciated over 10 and 20 yr, respectively. A supporting labor audit (conducted on both AM and CM farms) showed a 36% reduction in labor demand associated with AM. However, despite this reduction in labor, MS CM technologies consistently achieved greater profitability, irrespective of farm size. The AM system achieved intermediate profitability at medium farm size; it was 0.5% less profitable than HS technology at the large farm size. The difference in profitability was greatest in the years after the initial investment. This study indicated that although milking with AM was less profitable than MS technologies, it was competitive when compared with a CM parlor of similar technology.

  1. Investment appraisal of automatic milking and conventional milking technologies in a pasture-based dairy system.

    PubMed

    Shortall, J; Shalloo, L; Foley, C; Sleator, R D; O'Brien, B

    2016-09-01

    The successful integration of automatic milking (AM) systems and grazing has resulted in AM becoming a feasible alternative to conventional milking (CM) in pasture-based systems. The objective of this study was to identify the profitability of AM in a pasture-based system, relative to CM herringbone parlors with 2 different levels of automation, across 2 farm sizes, over a 10-yr period following initial investment. The scenarios which were evaluated were (1) a medium farm milking 70 cows twice daily, with 1 AM unit, a 12-unit CM medium-specification (MS) parlor and a 12-unit CM high-specification (HS) parlor, and (2) a large farm milking 140 cows twice daily with 2 AM units, a 20-unit CM MS parlor and a 20-unit CM HS parlor. A stochastic whole-farm budgetary simulation model combined capital investment costs and annual labor and maintenance costs for each investment scenario, with each scenario evaluated using multiple financial metrics, such as annual net profit, annual net cash flow, total discounted net profitability, total discounted net cash flow, and return on investment. The capital required for each investment was financed from borrowings at an interest rate of 5% and repaid over 10-yr, whereas milking equipment and building infrastructure were depreciated over 10 and 20 yr, respectively. A supporting labor audit (conducted on both AM and CM farms) showed a 36% reduction in labor demand associated with AM. However, despite this reduction in labor, MS CM technologies consistently achieved greater profitability, irrespective of farm size. The AM system achieved intermediate profitability at medium farm size; it was 0.5% less profitable than HS technology at the large farm size. The difference in profitability was greatest in the years after the initial investment. This study indicated that although milking with AM was less profitable than MS technologies, it was competitive when compared with a CM parlor of similar technology. PMID:27423956

  2. Automatic parametrization of non-polar implicit solvent models for the blind prediction of solvation free energies

    NASA Astrophysics Data System (ADS)

    Wang, Bao; Zhao, Zhixiong; Wei, Guo-Wei

    2016-09-01

    In this work, a systematic protocol is proposed to automatically parametrize the non-polar part of implicit solvent models with polar and non-polar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory based polarizable Poisson model for modeling polar solvation free energies. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the polar component. For the non-polar component, either the standard model of surface area, molecular volume, and van der Waals interactions or a model with atomic surface areas and molecular volume is employed. To automatically parametrize a non-polar model, we develop scoring and ranking algorithms to classify solute molecules; their non-polar parametrization is obtained based on the assumption that similar molecules have similar parametrizations. A large database with 668 experimental data points is collected and employed to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five subsets of the database, i.e., SAMPL0-SAMPL4, are employed to further validate the proposed protocol. The optimal RMS errors are 0.93, 2.82, 1.90, 0.78, and 1.03 kcal/mol, respectively, for the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 test sets. The corresponding RMS errors for the polarizable Poisson model with the Amber Bondi radii are 0.93, 2.89, 1.90, 1.16, and 1.07 kcal/mol, respectively.

  3. Automatic and Quantitative Measurement of Collagen Gel Contraction Using Model-Guided Segmentation.

    PubMed

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R; Zhao, Chunfeng; Amadio, Peter C; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behaviors and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model (DCM) which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods. PMID:24092954

  4. Two-Stage Automatic Calibration and Predictive Uncertainty Analysis of a Semi-distributed Watershed Model

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Radcliffe, D. E.; Doherty, J.

    2004-12-01

    Automatic calibration has been applied to conceptual rainfall-runoff models for more than three decades, usually to lumped models. Even when a (semi-)distributed model that allows spatial variability of parameters is calibrated using an automated process, the parameters are often lumped over space so that the model is effectively simplified to a lumped model. Our objective was to develop a two-stage routine for automatically calibrating the Soil and Water Assessment Tool (SWAT, a semi-distributed watershed model) that would find the optimal values for the model parameters, preserve the spatial variability of essential parameters, and lead to a measure of the model prediction uncertainty. In the first stage of the proposed calibration scheme, a global search method, namely the Shuffled Complex Evolution (SCE-UA) method, was employed to find the "best" values of the lumped model parameters. That is, in order to limit the number of calibrated parameters, the model parameters were assumed to be invariant over different subbasins and hydrologic response units (HRUs, the basic calculation unit in the SWAT model). In the second stage, however, the spatial variability of the original model parameters was restored and the number of calibrated parameters was dramatically increased (from a few to nearly a hundred). Hence, a local search method, namely a variation of the Levenberg-Marquardt method, was preferred to find the more distributed set of parameters, using the results of the previous stage as starting values. Furthermore, in order to prevent the parameters from taking extreme values, a strategy called "regularization" was adopted, through which the distributed parameters were constrained to vary as little as possible from the initial values of the lumped parameters. We calibrated the stream flow in the Etowah River measured at Canton, GA (a watershed area of 1,580 km2) for the years 1983-1992 and used the years 1993-2001 for validation. Calibration for daily and
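
    A minimal sketch of the second-stage regularised fit described above: distributed parameters are adjusted to match the observed flows while being penalised for departing from the lumped first-stage values. The simulate wrapper around the watershed model, the penalty weight, and the use of scipy's trust-region reflective solver (in place of the Levenberg-Marquardt variant mentioned above) are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    def regularised_calibration(simulate, observed, theta_lumped, lam=1.0):
        """Fit distributed parameters, penalising deviation from the lumped values.

        simulate(theta) -> simulated flow series (user-supplied model wrapper);
        observed        -> measured flow series;
        theta_lumped    -> first-stage lumped parameter values, used as the prior;
        lam             -> regularisation weight.
        """
        theta0 = np.asarray(theta_lumped, dtype=float)
        obs = np.asarray(observed, dtype=float)

        def residuals(theta):
            misfit = simulate(theta) - obs                    # data residuals
            penalty = np.sqrt(lam) * (theta - theta0)         # deviation from lumped values
            return np.concatenate([misfit, penalty])

        result = least_squares(residuals, theta0, method="trf")
        return result.x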

  5. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico-mathematical models

    NASA Astrophysics Data System (ADS)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multivariant physico-mathematical models of a control system is presented, together with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  6. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  7. Image based hair segmentation algorithm for the application of automatic facial caricature synthesis.

    PubMed

    Shen, Yehu; Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis, and accurate detection and representation of the hair region is a key component of automatic facial caricature synthesis. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis from a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and the hair color likelihood distribution function are estimated efficiently from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood; this energy function is then optimized using the graph cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm has been applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.

  8. Research on large spatial coordinate automatic measuring system based on multilateral method

    NASA Astrophysics Data System (ADS)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, an automatic manipulator-based measurement system built on the multilateral method is developed. The system is divided into two parts: the coordinate measurement subsystem, which consists of four laser tracers, and the trajectory generation subsystem, which is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the laser tracer measuring centers and the measuring center of the cat's eye reflector, and an algorithm for automatically adjusting the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to automatically plan the orientations of the reflector corresponding to the 24 given points. After improving the orientations of a minority of points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The results show that the proposed algorithm enables the developed system to measure spatial coordinates over a large range efficiently.

  9. Multi-atlas-based automatic 3D segmentation for prostate brachytherapy in transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Nouranian, Saman; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, S. E.; Abolmaesumi, P.

    2013-03-01

    One of the commonly used treatment methods for early-stage prostate cancer is brachytherapy. The standard of care for planning this procedure is segmentation of contours from transrectal ultrasound (TRUS) images, which closely follow the prostate boundary; this is currently performed either manually or using semi-automatic techniques. This paper introduces a fully automatic segmentation algorithm which uses a priori knowledge of contours in a reference data set of TRUS volumes. A non-parametric deformable registration method is employed to transform the atlas prostate contours to the target image coordinate frame. All atlas images are sorted based on their registration results, and the highest-ranked registrations are selected for decision fusion. A Simultaneous Truth and Performance Level Estimation algorithm is utilized to fuse labels from the registered atlases and produce a segmented target volume. In this experiment, 50 patient TRUS volumes were obtained and a leave-one-out study on the TRUS volumes is reported. We also compare our results with a state-of-the-art semi-automatic prostate segmentation method that has been used clinically for planning prostate brachytherapy procedures, and we show comparable accuracy and precision within a clinically acceptable runtime.

  10. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images

    PubMed Central

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extraction and analysis software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the relevant structures and their boundaries in a very low brightness-contrast environment; our fuzzy sigma binarization process is one answer to that problem. Another difficulty is compensating for the information loss that occurs during such image processing procedures, and many morphologically motivated image processing algorithms are applied for that purpose. The proposed method is verified as successful in extracting DCFs and measuring their thickness in an experiment using two hundred 800 × 600 DICOM ultrasonography images, with a 98.5% extraction rate. Also, the DCF thickness automatically measured by this software shows only a small difference (less than 0.3 cm) for 89.8% of the extracted DCFs. PMID:26949411

  11. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images.

    PubMed

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extraction and analysis software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the relevant structures and their boundaries in a very low brightness-contrast environment; our fuzzy sigma binarization process is one answer to that problem. Another difficulty is compensating for the information loss that occurs during such image processing procedures, and many morphologically motivated image processing algorithms are applied for that purpose. The proposed method is verified as successful in extracting DCFs and measuring their thickness in an experiment using two hundred 800 × 600 DICOM ultrasonography images, with a 98.5% extraction rate. Also, the DCF thickness automatically measured by this software shows only a small difference (less than 0.3 cm) for 89.8% of the extracted DCFs. PMID:26949411

  12. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights from 2D images on the basis of computer vision is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R² value were obtained by regressing each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between actual volume and abalone weight was 0.999, showing a very high correlation. Consequently, to estimate abalone volumes easily from computer vision, volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula for estimating abalone volume was derived through linear regression between the calculated and actual volumes. The final automatic abalone grading algorithm combines this volume estimation formula with the regression formula between actual volume and abalone weight. For abalones weighing from 16.51 to 128.01 g, evaluation of the algorithm via cross-validation indicates root-mean-square and worst-case prediction errors of 2.8 g and ±8 g, respectively. PMID:25874500
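
    A minimal sketch of the two-stage estimation described above: approximate the abalone as half of an oblate ellipsoid, then chain two linear regressions (calculated volume to actual volume, actual volume to weight). The regression coefficients are placeholders that would come from training data, not values from the paper:

```python
# Weight estimation sketch: half-oblate-ellipsoid volume from length, width and
# thickness, then two fitted linear mappings. Coefficients are placeholders.
import numpy as np

def half_ellipsoid_volume(length_cm, width_cm, thickness_cm):
    # half of an ellipsoid with semi-axes length/2, width/2 and thickness (assumed)
    return 0.5 * (4.0 / 3.0) * np.pi * (length_cm / 2) * (width_cm / 2) * thickness_cm

def estimate_weight(length_cm, width_cm, thickness_cm,
                    vol_coef=(1.0, 0.0), weight_coef=(1.0, 0.0)):
    v_calc = half_ellipsoid_volume(length_cm, width_cm, thickness_cm)
    v_actual = vol_coef[0] * v_calc + vol_coef[1]        # calculated -> actual volume
    return weight_coef[0] * v_actual + weight_coef[1]    # actual volume -> weight (g)
```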

  13. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights from 2D images on the basis of computer vision is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, a regression formula and R² value were obtained by regressing each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between actual volume and abalone weight was 0.999, showing a very high correlation. Consequently, to estimate abalone volumes easily from computer vision, volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula for estimating abalone volume was derived through linear regression between the calculated and actual volumes. The final automatic abalone grading algorithm combines this volume estimation formula with the regression formula between actual volume and abalone weight. For abalones weighing from 16.51 to 128.01 g, evaluation of the algorithm via cross-validation indicates root-mean-square and worst-case prediction errors of 2.8 g and ±8 g, respectively.

  14. An automatic recognition method of pointer instrument based on improved Hough transform

    NASA Astrophysics Data System (ADS)

    Xu, Li; Fang, Tian; Gao, Xiaoyu

    2015-10-01

    A method for the automatic recognition of pointer instruments based on an improved Hough transform is proposed in this paper. Automatic pointer-instrument recognition must work under all kinds of lighting conditions, but binarization accuracy suffers when the light is too strong or too dark. Therefore, an improved Otsu method is proposed to achieve adaptive thresholding of pointer-instrument images under varying lighting conditions. Based on the characteristics of the dial image, the Otsu method is first used to obtain the maximum between-class variance and an initial threshold; the between-class variance is then analyzed to judge whether the image is bright or dark. When the image is too bright or too dark, the low-weight pixels are discarded and the initial threshold is recomputed with the Otsu method iteratively until the best binarized image is obtained. The straight line of the pointer in the binarized image is then transformed into the Hough parameter space with the improved Hough transform, and the position of the pointer line is determined by searching for the maximum accumulator value at each angle. Finally, using the angle method, the pointer reading is obtained from the linear relationship between the instrument's initial scale mark and the pointer angle. Results show that the improved Otsu method yields an accurate binarized image even when the light is too bright or too dark, which improves the adaptability of the recognition to different lighting conditions. For pressure gauges with a range of 60 MPa, the relative identification error reached 0.005 when the improved Hough transform algorithm was used.
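
    A rough sketch of such a reading pipeline using standard OpenCV primitives (plain Otsu thresholding and the classical Hough line transform rather than the paper's improved variants); the scale-calibration angles below are illustrative:

```python
# Pointer reading sketch: Otsu binarization, Hough line detection, then a linear
# angle-to-value mapping. Calibration constants are illustrative placeholders.
import cv2
import numpy as np

def read_pointer_angle(gray_dial):
    """gray_dial: 8-bit grayscale dial image, pointer darker than background."""
    _, binary = cv2.threshold(gray_dial, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, 120)   # rho, theta, accumulator threshold
    if lines is None:
        return None
    rho, theta = lines[0][0]          # strongest accumulator peak = pointer line
    return np.degrees(theta)

def angle_to_reading(angle_deg, zero_angle_deg=45.0, full_angle_deg=315.0, full_scale=60.0):
    # linear mapping from pointer angle to gauge value (e.g. a 60 MPa pressure gauge)
    return (angle_deg - zero_angle_deg) / (full_angle_deg - zero_angle_deg) * full_scale
```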

  15. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior, which aids ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain machine learning (ML) approaches used to extract relevant samples from a big data space and apply them to ASR, using certain soft computing techniques, for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types; the features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  16. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior, which aids ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain machine learning (ML) approaches used to extract relevant samples from a big data space and apply them to ASR, using certain soft computing techniques, for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types; the features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  17. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    SciTech Connect

    Schoot, A. J. A. J. van de; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation
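
    For reference, the Dice similarity coefficient used for validation can be computed from two boolean masks as follows (a generic helper, not the authors' code):

```python
# Dice similarity coefficient between an automatic segmentation and a manual
# delineation; both inputs are boolean volumes of the same shape.
import numpy as np

def dice_similarity(seg_a, seg_b):
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())
```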

  18. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    NASA Astrophysics Data System (ADS)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications such as virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and are memory inefficient. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. First, a mesh parameterization procedure consisting of mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D textures to the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset; the resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation and that the proposed framework is effective and useful for the automatic texture reconstruction of 3D city models.

  19. Noise Robust Feature Scheme for Automatic Speech Recognition Based on Auditory Perceptual Mechanisms

    NASA Astrophysics Data System (ADS)

    Cai, Shang; Xiao, Yeming; Pan, Jielin; Zhao, Qingwei; Yan, Yonghong

    Mel Frequency Cepstral Coefficients (MFCC) are the most popular acoustic features used in automatic speech recognition (ASR), mainly because the coefficients capture the most useful information of the speech and fit well with the assumptions used in hidden Markov models. As is well known, MFCCs already employ several principles which have known counterparts in the peripheral properties of human hearing: decoupling across frequency, mel-warping of the frequency axis, log-compression of energy, etc. It is natural to introduce more mechanisms in the auditory periphery to improve the noise robustness of MFCC. In this paper, a k-nearest neighbors based frequency masking filter is proposed to reduce the audibility of spectra valleys which are sensitive to noise. Besides, Moore and Glasberg's critical band equivalent rectangular bandwidth (ERB) expression is utilized to determine the filter bandwidth. Furthermore, a new bandpass infinite impulse response (IIR) filter is proposed to imitate the temporal masking phenomenon of the human auditory system. These three auditory perceptual mechanisms are combined with the standard MFCC algorithm in order to investigate their effects on ASR performance, and a revised MFCC extraction scheme is presented. Recognition performances with the standard MFCC, RASTA perceptual linear prediction (RASTA-PLP) and the proposed feature extraction scheme are evaluated on a medium-vocabulary isolated-word recognition task and a more complex large vocabulary continuous speech recognition (LVCSR) task. Experimental results show that consistent robustness against background noise is achieved on these two tasks, and the proposed method outperforms both the standard MFCC and RASTA-PLP.
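
    For reference, the Moore and Glasberg equivalent rectangular bandwidth is commonly written as ERB(f) = 24.7 (4.37 f/1000 + 1) Hz; a small helper (with the k-nearest-neighbor masking filter and the IIR temporal-masking filter omitted) might look like:

```python
# Moore & Glasberg equivalent rectangular bandwidth (ERB) in Hz, used to set the
# bandwidth of each filter; f_hz is the filter centre frequency in Hz.
def erb_bandwidth(f_hz):
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Example: the ERB at 1 kHz is roughly 132.6 Hz.
print(erb_bandwidth(1000.0))
```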

  20. A transition-constrained discrete hidden Markov model for automatic sleep staging

    PubMed Central

    2012-01-01

    Background Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, the 2-fold cross validation was performed during this experiment. Results Overall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurate stage was SWS (94.9%), and the least-accurately classified stage was S1 (<34%). In the majority of cases, S1 was classified as Wake (21%), S2 (33%) or REM sleep (12%), consistent with previous studies. However, the total time of S1 in the 20 all-night sleep recordings was less than 4%. Conclusion The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies. PMID:22908930
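
    A minimal sketch of the mechanism behind a transition-constrained discrete HMM: physiologically implausible stage transitions receive zero probability in the transition matrix, so Viterbi decoding can never produce them. The matrices here are illustrative placeholders, not the trained model:

```python
# Viterbi decoding for a discrete HMM; entries set to zero in trans_p make the
# corresponding stage transitions impossible in the decoded hypnogram.
import numpy as np

def viterbi(observations, start_p, trans_p, emit_p):
    """observations: sequence of discrete symbol indices."""
    n_states, T = len(start_p), len(observations)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    with np.errstate(divide="ignore"):                     # log(0) -> -inf is intended
        log_start, log_trans, log_emit = np.log(start_p), np.log(trans_p), np.log(emit_p)
    logp[0] = log_start + log_emit[:, observations[0]]
    for t in range(1, T):
        scores = logp[t - 1][:, None] + log_trans          # forbidden moves stay -inf
        back[t] = scores.argmax(axis=0)
        logp[t] = scores.max(axis=0) + log_emit[:, observations[t]]
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):                          # backtrace the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```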

  1. Automatic segmentation and statistical shape modeling of the paranasal sinuses to estimate natural variations

    NASA Astrophysics Data System (ADS)

    Sinha, Ayushi; Leonard, Simon; Reiter, Austin; Ishii, Masaru; Taylor, Russell H.; Hager, Gregory D.

    2016-03-01

    We present an automatic segmentation and statistical shape modeling system for the paranasal sinuses which allows us to locate structures in and around the sinuses, as well as to observe the variability in these structures. This system involves deformably registering a given patient image to a manually segmented template image, and using the resulting deformation field to transfer labels from the template to the patient image. We use 3D snake splines to correct errors in this initial segmentation. Once we have several accurately segmented images, we build statistical shape models to observe the population mean and variance for each structure. These shape models are useful to us in several ways. Regular registration methods are insufficient to accurately register pre-operative computed tomography (CT) images with intra-operative endoscopy video of the sinuses. This is because of deformations that occur in structures containing erectile tissue. Our aim is to estimate these deformations using our shape models in order to improve video-CT registration, as well as to distinguish normal variations in anatomy from abnormal variations, and automatically detect and stage pathology. We can also compare the mean shapes and variances in different populations, such as different genders or ethnicities, in order to observe differences and similarities, as well as in different age groups in order to observe the developmental changes that occur in the sinuses.
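
    A minimal sketch of the statistical-shape-model step, assuming aligned landmark sets with point correspondences are already available (a generic PCA construction, not the authors' pipeline):

```python
# Point-based statistical shape model: mean shape plus principal modes of
# variation obtained from the SVD of the centered landmark matrix.
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_samples, n_landmarks*3) array of aligned, corresponding landmarks."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # rows of vt are the modes; singular values give each mode's variance
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (shapes.shape[0] - 1)
    return mean_shape, vt, variances

def synthesize(mean_shape, modes, coefficients):
    # a new plausible shape = mean + weighted sum of the leading modes
    return mean_shape + coefficients @ modes[: len(coefficients)]
```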

  2. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeated application of the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations; and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied on 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as threshold, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and manual registration which was used as the gold standard. Results showed that our GA implementation was a robust algorithm and gives very close results to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
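
    A toy sketch of the GA search over transformation parameters, where each individual is a parameter vector (for example three rotations and three translations) and the fitness is a voxel similarity measure supplied by the caller; the population size, mutation scale and similarity function are all illustrative assumptions:

```python
# Generic genetic-algorithm loop: selection of the fittest half, uniform
# crossover, Gaussian mutation. The fitness callable is a placeholder for e.g.
# negative SSD between the transformed moving image and the fixed image.
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, n_params=6, pop_size=50, generations=100, sigma=0.1):
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_params) < 0.5                     # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, sigma, n_params)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]
```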

  3. Automatic urban road extraction on DSM data based on fuzzy ART, region growing, morphological operations and radon transform

    NASA Astrophysics Data System (ADS)

    Herumurti, Darlis; Uchimura, Keiichi; Koutaki, Gou; Uemura, Takumi

    2013-10-01

    In recent years, automatic urban road extraction, as part of Intelligent Transportation research, has attracted researchers because of its important role in modern transportation, where urban areas play the main role within the transportation system. In this work, we propose a new combination of fuzzy ART clustering, region growing, morphological operations and the Radon transform (ARMOR) for automatic extraction of urban road networks from a digital surface model (DSM). The DSM data, which are based on surface elevation, overcome the serious building-shadow problem found in aerial photo images. Because of the elevation difference between roads and buildings, a thresholding technique yields a fast initial road extraction; the threshold values are obtained from fuzzy ART clustering of the geometrical points in the histogram. The initial road is then expanded using region growing. Although most of the road regions are extracted at this point, the result still contains many non-road areas and the edges are rough. A fast way to smooth the region is to apply the morphological closing operation. Furthermore, we filter road lines with an opening operation using a line-shaped structuring element, where the line orientation is obtained from the Radon transform. Finally, the road network is constructed with B-splines from the extracted road skeleton. Experimental results show that the proposed method runs faster and increases quality and accuracy by about 10% over the best result of the compared methods.
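
    A rough sketch of the post-processing chain on a binary road mask, using generic scikit-image and SciPy building blocks rather than the authors' implementation (the variance-based choice of the dominant Radon angle is an assumption):

```python
# Closing to smooth the initial road region, dominant orientation from the Radon
# transform, then opening with a line-shaped structuring element at that angle.
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_closing, binary_opening, disk
from skimage.transform import radon

def clean_road_mask(road_mask, line_length=21):
    smoothed = binary_closing(road_mask, disk(3))
    # dominant road orientation: angle whose Radon projection varies the most
    angles = np.arange(0.0, 180.0)
    sinogram = radon(smoothed.astype(float), theta=angles, circle=False)
    dominant = angles[np.argmax(sinogram.var(axis=0))]
    # line-shaped structuring element rotated to the dominant orientation
    line = np.zeros((line_length, line_length))
    line[line_length // 2, :] = 1
    line = ndimage.rotate(line, dominant, reshape=False, order=0) > 0
    return binary_opening(smoothed, line)
```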

  4. A learning-based automatic clinical organ segmentation in medical images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Samarabandu, Jagath; Li, Shuo; Ross, Ian; Garvin, Greg

    2007-03-01

    Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances clinical diagnosis. Although many algorithms have been proposed, it is challenging to achieve automatic clinical organ segmentation, which requires both speed and robustness. Automatically segmenting cardiac Magnetic Resonance Imaging (MRI) images is extremely challenging due to the artifacts of cardiac motion and the characteristics of MRI; moreover, many of the existing algorithms are specific to a particular view of cardiac MRI images. We propose a generic, view-independent, learning-based method to automatically segment cardiac MRI images, which uses machine learning techniques and geometric shape information. A main feature of our contribution is that the proposed algorithm can use a training set containing a mix of various views and is able to successfully segment any given view. The proposed method consists of four stages. First, we partition the input image into a number of image regions based on their intensity characteristics. Then, we calculate the pre-selected feature descriptions for each generated region and use a trained classifier to estimate the conditional probabilities for every pixel based on the calculated features. In this paper, we use the Support Vector Machine (SVM) to train our classifier. The learned conditional probabilities of every pixel are then fed into an energy function to segment the input image. We optimize our energy function with graph cuts. Finally, domain knowledge is applied to verify the segmentation. Experimental results show that this method is very efficient and robust with respect to image views, slices and motion phases. The method also has the potential to be imaging-modality independent, as the proposed algorithm is not specific to a particular imaging modality.
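
    A minimal sketch of the per-region classification step, assuming regions are given as boolean masks; the feature set and the use of scikit-learn's SVC are illustrative stand-ins for the paper's pre-selected descriptors and trained SVM:

```python
# Compute simple descriptors for intensity-homogeneous regions and obtain class
# probabilities from a trained SVM; those probabilities would then feed the
# graph-cut energy. Feature choice is illustrative only.
import numpy as np
from sklearn.svm import SVC

def region_features(image, region_mask):
    values = image[region_mask]
    ys, xs = np.nonzero(region_mask)
    return [values.mean(), values.std(), region_mask.sum(), ys.mean(), xs.mean()]

def train_region_classifier(feature_rows, labels):
    clf = SVC(probability=True)             # probabilistic outputs for the energy term
    clf.fit(np.asarray(feature_rows), labels)
    return clf

def region_probabilities(clf, image, region_masks):
    feats = [region_features(image, m) for m in region_masks]
    return clf.predict_proba(np.asarray(feats))
```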

  5. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture and is thus potentially implementable in real time.

  6. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. The facial expression recognition is based on features extracted from the tracking of not only individual landmarks but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  7. [A brain tumor automatic assisted-diagnostic system based on medical image shape analysis].

    PubMed

    Wang, Li-Li; Yang, Jie

    2005-03-01

    This paper presents a brain tumor assisted-diagnosis system based on medical image analysis. The system supplements PACS functions such as display of medical images and database inquiry, segments slices in real time using a fuzzy region competition algorithm, extracts shape feature factors such as contour label, compactness, moments, Fourier descriptors, chord length and radius, together with other medical data, from the segmented brain tumor images with irregular contours, and then feeds them to a Bayesian network in order to classify the brain tumor and implement automatic assisted diagnosis. PMID:16011110
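
    A minimal sketch of two of the listed shape factors computed from a binary tumor mask with OpenCV (the normalization of the Fourier descriptors is one common convention, not necessarily the authors'):

```python
# Compactness (perimeter^2 / (4*pi*area), equal to 1 for a circle) and the first
# few Fourier descriptors of the largest contour in a binary mask.
import numpy as np
import cv2

def shape_factors(mask, n_descriptors=8):
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)                 # largest region
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    compactness = perimeter ** 2 / (4.0 * np.pi * area)
    # Fourier descriptors: FFT of the complex boundary, normalized for scale
    points = contour[:, 0, :]
    boundary = points[:, 0] + 1j * points[:, 1]
    spectrum = np.fft.fft(boundary - boundary.mean())
    descriptors = np.abs(spectrum[1:n_descriptors + 1]) / np.abs(spectrum[1])
    return compactness, descriptors
```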

  8. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially, when serial MR imaging is performed to evaluate the kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of the kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data. The model is initially localized based on the intensity profiles in three directions. The weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

  9. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  10. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud

    NASA Astrophysics Data System (ADS)

    Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.

    2014-03-01

    Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.

  11. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  12. Automatic method for building indoor boundary models from dense point clouds collected by laser scanners.

    PubMed

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.

  13. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  14. Biothermal Model of Patient and Automatic Control System of Brain Temperature for Brain Hypothermia Treatment

    NASA Astrophysics Data System (ADS)

    Wakamatsu, Hidetoshi; Gaohua, Lu

    Various surface-cooling apparatus, such as cooling caps, mufflers and blankets, have been commonly used to cool the brain and provide hypothermic neuro-protection for patients with hypoxic-ischemic encephalopathy. The present paper addresses brain temperature regulation from the viewpoint of automatic control, in order to help clinicians decide an optimal temperature for the cooling fluid supplied to these three types of apparatus. First, a biothermal model characterized by dynamic ambient temperatures is constructed for an adult patient, taking into account the clinical practice of hypothermia and anesthesia in brain hypothermia treatment. Second, the model is represented by a state equation as a lumped-parameter linear dynamic system. The biothermal model is justified by its responses, which correspond to clinical phenomena and treatment. Finally, an optimal regulator is tentatively designed to give clinicians suggestions on the optimal temperature regulation of the patient's brain; it indicates that the patient's brain temperature can be optimally controlled to follow the temperature course prescribed by the clinicians. This study points to a promising clinical possibility for automatic hypothermia treatment.
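
    A minimal sketch of the optimal-regulator design for a lumped-parameter linear state-space model x' = Ax + Bu: solve the continuous algebraic Riccati equation and form the state-feedback gain (A, B and the weighting matrices Q, R are placeholders for the biothermal model and chosen costs):

```python
# LQR design sketch: K = R^{-1} B^T P with P from the continuous algebraic
# Riccati equation; the control law is u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)
```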

  15. Automatic processing and modeling of GPR data for pavement thickness and properties

    NASA Astrophysics Data System (ADS)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data, and to automatically calibrate, position, process, and full waveform model it without operator intervention. Vehicle suspension system bounce is automatically compensated (for varying antenna height). Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled to have cross-sectional images and computed pavement thickness displayed in real time at highway speeds.
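
    For reference, the basic relation used to turn a reflection's two-way travel time into layer thickness is d = c t / (2 √εr), with εr the relative dielectric permittivity of the pavement layer (the example values below are illustrative):

```python
# Two-way travel time to pavement thickness: d = c * t / (2 * sqrt(eps_r)).
C = 299_792_458.0            # speed of light in vacuum, m/s

def layer_thickness_m(two_way_time_s, rel_permittivity):
    return C * two_way_time_s / (2.0 * rel_permittivity ** 0.5)

# Example: a 3.2 ns two-way time in asphalt with eps_r ~ 6 gives roughly 0.20 m.
print(layer_thickness_m(3.2e-9, 6.0))
```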

  16. Automatic classifier based on heart rate variability to identify fallers among hypertensive subjects.

    PubMed

    Melillo, Paolo; Jovic, Alan; De Luca, Nicola; Pecchia, Leandro

    2015-08-01

    Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal to identify fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80 and 72%, respectively), but low sensitivity (51%). These results suggested that ANS states causing falls could be reliably detected, but also that not all the falls were due to ANS states. PMID:26609412
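
    A minimal sketch of two standard time-domain HRV features (SDNN and RMSSD) computed from RR intervals in milliseconds; these are typical of the linear HRV properties analysed in such excerpts, not the paper's exact feature set:

```python
# SDNN (overall variability) and RMSSD (short-term variability) from RR intervals.
import numpy as np

def hrv_time_domain(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd
```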

  17. Automatic classifier based on heart rate variability to identify fallers among hypertensive subjects

    PubMed Central

    Jovic, Alan; De Luca, Nicola; Pecchia, Leandro

    2015-01-01

    Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal to identify fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80 and 72%, respectively), but low sensitivity (51%). These results suggested that ANS states causing falls could be reliably detected, but also that not all the falls were due to ANS states. PMID:26609412

  18. UAS-based automatic bird count of a common gull colony

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2013-08-01

    The standard procedure for counting birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people to go from nest to nest counting the birds and the clutches. High-resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (± 5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Building on the experience from 2011, the automatic bird count of 2012 became more efficient and more accurate: 1938 birds were counted with an accuracy of approximately ± 3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.

  19. Automatic Liver Segmentation on Volumetric CT Images Using Supervoxel-Based Graph Cuts

    PubMed Central

    Wu, Weiwei; Zhou, Zhuhuang; Wu, Shuicai; Zhang, Yanhua

    2016-01-01

    Accurate segmentation of liver from abdominal CT scans is critical for computer-assisted diagnosis and therapy. Despite many years of research, automatic liver segmentation remains a challenging task. In this paper, a novel method was proposed for automatic delineation of liver on CT volume images using supervoxel-based graph cuts. To extract the liver volume of interest (VOI), the region of abdomen was firstly determined based on maximum intensity projection (MIP) and thresholding methods. Then, the patient-specific liver VOI was extracted from the region of abdomen by using a histogram-based adaptive thresholding method and morphological operations. The supervoxels of the liver VOI were generated using the simple linear iterative clustering (SLIC) method. The foreground/background seeds for graph cuts were generated on the largest liver slice, and the graph cuts algorithm was applied to the VOI supervoxels. Thirty abdominal CT images were used to evaluate the accuracy and efficiency of the proposed algorithm. Experimental results show that the proposed method can detect the liver accurately with significant reduction of processing time, especially when dealing with diseased liver cases. PMID:27127536

  20. Automatic Detection and Recognition of Man-Made Objects in High Resolution Remote Sensing Images Using Hierarchical Semantic Graph Model

    NASA Astrophysics Data System (ADS)

    Sun, X.; Thiele, A.; Hinz, S.; Fu, K.

    2013-05-01

    In this paper, we propose a hierarchical semantic graph model to detect and recognize man-made objects in high-resolution remote sensing images automatically. Following the idea of part-based methods, our model builds a hierarchical probabilistic framework to exploit both the appearance information and the semantic relationships between objects and background. This multi-level structure promises to enable a more comprehensive understanding of natural scenes. After training local classifiers to calculate part properties, we use belief propagation to transmit messages quantitatively, which enhances the use of the spatial constraints existing in the images. In addition, discriminative learning and generative learning are combined in an interleaved manner in the inference procedure to improve the training error and recognition efficiency. The experimental results demonstrate that this method is able to detect man-made objects in complicated surroundings with satisfactory precision and robustness.

  1. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    PubMed

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido; Calabrò, Rocco S

    2015-10-01

    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders owing to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model that overlaps the original image with quite good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed. PMID:26427894

  2. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametric requirements, while available data are scarce. There is therefore a need to search for new sources of information, such as satellite data. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied, but this kind of data forces us to work with spatio-temporal information, and there is a lack of statistics and methodologies for incorporating spatio-temporal data into the calibration and validation processes. This research is intended as a contribution in that direction. The ecohydrological model was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques is proposed to calibrate the model taking into account both the temporal variation and the spatial pattern of the observed NDVI and the simulated LAI. The results demonstrate that: (1) satellite data are an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated using only satellite data, is able to reproduce the vegetation dynamics (in time and in space) as well as the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, that is, it takes into account both the temporal variation and the spatial pattern of the analyzed data.
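
    A minimal sketch of how an SVD-based spatio-temporal misfit between observed NDVI and simulated LAI could be assembled, with the mode comparison and equal weighting as illustrative assumptions rather than the authors' formulation:

```python
# Decompose each space x time matrix with the SVD and compare the leading spatial
# and temporal modes, so a calibration objective penalizes differences in both
# pattern and dynamics. Illustrative sketch only.
import numpy as np

def leading_modes(field, k=3):
    """field: (n_pixels, n_times) matrix of one variable."""
    anomalies = field - field.mean(axis=1, keepdims=True)   # remove each pixel's mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    return u[:, :k], s[:k], vt[:k]          # spatial modes, energies, temporal modes

def spatio_temporal_misfit(ndvi_obs, lai_sim, k=3):
    u_o, _, v_o = leading_modes(ndvi_obs, k)
    u_s, _, v_s = leading_modes(lai_sim, k)
    spatial = np.linalg.norm(np.abs(u_o) - np.abs(u_s))     # sign-insensitive comparison
    temporal = np.linalg.norm(np.abs(v_o) - np.abs(v_s))
    return spatial + temporal
```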

  3. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  4. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  5. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been carried out actively for a long time, and the need for it has recently increased rapidly because of improved geo-web services and the popularity of smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for modelling, but there are some limitations: it is difficult to update building models immediately after a change, many buildings lack a 3D model and texture, and large resources are required for maintenance and updating. To address these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated with this method were compared with actual measurements of real buildings by comparing the ratios of corresponding edge lengths; the results showed an average length-ratio error of 5.8%. With this method, we can generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  6. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in the ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets: clinical abdominal CT scans from 20 patients (10 male and 10 female), and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 deg. and 0.03 over all organs, and about 3.5709 mm, 0.35 deg. and 0.025 over all foot bones, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  7. [Wearable Automatic External Defibrillators].

    PubMed

    Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan

    2015-11-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement, and a discharge defibrillation module; the device can automatically identify the VF signal and deliver a biphasic exponential waveform defibrillation discharge. As verified by animal tests, the device can acquire the ECG and automatically identify VF. After identifying the ventricular fibrillation signal, it can automatically deliver a defibrillation shock to abort ventricular fibrillation and achieve cardioversion.

  8. Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension

    ERIC Educational Resources Information Center

    Rawson, Katherine A.; Middleton, Erica L.

    2009-01-01

    A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the…

  9. Automatic Training Sample Selection for a Multi-Evidence Based Crop Classification Approach

    NASA Astrophysics Data System (ADS)

    Chellasamy, M.; Ferre, P. A. Ty; Humlekrog Greve, M.

    2014-09-01

    An approach to using available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source of training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed approach is named ECRA (Ensemble-based Cluster Refinement Approach). ECRA first automatically removes mislabeled samples and then selects the refined training samples in an iterative training-reclassification scheme. Mislabel removal is based on the expectation that mislabels in each class will be far from the cluster centroid. However, this must be a soft constraint, especially when working with a hypothesis space that does not contain a good approximation of the target classes. Difficulty in finding a good approximation often exists either due to less informative data or a large hypothesis space. Thus, this approach uses the spectral, texture and indices domains in an ensemble framework to iteratively remove the mislabeled pixels from the crop clusters declared by the farmers. Once the clusters are refined, the selected border samples are used for final learning and the unknown samples are classified using the multi-evidence approach. The study is implemented with WorldView-2 multispectral imagery acquired for a study area containing 10 crop classes. The proposed
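    A minimal sketch of the centroid-distance idea behind the mislabel-removal step is given below; the feature arrays and the keep fraction are illustrative assumptions, and the actual ECRA scheme iterates this over spectral, texture and index domains within the ensemble.

```python
# Sketch of centroid-based mislabel removal: within each declared crop class,
# drop samples whose feature vector lies far from the class centroid.
import numpy as np

def refine_class(features, keep_fraction=0.9):
    """Keep the keep_fraction of samples closest to the class centroid."""
    centroid = features.mean(axis=0)
    dist = np.linalg.norm(features - centroid, axis=1)
    cutoff = np.quantile(dist, keep_fraction)
    return features[dist <= cutoff]

rng = np.random.default_rng(0)
wheat = rng.normal(loc=0.0, scale=1.0, size=(200, 8))   # declared "wheat" pixels
wheat[:10] += 6.0                                        # simulated mislabels
print(len(refine_class(wheat)))                          # mislabels mostly removed
```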

  10. Strain measurement based on laser mark automatic tracking line mark on specimen

    NASA Astrophysics Data System (ADS)

    Tian, Qiuhong; Sun, Zhengrong; Le, Zhongping; Liu, Yanna; Zhang, Lijian; Xie, Sendong

    2014-12-01

    Conventional video extensometers, which use a measurement mark on the specimen to obtain material strain, have a problem with deformation of the measurement mark. Therefore, the accurate position of the measurement mark is difficult to evaluate, and measurement accuracy is limited. To solve this problem, a strain measurement method based on a laser mark automatically tracking a line mark on the specimen is proposed. This method uses an undeformed laser mark in place of the line mark to calculate the specimen strain, eliminating the measurement error induced by the deformation of specimen marks. The positions of the laser mark and the line mark are obtained using digital image processing. Automatic tracking is realized by means of intelligent motor control, and the strain of the specimen is obtained by analyzing the movement trace of the laser mark. A video extensometer experimental setup based on the proposed method was constructed and two experiments were carried out. The first experiment verified the validity and repeatability of the method via tensile testing of low-carbon steel and cast iron specimens. The second demonstrated the high measurement accuracy of the method by comparison with a clip-on extensometer.
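    As a rough illustration of the underlying measurement, engineering strain is the change in gauge length divided by the original gauge length, with the gauge length defined by tracked mark positions; the positions below are made-up values standing in for tracked coordinates.

```python
# Engineering strain from tracked mark positions along the loading axis.
# Positions (in mm) are hypothetical stand-ins for the coordinates returned
# by the image-processing stage.
def engineering_strain(pos_top, pos_bottom, pos_top0, pos_bottom0):
    gauge0 = abs(pos_top0 - pos_bottom0)   # initial gauge length
    gauge = abs(pos_top - pos_bottom)      # current gauge length
    return (gauge - gauge0) / gauge0

print(engineering_strain(52.2, 0.0, 50.0, 0.0))   # 0.044 strain
```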

  11. Identification of forensic samples by using an infrared-based automatic DNA sequencer.

    PubMed

    Ricci, Ugo; Sani, Ilaria; Klintschar, Michael; Cerri, Nicoletta; De Ferrari, Francesco; Giovannucci Uzielli, Maria Luisa

    2003-06-01

    We have recently introduced a new protocol for analyzing all core loci of the Federal Bureau of Investigation's (FBI) Combined DNA Index System (CODIS) with an infrared (IR) automatic DNA sequencer (LI-COR 4200). The amplicons were labeled with forward oligonucleotide primers, covalently linked to a new infrared fluorescent molecule (IRDye 800). The alleles were displayed as familiar autoradiogram-like images with real-time detection. This protocol was employed for paternity testing, population studies, and identification of degraded forensic samples. We extensively analyzed some simulated forensic samples and mixed stains (blood, semen, saliva, bones, and fixed archival embedded tissues), comparing the results with donor samples. Sensitivity studies were also performed for the four multiplex systems. Our results show the efficiency, reliability, and accuracy of the IR system for the analysis of forensic samples. We also compared the efficiency of the multiplex protocol with ultraviolet (UV) technology. Paternity tests, undegraded DNA samples, and real forensic samples were analyzed with this approach based on IR technology and with UV-based automatic sequencers in combination with commercially-available kits. The comparability of the results with the widespread UV methods suggests that it is possible to exchange data between laboratories using the same core group of markers but different primer sets and detection methods.

  12. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In these methods, the focus is usually on improving road network detection, while precise road delineation has received less attention. In this paper, we propose a new unsupervised fully-automatic road extraction method based on the integration of high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene and then precisely delineates them. The Hough transform is applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The road edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with high accuracy.
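    A minimal sketch of the two core ingredients named here, PCA fusion of co-registered LiDAR and aerial rasters followed by straight-line extraction with a Hough transform, is given below. The array contents and the simple thresholding step are illustrative assumptions, not the authors' exact pipeline.

```python
# Fuse co-registered LiDAR height and aerial intensity rasters with PCA,
# then extract straight candidate lines with a Hough transform.
import numpy as np
from sklearn.decomposition import PCA
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(1)
h, w = 256, 256
lidar = rng.normal(size=(h, w))      # stand-in for a LiDAR height raster
aerial = rng.normal(size=(h, w))     # stand-in for an aerial intensity raster
aerial[:, 120:126] += 5.0            # a bright, road-like vertical stripe

# PCA over the two bands: each pixel is a 2-vector (lidar, aerial).
stack = np.stack([lidar.ravel(), aerial.ravel()], axis=1)
pc1 = PCA(n_components=1).fit_transform(stack).reshape(h, w)

# Simple threshold on the first principal component, then Hough lines.
dev = np.abs(pc1 - pc1.mean())
edges = dev > np.percentile(dev, 98)
hspace, angles, dists = hough_line(edges)
_, best_angles, best_dists = hough_line_peaks(hspace, angles, dists, num_peaks=3)
print(np.degrees(best_angles), best_dists)
```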

  13. Automatic active contour-based segmentation and classification of carotid artery ultrasound images.

    PubMed

    Chaudhry, Asmatullah; Hassan, Mehdi; Khan, Asifullah; Kim, Jin Young

    2013-12-01

    In this paper, we present an automatic image segmentation and classification technique for carotid artery ultrasound images based on an active contour approach. For early detection of plaque in the carotid artery, to avoid serious strokes, active contour-based techniques have been applied successfully to segment carotid artery ultrasound images. Ultrasound images may also be affected by rotation, scaling, or translation during the acquisition process; in view of these facts, image alignment is used as a preprocessing step to align the carotid artery ultrasound images. In our experimental study, we exploit the intima-media thickness (IMT) measurement to detect the presence of plaque in the artery. Support vector machine (SVM) classification is employed on these segmented images to distinguish normal from diseased artery images, with the IMT measurement used to form the feature vector. Our proposed approach segments the carotid artery images automatically and further classifies them using the SVM. Experimental results show the learning capability of the SVM classifier and validate the usefulness of our proposed approach. Moreover, the proposed approach needs minimal user interaction for early detection of plaque in the carotid artery. Regarding its usefulness in healthcare, it can be effectively used in remote areas as a preliminary clinical step even in the absence of highly skilled radiologists.
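    A toy sketch of the final classification stage as described (an SVM over IMT-derived features) follows; the feature values are invented for illustration and do not reflect the paper's data.

```python
# Train an SVM to separate normal vs. diseased arteries from
# intima-media thickness (IMT) derived features. Data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
imt_normal = rng.normal(0.6, 0.08, size=(50, 1))    # mm, hypothetical
imt_diseased = rng.normal(1.1, 0.15, size=(50, 1))  # mm, hypothetical
X = np.vstack([imt_normal, imt_diseased])
y = np.array([0] * 50 + [1] * 50)                   # 0 = normal, 1 = diseased

clf = SVC(kernel="rbf", gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())      # cross-validated accuracy
```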

  14. Automatic information timeliness assessment of diabetes web sites by evidence based medicine.

    PubMed

    Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba

    2014-11-01

    Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework which automatically assesses the timeliness of the content of health websites using evidence-based medicine. Our aim is to assess the accordance of website contents with the current literature and information timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health web sites archived between 2006 and 2013 by Archive-it, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals, respectively. Information seekers and web site owners may benefit from the proposed framework in finding relevant and up-to-date diabetes web sites.

  15. Perception-based automatic classification of impulsive-source active sonar echoes.

    PubMed

    Young, Victor W; Hines, Paul C

    2007-09-01

    Impulsive-source active sonar systems are often plagued by false alarm echoes resulting from the presence of naturally occurring clutter objects in the environment. Sonar performance could be improved by a technique for discriminating between echoes from true targets and echoes from clutter. Motivated by anecdotal evidence that target echoes sound very different than clutter echoes when auditioned by a human operator, this paper describes the implementation of an automatic classifier for impulsive-source active sonar echoes that is based on perceptual signal features that have been previously identified in the musical acoustics literature as underlying timbre. Perceptual signal features found in this paper to be particularly useful to the problem of active sonar classification include: the centroid and peak value of the perceptual loudness function, as well as several features based on subband attack and decay times. This paper uses subsets of these perceptual signal features to train and test an automatic classifier capable of discriminating between target and clutter echoes with an equal error rate of roughly 10%; the area under the receiver operating characteristic curve corresponding to this classifier is found to be 0.975.

  16. Automatic evaluation of hypernasality based on a cleft palate speech database.

    PubMed

    He, Ling; Zhang, Jing; Liu, Qi; Yin, Heng; Lech, Margaret; Huang, Yunzhi

    2015-05-01

    Hypernasality is one of the most typical characteristics of cleft palate (CP) speech. The outcome of hypernasality grading decides the necessity of follow-up surgery. Currently, the evaluation of CP speech is carried out by experienced speech therapists; however, the result strongly depends on their clinical experience and subjective judgment. This work proposes an automatic evaluation system for hypernasality grading in CP speech. The database tested in this work was collected by the Hospital of Stomatology, Sichuan University, which has the largest number of CP patients in China. Based on the production process of hypernasality, source sound pulse and vocal tract filter features are presented. These features include pitch, the first and second energy-amplified frequency bands, cepstrum-based features, MFCCs, and short-time energy in sub-band features. These features, combined with a KNN classifier, are applied to automatically classify four grades of hypernasality: normal, mild, moderate and severe. The experimental results show that the proposed system achieves good performance, with the classification rate over the four hypernasality grades reaching 80.4%. The sensitivity of the proposed features to gender is also discussed. PMID:25814462
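    A minimal sketch of an MFCC-plus-KNN grading pipeline of the kind described follows. It assumes the librosa package is available and uses randomly generated audio in place of the clinical recordings, so the labels and accuracy are meaningless placeholders.

```python
# MFCC features + KNN classifier for 4-class hypernasality grading.
# Audio here is random noise standing in for real CP speech recordings.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(signal, sr=16000, n_mfcc=13):
    """Mean MFCC vector over the utterance."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(3)
X = np.array([mfcc_features(rng.normal(size=16000).astype(np.float32))
              for _ in range(40)])
y = rng.integers(0, 4, size=40)          # grades: 0 normal .. 3 severe (fake)

knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], y[:30])
print(knn.score(X[30:], y[30:]))         # placeholder accuracy on fake data
```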

  17. Automatic decision support system based on SAR data for oil spill detection

    NASA Astrophysics Data System (ADS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Rodríguez, Pablo G.; Caro, Andrés

    2014-11-01

    Global trade is mainly supported by maritime transport, which generates important pollution problems. Thus, effective surveillance and intervention means are necessary to ensure a proper response to environmental emergencies. Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillages on the ocean surface, and several decision support systems have been based on this technology. This paper presents an automatic oil spill detection system based on SAR data which was developed on the basis of confirmed spillages and adapted to an important international shipping route off the Galician coast (northwest Iberian Peninsula). The system is supported by an adaptive segmentation process based on wind data as well as a shape-oriented characterization algorithm. Moreover, two classifiers were developed and compared: image testing revealed up to 95.1% candidate labeling accuracy. Shared-memory parallel programming techniques were used to develop the algorithms, reducing the system processing time by more than 25%.

  18. Modeling automatic threat detection: development of a face-in-the-crowd task.

    PubMed

    Schmidt-Daffy, Martin

    2011-02-01

    An angry face is expected to be detected faster than a happy face because of an early, stimulus-driven analysis of threat-related properties. However, it is unclear to what extent results from the visual search approach (the face-in-the-crowd task) mirror this automatic analysis. The paper outlines a model of automatic threat detection that combines the assumption of a neuronal system for threat detection with contemporary theories of visual search. The model served as a guideline for the development of a new face-in-the-crowd task. The development involved three preliminary studies that provided a basis for the selection of angry and happy facial stimuli resembling each other with respect to perceptibility, homogeneity, and intensity. With these stimuli, a signal detection version of the search task was designed and tested. For crowds composed of neutral faces, the sensitivity measure d' confirmed the expected detection advantage of angry faces compared to happy faces. However, the emotional expression made no difference if a neutral face had to be detected in a crowd composed of either angry or happy faces. The results are in line with the assumption of a stimulus-driven shift of attention giving rise to the superior detection of angry target faces.

  19. Automatic representation of urban terrain models for simulations on the example of VBS2

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Solbrig, Peter; Wernerus, Peter

    2014-10-01

    Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm and precise, especially under the conditions of asymmetric warfare, non-cooperative urban terrain and rapidly developing situations. Going through the mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and ultimately save lives. This work illustrates the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battle Space 2, a simulation system created by Bohemia Interactive System. The topographic object types that we are able to export into this simulation engine are either results of the sensor data evaluation (buildings, trees, grass, and ground), which is done fully automatically, or entities obtained from publicly available sources (streets and water areas), which can be converted into the system-proper format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow for automatic extraction of doors and windows from laser point clouds captured from building walls and thus increase the level of detail of building models. As a consequence, it is important to simulate these animatable entities. In doing so, we are able to make some of the buildings in the simulation accessible.

  20. Towards an automatic statistical model for seasonal precipitation prediction and its application to Central and South Asian headwater catchments

    NASA Astrophysics Data System (ADS)

    Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence, statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site-specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model which is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with subsequent precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables, taking varying lead times into consideration. Significantly correlated grid cells are subsequently aggregated into predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is automatically calibrated and evaluated using the previously generated predictor variables. The model is applied and evaluated, as an example, for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, for the monsoonal precipitation amounts in the South Asian target areas, a certain skill of the model could
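    A compact sketch of two of the automated steps named here, cellwise correlation screening of a gridded predictor field against the target precipitation anomaly followed by a random-forest forecast on an aggregated predictor, is shown below. The grid shapes, thresholds and simple mean aggregation are illustrative assumptions, not the authors' clustering scheme.

```python
# 1) Screen gridded cells by correlation with the precipitation anomaly,
# 2) aggregate correlated cells into one predictor, 3) fit a random forest.
# All data are synthetic stand-ins for SST fields and catchment precipitation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
years, ny, nx = 40, 20, 30
sst = rng.normal(size=(years, ny, nx))              # gridded predictor field
precip = 0.8 * sst[:, 5, 7] + rng.normal(scale=0.5, size=years)  # target anomaly

# Cellwise Pearson correlation between each grid cell and the target series.
sst_flat = sst.reshape(years, -1)
corr = np.array([np.corrcoef(sst_flat[:, i], precip)[0, 1]
                 for i in range(sst_flat.shape[1])])

# Keep strongly correlated cells and aggregate them into one predictor region.
selected = np.abs(corr) > 0.5
predictor = sst_flat[:, selected].mean(axis=1, keepdims=True)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(predictor[:30], precip[:30])
print(np.corrcoef(model.predict(predictor[30:]), precip[30:])[0, 1])
```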

  1. Wavelet-based automatic determination of the P- and S-wave arrivals

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.

    2013-12-01

    The detection of P- and S-wave arrivals is important for a variety of seismological applications, including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform, which allows examination of a signal in both the time and frequency domains. Unlike the Fourier transform, the basis functions are localized in time and frequency; therefore, wavelet decomposition is suitable for the analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
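    A rough sketch of a CWT-based onset picker in the spirit described (continuous wavelet transform of the vertical component, then a characteristic function from the coefficient energy) is shown below. It assumes the PyWavelets package and a synthetic trace; the simple energy-jump pick is an illustrative stand-in for the authors' picking and uncertainty logic.

```python
# Continuous wavelet transform of a synthetic vertical-component trace,
# followed by a crude onset pick from the jump in coefficient energy.
import numpy as np
import pywt

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
trace = 0.05 * np.random.default_rng(5).normal(size=t.size)   # noise
onset = 8.0                                  # true P onset (s), synthetic
mask = t >= onset
trace[mask] += np.sin(2 * np.pi * 5 * (t[mask] - onset)) * np.exp(-(t[mask] - onset))

scales = np.arange(1, 64)
coeffs, _ = pywt.cwt(trace, scales, "morl", sampling_period=1 / fs)

# Characteristic function: summed squared coefficients over scales.
cf = (coeffs ** 2).sum(axis=0)
noise_part = cf[: int(5 * fs)]               # pre-onset noise window
threshold = noise_part.mean() + 8 * noise_part.std()
pick = t[np.argmax(cf > threshold)]
print(f"picked onset ~ {pick:.2f} s (true {onset} s)")
```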

  2. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

    Calcium burden determined from CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using image information alone, or whether a combination of image and subject data is needed. A set of 3559 male subjects undergoing the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from the image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after imaging were used as the outcome. Support vector machine classification was performed with different feature sets: image features alone, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of a CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.

  3. Clinical evaluation of semi-automatic landmark-based lesion tracking software for CT-scans

    PubMed Central

    2014-01-01

    Background: To evaluate a semi-automatic landmark-based lesion tracking software enabling navigation between RECIST lesions in baseline and follow-up CT-scans. Methods: The software automatically detects 44 stable anatomical landmarks in each thoraco/abdominal/pelvic CT-scan, sets up a patient-specific coordinate system and cross-links the coordinate systems of consecutive CT-scans. The accuracy of the software was evaluated on 96 RECIST lesions (target and non-target lesions) in baseline and follow-up CT-scans of 32 oncologic patients (64 CT-scans). Patients had to present at least one thoracic, one abdominal and one pelvic RECIST lesion. Three radiologists determined, in consensus, the deviation between the lesion centre and the software's navigation result. Results: The initial mean runtime of the system to synchronize baseline and follow-up examinations was 19.4 ± 1.2 seconds, with subsequent navigation to corresponding RECIST lesions performed in real time. The mean vector length of the deviations between the lesion centres and the semi-automatic navigation results was 10.2 ± 5.1 mm, without a substantial systematic error in any direction. The mean deviation was 5.4 ± 4.0 mm in the cranio-caudal dimension, 5.2 ± 3.9 mm in the lateral dimension and 5.3 ± 4.0 mm in the ventro-dorsal dimension. Conclusion: The investigated software accurately and reliably navigates between lesions in consecutive CT-scans in real time, potentially accelerating and facilitating cancer staging. PMID:25609496
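    For clarity, the reported deviation is simply the Euclidean length of the 3D offset between the lesion centre and the navigated position, averaged over lesions; a small sketch with made-up coordinates follows.

```python
# Mean 3D deviation (vector length) between lesion centres and navigation
# results, plus per-axis deviations. Coordinates (mm) are made up.
import numpy as np

lesion_centres = np.array([[10.0, 22.0, 5.0], [48.0, 11.0, 30.0]])
navigated = np.array([[14.0, 18.0, 8.0], [43.0, 16.0, 26.0]])

offsets = navigated - lesion_centres
vector_lengths = np.linalg.norm(offsets, axis=1)
print("mean vector length [mm]:", vector_lengths.mean())
print("mean per-axis deviation [mm]:", np.abs(offsets).mean(axis=0))
```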

  4. Automatic NMR-based identification of chemical reaction types in mixtures of co-occurring reactions.

    PubMed

    Latino, Diogo A R S; Aires-de-Sousa, João

    2014-01-01

    The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications can range from process monitoring, metabolic profiling and authentication of products to quality control. One application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the (1)H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants), and the same was done for the products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of six types. From this dataset, we simulated mixtures in which two reactions of different types occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the (1)H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications was increased to 99% and 80% for the same test sets. The RF probability associated with the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data. Using no explicit structural information about the reaction participants, reaction elucidation is performed without structure elucidation of
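    A toy sketch of the descriptor-plus-classifier idea (binned product-minus-reactant spectra fed to a random forest) is given below; the binning, spectra and labels are synthetic placeholders, not the paper's data.

```python
# Encode a reaction mixture as the difference between binned 1H NMR spectra
# of products and reactants, then classify reaction types with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n_bins, n_mixtures = 300, 120

def fake_spectrum(peak_bin):
    """Synthetic binned spectrum with one dominant peak (placeholder)."""
    s = rng.random(n_bins) * 0.05
    s[peak_bin] += 1.0
    return s

labels = rng.integers(0, 6, size=n_mixtures)        # 6 reaction types (fake)
X = np.array([fake_spectrum(60 + 20 * lab) - fake_spectrum(10)  # product - reactant
              for lab in labels])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X[:90], labels[:90])
print(rf.score(X[90:], labels[90:]))                # accuracy on held-out fakes
```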

  5. Automatic Generation of 3D Caricatures Based on Artistic Deformation Styles.

    PubMed

    Clarke, Lyndsey; Chen, Min; Mora, Benjamin

    2011-06-01

    Caricatures are a form of humorous visual art, usually created by skilled artists for the intention of amusement and entertainment. In this paper, we present a novel approach for automatic generation of digital caricatures from facial photographs, which capture artistic deformation styles from hand-drawn caricatures. We introduced a pseudo stress-strain model to encode the parameters of an artistic deformation style using "virtual" physical and material properties. We have also developed a software system for performing the caricaturistic deformation in 3D which eliminates the undesirable artifacts in 2D caricaturization. We employed a Multilevel Free-Form Deformation (MFFD) technique to optimize a 3D head model reconstructed from an input facial photograph, and for controlling the caricaturistic deformation. Our results demonstrated the effectiveness and usability of the proposed approach, which allows ordinary users to apply the captured and stored deformation styles to a variety of facial photographs.

  6. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    NASA Astrophysics Data System (ADS)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow patterns and fatigue of the vessel wall and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools for modeling the individual hemodynamics and biomechanics, as well as their interaction, in the human arteries. The computations allow patient-specific physical parameters of the blood flow and the vessel wall, needed for efficient minimally invasive treatment, to be simulated non-invasively. The numerical simulations are based on the Finite Element Method (FEM) and require exact, individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governing the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models, providing physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  7. Automatic atlas-based three-label cartilage segmentation from MR knee images

    PubMed Central

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2016-01-01

    Osteoarthritis (OA) is the most common form of joint disease and is often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise for reliable, high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue, and the difficulty of locating cartilage interfaces, for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare with other cartilage segmentation approaches, we also validate on the 50 images of the SKI10 dataset. PMID:25128683

  8. Markov random field based automatic alignment for low SNR imagesfor cryo electron tomography

    SciTech Connect

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R.; Elidan, Gal; Horowitz, Mark

    2007-07-21

    We present a method for automatic full precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and x-ray datasets.

  9. Toward improved calibration of hydrologic models: Combining the strengths of manual and automatic methods

    NASA Astrophysics Data System (ADS)

    Boyle, Douglas P.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2000-12-01

    Automatic methods for model calibration seek to take advantage of the speed and power of digital computers, while being objective and relatively easy to implement. However, they do not provide parameter estimates and hydrograph simulations that are considered acceptable by the hydrologists responsible for operational forecasting and have therefore not entered into widespread use. In contrast, the manual approach which has been developed and refined over the years to result in excellent model calibrations is complicated and highly labor-intensive, and the expertise acquired by one individual with a specific model is not easily transferred to another person (or model). In this paper, we propose a hybrid approach that combines the strengths of each. A multicriteria formulation is used to "model" the evaluation techniques and strategies used in manual calibration, and the resulting optimization problem is solved by means of a computerized algorithm. The new approach provides a stronger test of model performance than methods that use a single overall statistic to aggregate model errors over a large range of hydrologic behaviors. The power of the new approach is illustrated by means of a case study using the Sacramento Soil Moisture Accounting model.

  10. A stochastic model for automatic extraction of 3D neuronal morphology.

    PubMed

    Basu, Sreetama; Kulikova, Maria; Zhizhina, Elena; Ooi, Wei Tsang; Racoceanu, Daniel

    2013-01-01

    Tubular structures are frequently encountered in bio-medical images. The center-lines of these tubules provide an accurate representation of the topology of the structures. We introduce a stochastic Marked Point Process framework for fully automatic extraction of tubular structures requiring no user interaction or seed points for initialization. Our Marked Point Process model enables unsupervised network extraction by fitting a configuration of objects with globally optimal associated energy to the centreline of the arbors. For this purpose we propose special configurations of marked objects and an energy function well adapted for detection of 3D tubular branches. The optimization of the energy function is achieved by a stochastic, discrete-time multiple birth and death dynamics. Our method finds the centreline, local width and orientation of neuronal arbors and identifies critical nodes like bifurcations and terminals. The proposed model is tested on 3D light microscopy images from the DIADEM data set with promising results. PMID:24505691

  11. Automatic target recognition algorithm based on statistical dispersion of infrared multispectral image

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Cao, Le-lin; Wu, Chun-feng; Hou, Qing-yu

    2009-07-01

    A novel automatic target recognition algorithm based on the statistical dispersion of infrared multispectral images (SDOIMI) is proposed. First, the infrared multispectral characteristic matrix of the scenario is constructed based on infrared multispectral characteristic information (such as radiation intensity and spectral distribution) of targets, background and decoys. Then the infrared multispectral characteristic matrix of the targets is reconstructed after segmenting the image by the maximum distance method and fusing spatial and spectral information. Finally, a statistical dispersion of infrared multispectral images (SDOIMI) recognition criterion is formulated in terms of the spectral radiation differences of the targets of interest. In the simulation, nine sub-band multispectral images of a real ship target and of a shipborne aerosol infrared decoy modulated by a laser to simulate the ship's geometric appearance are obtained using spectral radiation curves. The digital simulation results verify that the algorithm is effective and feasible.

  12. Automatic classification of skin lesions using color mathematical morphology-based texture descriptors

    NASA Astrophysics Data System (ADS)

    Gonzalez-Castro, Victor; Debayle, Johan; Wazaefi, Yanal; Rahim, Mehdi; Gaudy-Marqueste, Caroline; Grob, Jean-Jacques; Fertil, Bernard

    2015-04-01

    In this paper an automatic classification method for skin lesions from dermoscopic images is proposed. The method relies on color texture analysis using both color mathematical morphology and Kohonen Self-Organizing Maps (SOM), and it does not need any previous segmentation process. More concretely, mathematical morphology is used to compute a local descriptor for each pixel of the image, while the SOM is used to cluster these descriptors and thus create the texture descriptor of the global image. Two approaches are proposed, depending on whether the pixel descriptor is computed using classical (i.e. spatially invariant) or adaptive (i.e. spatially variant) mathematical morphology by means of the Color Adaptive Neighborhoods (CANs) framework. Both approaches obtained similar areas under the ROC curve (AUC), 0.854 and 0.859, outperforming the AUC built upon dermatologists' predictions (0.792).

  13. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set from the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, which is consequently utilized for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation-energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as an injected sparsity that minimizes the data used to represent the skeletal structure information associated with the set of targets under consideration.
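    A small sketch of the generic first step described here, diagonalizing a data covariance matrix and keeping the leading eigenvectors as a truncated basis, is given below. The data and the number of retained components are arbitrary, and the transform-domain filtering and composite-filter stages are not reproduced.

```python
# Diagonalize the data covariance matrix and truncate to the leading
# eigenvectors, then project the data onto that reduced basis.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=(200, 64))            # 200 samples, 64-dim features

cov = np.cov(data, rowvar=False)             # 64 x 64 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues

k = 8                                        # number of retained components
basis = eigvecs[:, -k:]                      # leading k eigenvectors
projected = (data - data.mean(axis=0)) @ basis
reconstructed = projected @ basis.T + data.mean(axis=0)
print(projected.shape,
      np.linalg.norm(data - reconstructed) / np.linalg.norm(data))
```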

  14. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical-morphology black top-hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate the MAs. The selection of the feature vector and of the SVM kernel function is very important for the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic-polynomial SVM with a combination of features as input shows the best discriminating performance.
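    A toy sketch of two of the pieces named here, black top-hat candidate enhancement with scikit-image and ROC evaluation of an SVM over candidate features, follows. The synthetic image, footprint size and features are illustrative assumptions.

```python
# Black top-hat enhancement of small dark blobs (MA-like candidates) and a
# ROC evaluation of an SVM over toy candidate features. Data are synthetic.
import numpy as np
from skimage.morphology import black_tophat, disk
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)

# Synthetic fundus-like patch: bright background with small dark spots.
img = 0.8 + 0.02 * rng.normal(size=(128, 128))
for r, c in rng.integers(10, 118, size=(15, 2)):
    img[r - 1:r + 2, c - 1:c + 2] -= 0.3
enhanced = black_tophat(img, disk(5))        # dark spots become bright peaks
print("max top-hat response:", enhanced.max())

# Toy candidate features (e.g. peak response, size) and SVM ROC analysis.
X = np.vstack([rng.normal(1.0, 0.3, size=(60, 2)),     # true MAs (fake)
               rng.normal(0.3, 0.3, size=(60, 2))])    # spurious candidates
y = np.array([1] * 60 + [0] * 60)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
clf = SVC(kernel="poly", degree=2, probability=True,
          random_state=0).fit(X[:80], y[:80])
scores = clf.predict_proba(X[80:])[:, 1]
print("AUC:", roc_auc_score(y[80:], scores))
```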

  15. Low-complexity PDE-based approach for automatic microarray image processing.

    PubMed

    Belean, Bogdan; Terebes, Romulus; Bot, Adrian

    2015-02-01

    Microarray image processing is known as a valuable tool for gene expression estimation, a crucial step in understanding biological processes within living organisms. Automation and reliability are open subjects in microarray image processing, where grid alignment and spot segmentation are essential processes that can influence the quality of gene expression information. The paper proposes a novel partial differential equation (PDE)-based approach for fully automatic grid alignment in case of microarray images. Our approach can handle image distortions and performs grid alignment using the vertical and horizontal luminance function profiles. These profiles are evolved using a hyperbolic shock filter PDE and then refined using the autocorrelation function. The results are compared with the ones delivered by state-of-the-art approaches for grid alignment in terms of accuracy and computational complexity. Using the same PDE formalism and curve fitting, automatic spot segmentation is achieved and visual results are presented. Considering microarray images with different spots layouts, reliable results in terms of accuracy and reduced computational complexity are achieved, compared with existing software platforms and state-of-the-art methods for microarray image processing.

  16. ICA based automatic segmentation of dynamic H(2)(15)O cardiac PET images.

    PubMed

    Margadán-Méndez, Margarita; Juslin, Anu; Nesterov, Sergey V; Kalliokoski, Kari; Knuuti, Juhani; Ruotsalainen, Ulla

    2010-05-01

    In this study, we applied an iterative independent component analysis (ICA) method for the separation of cardiac tissue components (myocardium, and right and left ventricles) from dynamic positron emission tomography (PET) images. Previous phantom and animal studies have shown that ICA separation extracts the cardiac structures accurately. Our goal in this study was to investigate the methodology with human studies. The ICA-separated cardiac structures were used to calculate the myocardial perfusion in two different cases: 1) the regions of interest were drawn manually on the ICA-separated component images, and 2) the volumes of interest (VOI) were automatically segmented from the component images. For the whole myocardium, the perfusion values of 25 rest and six drug-induced stress studies obtained with these methods were compared to the values from the manually drawn regions of interest on differential images. The separation of the rest and stress studies using ICA-based methods was successful in all cases. The visualization of the cardiac structures from H(2)(15)O PET studies was improved with the ICA separation. The automatic segmentation of the VOI also appeared to be feasible. PMID:19273031
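    A minimal sketch of the core separation step, ICA applied to a dynamic image series treated as a mixture of tissue time-activity components, follows, using scikit-learn's FastICA on synthetic data. The actual method is an iterative ICA on H(2)(15)O PET volumes, so this is only an analogy.

```python
# Separate synthetic "tissue components" from a dynamic image series with ICA.
# Each frame is modelled as a mixture of spatial components weighted by
# component-specific time-activity curves (all synthetic).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
n_frames, n_pixels = 40, 32 * 32
t = np.linspace(0, 1, n_frames)

# Two spatial components (e.g. "myocardium" and "ventricle") and their TACs.
comp = (rng.random((2, n_pixels)) < 0.2).astype(float)   # sparse spatial masks
tac = np.vstack([np.exp(-3 * t), 1 - np.exp(-5 * t)])    # time-activity curves

frames = tac.T @ comp + 0.05 * rng.normal(size=(n_frames, n_pixels))

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(frames.T)         # pixels x components
print(sources.shape)                          # spatial component images
```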

  17. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis

    NASA Astrophysics Data System (ADS)

    Vos, P. C.; Barentsz, J. O.; Karssemeijer, N.; Huisman, H. J.

    2012-03-01

    In this paper, a fully automatic computer-aided detection (CAD) method is proposed for the detection of prostate cancer. The CAD method consists of multiple sequential steps in order to detect locations that are suspicious for prostate cancer. In the initial stage, a voxel classification is performed using a Hessian-based blob detection algorithm at multiple scales on an apparent diffusion coefficient map. Next, a parametric multi-object segmentation method is applied and the resulting segmentation is used as a mask to restrict the candidate detection to the prostate. The remaining candidates are characterized by performing histogram analysis on multiparametric MR images. The resulting feature set is summarized into a malignancy likelihood by a supervised classifier in a two-stage classification approach. The detection performance for prostate cancer was tested on a screening population of 200 consecutive patients and evaluated using the free response operating characteristic methodology. The results show that the CAD method obtained sensitivities of 0.41, 0.65 and 0.74 at false positive (FP) levels of 1, 3 and 5 per patient, respectively. In conclusion, this study showed that it is feasible to automatically detect prostate cancer at a FP rate lower than systematic biopsy. The CAD method may assist the radiologist to detect prostate cancer locations and could potentially guide biopsy towards the most aggressive part of the tumour.

  18. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis.

    PubMed

    Vos, P C; Barentsz, J O; Karssemeijer, N; Huisman, H J

    2012-03-21

    In this paper, a fully automatic computer-aided detection (CAD) method is proposed for the detection of prostate cancer. The CAD method consists of multiple sequential steps in order to detect locations that are suspicious for prostate cancer. In the initial stage, a voxel classification is performed using a Hessian-based blob detection algorithm at multiple scales on an apparent diffusion coefficient map. Next, a parametric multi-object segmentation method is applied and the resulting segmentation is used as a mask to restrict the candidate detection to the prostate. The remaining candidates are characterized by performing histogram analysis on multiparametric MR images. The resulting feature set is summarized into a malignancy likelihood by a supervised classifier in a two-stage classification approach. The detection performance for prostate cancer was tested on a screening population of 200 consecutive patients and evaluated using the free response operating characteristic methodology. The results show that the CAD method obtained sensitivities of 0.41, 0.65 and 0.74 at false positive (FP) levels of 1, 3 and 5 per patient, respectively. In conclusion, this study showed that it is feasible to automatically detect prostate cancer at a FP rate lower than systematic biopsy. The CAD method may assist the radiologist to detect prostate cancer locations and could potentially guide biopsy towards the most aggressive part of the tumour.
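    A small sketch of the first stage described in the two records above, Hessian-based blob detection at multiple scales on an ADC-like map, is given below using scikit-image's determinant-of-Hessian detector on a synthetic 2D image. The parameters and the synthetic map are illustrative, and the later segmentation and classification stages are not reproduced.

```python
# Multi-scale Hessian-based (determinant of Hessian) blob detection on a
# synthetic ADC-like map with a few dark, lesion-like blobs.
import numpy as np
from skimage.feature import blob_doh

rng = np.random.default_rng(10)
adc = 1.0 + 0.02 * rng.normal(size=(256, 256))     # homogeneous background

yy, xx = np.mgrid[0:256, 0:256]
for cy, cx, r in [(60, 80, 6), (150, 200, 10), (200, 60, 8)]:
    adc -= 0.5 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * r ** 2))

# blob_doh responds to bright blobs, so invert the map before detection.
blobs = blob_doh(adc.max() - adc, min_sigma=3, max_sigma=15, threshold=0.01)
print(blobs)                                       # rows of (y, x, sigma)
```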

  19. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focuses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, because people move freely through areas which cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge into features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross-validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  20. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing is a method, recently proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful for text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used to test the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.
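    A compact sketch of the combination described, an LSI-style low-rank factorization of a term-document matrix feeding a small neural-network classifier, follows using scikit-learn. The tiny corpus, labels and dimensions are invented for illustration.

```python
# LSI-style dimensionality reduction of a term-document matrix (truncated SVD)
# feeding a small neural-network text classifier. Corpus and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "oil spill detected off the coast by radar",
    "radar image shows a tanker route and slicks",
    "new neural network improves speech recognition",
    "speech model trained on conversational audio",
]
labels = [0, 0, 1, 1]     # 0 = remote sensing, 1 = speech (toy classes)

model = make_pipeline(
    TfidfVectorizer(),                 # term-document matrix
    TruncatedSVD(n_components=2),      # LSI-style latent factors
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["radar survey of a coastal oil slick"]))
```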

  1. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized-JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixels and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimensions of this reference are scaled according to the ratio between the head's dimensions and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.

  2. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified into two groups according to the nature of their output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images selected from the original video that have significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization, histogram techniques, or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced, simulating human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
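    A toy sketch of the selection rule implied here, computing a per-frame saliency map, measuring its change between consecutive frames, and keeping frames where the change peaks, is shown below. The "saliency" used is just a gradient-magnitude stand-in for the three-channel attention model.

```python
# Pick key-frames where the frame-to-frame change in a (stand-in) saliency
# map is largest. Frames are synthetic; gradient magnitude replaces the
# intensity/color/temporal attention channels of the paper.
import numpy as np

def saliency(frame):
    """Very crude saliency proxy: gradient magnitude of the frame."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

rng = np.random.default_rng(11)
frames = [rng.random((64, 64)) * 0.1 for _ in range(30)]
for k in (10, 20):                          # simulate two abrupt content changes
    frames[k] = frames[k] + rng.random((64, 64))

variation = [np.abs(saliency(frames[i + 1]) - saliency(frames[i])).mean()
             for i in range(len(frames) - 1)]
key_frames = np.argsort(variation)[-4:] + 1  # frames after the largest changes
print(sorted(key_frames.tolist()))
```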

  3. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  4. Group-wise automatic mesh-based analysis of cortical thickness

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Cody Hazlett, Heather; Niethammer, Marc; Oguz, Ipek; Cates, Joshua; Whitaker, Ross; Piven, Joseph; Styner, Martin

    2011-03-01

    The analysis of neuroimaging data from pediatric populations presents several challenges. There are normal variations in brain shape from infancy to adulthood and normal developmental changes related to tissue maturation. Measurement of cortical thickness is one important way to analyze such developmental tissue changes. We developed a novel framework that allows group-wise automatic mesh-based analysis of cortical thickness. Our approach is divided into four main parts. First an individual pre-processing pipeline is applied on each subject to create genus-zero inflated white matter cortical surfaces with cortical thickness measurements. The second part performs an entropy-based group-wise shape correspondence on these meshes using a particle system, which establishes a trade-off between an even sampling of the cortical surfaces and the similarity of corresponding points across the population using sulcal depth information and spatial proximity. A novel automatic initial particle sampling is performed using a matched 98-lobe parcellation map prior to a particle-splitting phase. Third, corresponding re-sampled surfaces are computed with interpolated cortical thickness measurements, which are finally analyzed via a statistical vertex-wise analysis module. This framework consists of a pipeline of automated 3D Slicer compatible modules. It has been tested on a small pediatric dataset and incorporated in an open-source C++ based high-level module called GAMBIT. GAMBIT's setup allows efficient batch processing, grid computing and quality control. The current research focuses on the use of an average template for correspondence and surface re-sampling, as well as thorough validation of the framework and its application to clinical pediatric studies.

  5. Automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability, such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to humans; in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an infinite number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.

  6. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy, using particle swarm optimization (PSO) behaviour and preoperative a priori knowledge, is presented. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colored images. The swarm behaviour is directed by a new fitness function, optimized to improve detection and tracking performance. The function is defined as a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) on accuracy and convergence rate, without the need for explicit initialization.
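
    The following is a minimal, hedged PSO sketch in the spirit of this formulation: the fitness is a weighted sum of a prior-knowledge term H and a particle-density term D, but both terms, the weights, and the 2D search space are placeholders rather than the authors' definitions.

```python
# Minimal PSO sketch: fitness = wh * H + wd * D, with placeholder H and D terms.
import numpy as np

rng = np.random.default_rng(0)

def H(p):                     # hypothetical prior-knowledge score at position p
    return -np.linalg.norm(p - np.array([60.0, 40.0]))

def D(p, swarm):              # hypothetical local particle-density score
    return -np.mean(np.linalg.norm(swarm - p, axis=1))

def fitness(p, swarm, wh=0.7, wd=0.3):
    return wh * H(p) + wd * D(p, swarm)

pos = rng.uniform(0, 100, size=(30, 2))        # 30 particles in a toy image plane
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p, pos) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(50):
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p, pos) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]
# gbest approximates the target (gallbladder) position in this toy setting
```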

  7. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
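
    A stripped-down version of the weight-fitting idea is sketched below: the commissioned beam dose is a nonnegative weighted sum of pre-computed per-PSL doses, fitted to a measured dose with a smoothness penalty. It uses a simple nonnegative least-squares solver rather than the paper's augmented Lagrangian method, and all data are synthetic.

```python
# Hedged sketch of the commissioning idea (not the authors' solver): find
# nonnegative PSL weights w so that D @ w matches a measured dose, with a
# first-difference smoothness penalty folded into the least-squares system.
import numpy as np
from scipy.optimize import nnls

n_vox, n_psl = 500, 40
D = np.random.rand(n_vox, n_psl)                 # pre-computed dose of each PSL in water
w_true = np.abs(np.sin(np.linspace(0, 3, n_psl)))
d_meas = D @ w_true + 0.01 * np.random.randn(n_vox)   # synthetic "measured" beam dose

lam = 1.0
L = np.eye(n_psl) - np.eye(n_psl, k=1)           # first-difference operator (smoothness)
A = np.vstack([D, np.sqrt(lam) * L])
b = np.concatenate([d_meas, np.zeros(n_psl)])

w_hat, _ = nnls(A, b)                            # commissioned PSL weighting factors
```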

  8. Automatic and continuous landslide monitoring: the Rotolon Web-based platform

    NASA Astrophysics Data System (ADS)

    Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro

    2013-04-01

    Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that, since 1985, has been threatening the nearby village of Recoaro Terme. The first written record of a landslide occurrence dates back to 1798. After the last re-activation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m3 detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation, deployed over the landslide body, are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available by web browser with different access rights. The web environment provides the following advantages: 1) data are collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. Two monitoring systems are currently in operation on this site: 1) a GB-InSAR radar interferometer (University of Florence - Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a Web-based solution (CNR-IRPI Padova). This work deals with details on methodology, services and techniques adopted for the second

  9. A Web-Based Assessment for Phonological Awareness, Rapid Automatized Naming (RAN) and Learning to Read Chinese

    ERIC Educational Resources Information Center

    Liao, Chen-Huei; Kuo, Bor-Chen

    2011-01-01

    The present study examined the equivalency of conventional and web-based tests in reading Chinese. Phonological awareness, rapid automatized naming (RAN), reading accuracy, and reading fluency tests were administered to 93 grade 6 children in Taiwan with both test versions (paper-pencil and web-based). The results suggest that conventional and…

  10. Automatic Concept-Based Query Expansion Using Term Relational Pathways Built from a Collection-Specific Association Thesaurus

    ERIC Educational Resources Information Center

    Lyall-Wilson, Jennifer Rae

    2013-01-01

    The dissertation research explores an approach to automatic concept-based query expansion to improve search engine performance. It uses a network-based approach for identifying the concept represented by the user's query and is founded on the idea that a collection-specific association thesaurus can be used to create a reasonable representation of…

  11. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered on Harris corners extracted from the mask image, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts of the images were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
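
    For illustration, the fragment below fits a 2x3 affine transform to matched feature points by plain linear least squares; the point values are synthetic and the solver is a simplification of the SVD-based estimation described above.

```python
# Illustrative sketch: recover an affine transform from matched feature points
# by linear least squares (point values are synthetic, not DSA data).
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) matched points in mask and live images, N >= 3."""
    N = len(src)
    A = np.zeros((2 * N, 6))
    A[0::2, 0:2], A[0::2, 2] = src, 1.0          # rows for x' equations
    A[1::2, 3:5], A[1::2, 5] = src, 1.0          # rows for y' equations
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]]])

src = np.array([[10, 12], [200, 15], [105, 240], [30, 220]], dtype=float)
dst = src + np.array([1.5, -0.8])                # small shift for the demo
M = fit_affine(src, dst)                         # 2x3 affine used to warp the mask
```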

  12. A smartphone-based automatic diagnosis system for facial nerve palsy.

    PubMed

    Kim, Hyun Seok; Kim, So Young; Kim, Young Ho; Park, Kwang Suk

    2015-10-21

    Facial nerve palsy induces a weakness or loss of facial expression through damage to the facial nerve. A quantitative and reliable assessment system for facial nerve palsy is required for both patients and clinicians. In this study, we propose a rapid and portable smartphone-based automatic diagnosis system that discriminates facial nerve palsy from normal subjects. Facial landmarks are localized and tracked by an incremental parallel cascade of linear regression. An asymmetry index is computed using the displacement ratio between the left and right sides of the forehead and mouth regions during three motions: resting, raising the eyebrows and smiling. To classify facial nerve palsy, we used Linear Discriminant Analysis (LDA) and a Support Vector Machine (SVM), with Leave-one-out Cross Validation (LOOCV) on 36 subjects. The classification accuracy rate was 88.9%.
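
    The classification stage can be sketched as follows; the asymmetry feature, toy displacements, and linear SVM with leave-one-out cross-validation are illustrative stand-ins rather than the study's exact implementation.

```python
# Sketch of the classification stage only, with made-up asymmetry features per
# motion (resting, eyebrow raise, smile); the landmark tracking is omitted.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def asymmetry_index(disp_left, disp_right):
    # ratio of the smaller to the larger displacement; 1.0 = perfectly symmetric
    lo, hi = np.minimum(disp_left, disp_right), np.maximum(disp_left, disp_right)
    return lo / np.maximum(hi, 1e-6)

rng = np.random.default_rng(1)
disp_left = rng.uniform(2.0, 10.0, size=(36, 3))          # toy pixel displacements
disp_right = disp_left * rng.uniform(0.4, 1.0, size=(36, 3))
X = asymmetry_index(disp_left, disp_right)                # 3 features per subject
y = np.array([0] * 18 + [1] * 18)                         # 0 = normal, 1 = palsy (toy labels)

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```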

  13. Automatic Commentary System Based on Multiple Viewpoints with Different Amount of Information

    NASA Astrophysics Data System (ADS)

    Fujisawa, Mizuki; Saito, Suguru; Okumura, Manabu

    Previous commentary systems generate commentaries only from the viewpoint of a commentator. However, there are various viewpoints for comments, and such different viewpoints invoke various comments. The amount of information about a situation may differ between viewpoints, and the understanding of the situation may also differ between them. In this paper, we propose a method to generate commentaries automatically so that users can easily understand situations, by taking into account the different understandings of the situations between viewpoints. Our method is composed of two parts. The first is the generation of comment candidates about the current situation, unexpected actions, and the intentions of players, using a game tree. The second is comment selection, which chooses comments related to the prior one so that listeners can compare the situations from different viewpoints. Based on our approach, we implemented an experimental system that generates commentaries on mahjong games. We discuss the output of the system.

  14. Automatic Carbon Dioxide-Methane Gas Sensor Based on the Solubility of Gases in Water

    PubMed Central

    Cadena-Pereda, Raúl O.; Rivera-Muñoz, Eric M.; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J.; Anaya-Rivera, Ely K.

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0–100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible. PMID:23112626

  15. Automatic Kappa Angle Estimation for Air Photos Based on Phase Only Correlation

    NASA Astrophysics Data System (ADS)

    Xiong, Z.; Stanley, D.; Xin, Y.

    2016-06-01

    The approximate value of the exterior orientation parameters is needed for air photo bundle adjustment. Usually the airborne GPS/IMU can provide the initial value for the camera position and attitude angle. However, in some cases, the camera's attitude angle is not available due to the lack of an IMU or for other reasons. In this case, the kappa angle needs to be estimated for each photo before bundle adjustment. The kappa angle can be obtained from the Ground Control Points (GCPs) in the photo. Unfortunately, enough GCPs are not always available. In order to overcome this problem, an algorithm is developed to automatically estimate the kappa angle for air photos based on the phase-only correlation technique. This function has been embedded in PCI software. Extensive experiments show that this algorithm is fast, reliable, and stable.
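
    The core phase-only correlation primitive can be sketched as below for a pure translation between two patches; estimating the kappa angle builds on this operation, and the synthetic images and wrap-around handling here are illustrative assumptions.

```python
# Hedged sketch of phase-only correlation (POC) for a pure translation between
# two patches; the kappa-angle estimation in the paper builds on this primitive.
import numpy as np

def phase_correlation(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)    # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the patch back to negative offsets
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

a = np.random.rand(128, 128)
b = np.roll(a, shift=(5, -7), axis=(0, 1))       # synthetic shifted copy
print(phase_correlation(a, b))                   # expect roughly (5, -7)
```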

  16. Implementation of automatic white-balance based on adaptive-luminance

    NASA Astrophysics Data System (ADS)

    Zhong, Jian; Yao, Su-Ying; Xu, Jiang-Tao

    2009-03-01

    A novel automatic white-balance algorithm based on adaptive luminance is proposed in this paper. The algorithm redefines the gray-pixel region, which allows gray pixels to be filtered accurately. Furthermore, using the relations between the luminance of gray pixels under a standard light source and the shifts of their chroma (Cb, Cr) under other color temperatures, the algorithm establishes equations between the captured pixels and the original ones, from which the gains of the RGB channels can be estimated exactly. To evaluate the proposed algorithm, both an objective comparison method and a subjective observation method are used, and the test results show that images corrected by the proposed algorithm are superior to those produced by traditional algorithms. Finally, the algorithm is implemented in a VLSI design, and the synthesis results show that it can satisfy real-time applications.
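
    A heavily simplified illustration of the general idea follows: select weakly saturated (near-gray) pixels in a YCbCr-like space and derive per-channel gains that pull their average back to neutral. The chroma threshold, the gray-pixel rule, and the synthetic test image are assumptions and do not reproduce the paper's adaptive-luminance criteria.

```python
# Simplified gray-pixel white balance sketch (not the paper's algorithm).
import numpy as np

def white_balance_gains(rgb, gray_tol=0.3):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb, cr = 0.564 * (b - y), 0.713 * (r - y)
    gray = (np.abs(cb) + np.abs(cr)) < gray_tol * np.maximum(y, 1e-6)
    means = rgb[gray].mean(axis=0)               # average colour of near-gray pixels
    return means.mean() / means                  # R, G, B channel gains

rng = np.random.default_rng(2)
base = rng.random((240, 320, 1))                 # near-gray scene
img = np.clip(base * np.array([1.10, 1.00, 0.85])
              + 0.02 * rng.random((240, 320, 3)), 0, 1)   # warm colour cast
balanced = np.clip(img * white_balance_gains(img), 0, 1)
```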

  17. Automatic Wheezing Detection Based on Signal Processing of Spectrogram and Back-Propagation Neural Network.

    PubMed

    Lin, Bor-Shing; Wu, Huey-Dong; Chen, Sao-Jie

    2015-01-01

    Wheezing is a common clinical symptom in patients with obstructive pulmonary diseases such as asthma. Automatic wheezing detection offers an objective and accurate means for identifying wheezing lung sounds, helping physicians in the diagnosis, long-term auscultation, and analysis of a patient with obstructive pulmonary disease. This paper describes the design of a fast and high-performance wheeze recognition system. A wheezing detection algorithm based on the order truncate average method and a back-propagation neural network (BPNN) is proposed. Some features are extracted from processed spectra to train a BPNN, and subsequently, test samples are analyzed by the trained BPNN to determine whether they are wheezing sounds. The respiratory sounds of 58 volunteers (32 asthmatic and 26 healthy adults) were recorded for training and testing. Experimental results of a qualitative analysis of wheeze recognition showed a high sensitivity of 0.946 and a high specificity of 1.0.

  18. An Automatic Impact-based Delamination Detection System for Concrete Bridge Decks

    SciTech Connect

    Zhang, Gang; Harichandran, Ronald S.; Ramuhalli, Pradeep

    2012-01-02

    Delamination of concrete bridge decks is a commonly observed distress in corrosive environments. In traditional acoustic inspection methods, delamination is assessed by the "hollowness" of the sound created by impacting the bridge deck with a hammer or bar or by dragging a chain; the signals are often contaminated by ambient traffic noise and the detection is highly subjective. In the proposed method, a modified version of independent component analysis (ICA) is used to filter the traffic noise. To eliminate subjectivity, Mel-frequency cepstral coefficients (MFCC) are used as features for detection and the delamination is detected by a radial basis function (RBF) neural network. Results from both experimental and field data suggest that the proposed method is noise robust and has satisfactory performance. The method can also detect the delamination of repair patches and of concrete below the repair patches. The algorithms were incorporated into an automatic impact-based delamination detection (AIDD) system for field application.

  19. A processing-modeling routine to use rough data from automatic weather stations in snowpack mass dynamics modeling at fine temporal resolutions

    NASA Astrophysics Data System (ADS)

    Avanzi, Francesco; De Michele, Carlo; Ghezzi, Antonio; Jommi, Cristina; Pepe, Monica

    2015-04-01

    We discuss a proposed coupled routine to process rough data from automatic weather stations at an hourly resolution and to model snowpack mass dynamics. Seasonal snow represents an important component of the water cycle in mountain environments, and the modeling of its mass dynamics is a living topic in modern hydrology, given the expected modifications of the climate in the near future. Nevertheless, model forcing, calibration and evaluation are often hampered by the noisiness of rough data from automatic weather stations. The noise issues include, among others, non-physical temperature-based fluctuations of the signal and gauge under-catch. Consequently, it can be difficult to quantify precipitation inputs, accumulation/ablation periods, or melt-runoff timing and amounts. This problem is particularly relevant at fine temporal resolutions (e.g., hourly). To tackle this issue, 40 SNOTEL sites from the western US are considered here, and the proposed processing-modeling routine is applied to multi-year datasets to assess its performance in both processing hourly data and modeling snowpack dynamics. A simple one-layer snowpack model is used for this purpose. Specific attention is paid to removing erroneous sub-daily oscillations of snow depth. Under these assumptions, we can separate events of different types and recover catch deficiency by means of a data-fusion procedure that relies on the mass conservation law instead of site- or instrument-specific relations. Since the considered model needs the calibration of two parameters, and given that sub-daily physical oscillations in snow depth data are difficult to separate from instrument noise, a coupled processing-modeling procedure has been designed. Results show that noise can be successfully removed from the data, and that sub-daily data series can be exploited as useful sources to model snowpack dynamics.

  20. Automatic BSS-based filtering of metallic interference in MEG recordings: definition and validation using simulated signals

    NASA Astrophysics Data System (ADS)

    Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio

    2015-08-01

    Objective. One of the principal drawbacks of magnetoencephalography (MEG) is its high sensitivity to metallic artifacts, which come from implanted intracranial electrodes and ferromagnetic dental prostheses and produce a high distortion that masks cerebral activity. The aim of this study was to develop an automatic algorithm based on blind source separation (BSS) techniques to remove metallic artifacts from MEG signals. Approach. Three methods were evaluated: AMUSE, a second-order technique; and INFOMAX and FastICA, both based on high-order statistics. Simulated signals consisting of real artifact-free data mixed with real metallic artifacts were generated to objectively evaluate the effectiveness of BSS and the subsequent interference reduction. A completely automatic detection of metallic-related components was proposed, exploiting the known characteristics of the metallic interference: regularity and low-frequency content. Main results. The automatic procedure was applied to the simulated datasets and the three methods exhibited different performances. Results indicated that AMUSE preserved and consequently recovered more brain activity than INFOMAX and FastICA. The normalized mean squared error for the AMUSE decomposition remained below 2%, allowing an effective removal of artifactual components. Significance. To date, the performance of automatic artifact reduction has not been evaluated in MEG recordings. The proposed methodology is based on an automatic algorithm that provides effective interference removal. This approach can be applied as a processing step to any MEG dataset affected by metallic artifacts, allowing further analysis of otherwise unusable or poor quality data.
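
    A hedged sketch of the rejection logic is given below using FastICA only (AMUSE and INFOMAX are not shown): components whose power is concentrated at low frequencies are zeroed before reconstruction. The sampling rate, channel count, cutoff, and synthetic data are assumptions.

```python
# Hedged sketch of BSS-based artifact rejection: flag components dominated by
# low-frequency power and reconstruct the channels without them.
import numpy as np
from sklearn.decomposition import FastICA

fs = 600.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
n_chan = 8
brain = np.random.randn(len(t), n_chan) * 0.5
artifact = np.sin(2 * np.pi * 0.3 * t)[:, None] * np.random.rand(1, n_chan) * 5
X = brain + artifact                         # synthetic "MEG" channels

ica = FastICA(n_components=n_chan, random_state=0)
S = ica.fit_transform(X)                     # (samples x components)

def low_freq_fraction(s, fs, cutoff=1.0):
    spec = np.abs(np.fft.rfft(s)) ** 2
    freqs = np.fft.rfftfreq(len(s), 1 / fs)
    return spec[freqs < cutoff].sum() / spec.sum()

keep = np.array([low_freq_fraction(S[:, k], fs) < 0.5 for k in range(n_chan)])
S_clean = S * keep                           # zero out metallic-like components
X_clean = S_clean @ ica.mixing_.T + ica.mean_
```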

  1. Mitochondrial complex I and cell death: a semi-automatic shotgun model

    PubMed Central

    Gonzalez-Halphen, D; Ghelli, A; Iommarini, L; Carelli, V; Esposti, M D

    2011-01-01

    Mitochondrial dysfunction often leads to cell death and disease. We can now draw correlations between the dysfunction of one of the most important mitochondrial enzymes, NADH:ubiquinone reductase or complex I, and its structural organization thanks to recent advances in the X-ray structure of its bacterial homologs. The new structural information on bacterial complex I provides essential clues to finally understand how complex I may work. However, the same information remains difficult to interpret for many scientists working on mitochondrial complex I from different angles, especially in the field of cell death. Here, we present a novel way of interpreting the bacterial structural information in accessible terms. On the basis of an analogy to semi-automatic shotguns, we propose a novel functional model that incorporates recent structural information with previous evidence derived from studies on mitochondrial diseases, as well as functional bioenergetics. PMID:22030538

  2. Automatic Identification of Web-Based Risk Markers for Health Events

    PubMed Central

    Borsa, Diana; Hayward, Andrew C; McKendry, Rachel A; Cox, Ingemar J

    2015-01-01

    Background The escalating cost of global health care is driving the development of new technologies to identify early indicators of an individual’s risk of disease. Traditionally, epidemiologists have identified such risk factors using medical databases and lengthy clinical studies but these are often limited in size and cost and can fail to take full account of diseases where there are social stigmas or to identify transient acute risk factors. Objective Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events associated with an individual user and automatically generate Web-based risk markers for some of the common medical conditions worldwide, from cardiovascular disease to sexually transmitted infections and mental health conditions, as well as pregnancy. Methods Using anonymized datasets, we present methods to first distinguish individuals likely to have experienced specific health events, and classify them into distinct categories. We then use the self-controlled case series method to find the incidence of health events in risk periods directly following a user’s search for a query category, and compare to the incidence during other periods for the same individuals. Results Searches for pet stores were risk markers for allergy. We also identified some possible new risk markers; for example: searching for fast food and theme restaurants was associated with a transient increase in risk of myocardial infarction, suggesting this exposure goes beyond a long-term risk factor but may also act as an acute trigger of myocardial infarction. Dating and adult content websites were risk markers for sexually transmitted infections, such as human immunodeficiency virus (HIV). Conclusions Web-based methods provide a powerful, low-cost approach to automatically identify risk factors, and support more timely and personalized public health efforts to bring human and economic benefits. PMID

  3. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    NASA Astrophysics Data System (ADS)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have proven to be very effective. Unfortunately, the monitoring process of the treatment procedure remains manual and hence time consuming and prone to human error. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer and subsequently to determine the effectiveness of the treatment process followed. In ophthalmology, an ulcer area is detected for further inspection via luminous excitation of a dye. Usually in the imaging systems utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour as compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially a pre-processing stage that carries out a local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly we deal with the removal of potential reflections from the affected areas by making use of image registration of two candidate corneal images based on the detected corneal areas. Thirdly the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary detected via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breaks in the corneal boundary due to occlusion, noise, and image quality degradation. The ratio of the ulcer area confined within the cornea to the total corneal area is used as the measure of comparison. We demonstrate the use of the proposed tool in the analysis of the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of corneal size over time.

  4. Automatic detecting method of LED signal lamps on fascia based on color image

    NASA Astrophysics Data System (ADS)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile. Automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed that covers three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Several hundred fascias were tested with the automatic detection algorithm. The algorithm is quite fast and satisfies the real-time requirements of the system. Further, the detection results were demonstrated to be stable and accurate.

  5. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  6. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  7. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  8. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; 2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings, respectively; 3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and applying simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  9. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  10. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging

  11. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging

  12. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar (SAR), represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill versus 'look-alike' classification and ranked according to their importance. The classification algorithm is based on a two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results of the proposed algorithm and a human analyst was estimated at 85% and 93% for ENVISAT and RADARSAT, respectively.
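
    As a rough, hedged sketch of the first stage only, the snippet below trains a classification tree on 41 made-up dark-spot features and ranks them by importance; the fuzzy-logic stage and the real feature definitions are not reproduced.

```python
# Sketch of tree-based classification and feature ranking on synthetic features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 41))               # 41 made-up geometric/radiometric features
y = rng.integers(0, 2, size=300)             # 1 = oil spill, 0 = look-alike (toy labels)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
print("most informative features:", ranking[:5])
```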

  13. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also on neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.

  14. Automatic analyzer for highly polar carboxylic acids based on fluorescence derivatization-liquid chromatography.

    PubMed

    Todoroki, Kenichiro; Nakano, Tatsuki; Ishii, Yasuhiro; Goto, Kanoko; Tomita, Ryoko; Fujioka, Toshihiro; Min, Jun Zhe; Inoue, Koichi; Toyo'oka, Toshimasa

    2015-03-01

    A sensitive, versatile, and reproducible automatic analyzer for highly polar carboxylic acids based on a fluorescence derivatization-liquid chromatography (LC) method was developed. In this method, carboxylic acids were automatically and fluorescently derivatized with 4-(N,N-dimethylaminosulfonyl)-7-piperazino-2,1,3-benzoxadiazole (DBD-PZ) in the presence of 4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methylmorpholinium chloride (DMT-MM) by adopting a pretreatment program installed in an LC autosampler. All of the DBD-PZ-carboxylic acid derivatives were separated on an ODS column within 30 min by gradient elution. The peak of DBD-PZ did not interfere with the separation and quantification of the acids, with the exception of lactic acid. From the LC-MS/MS analysis, we confirmed that lactic acid was converted to an oxytriazinyl derivative, which was further modified with a dimethoxytriazine group of DMT-MM. We detected this oxytriazinyl derivative to quantify lactic acid. The detection limits (signal-to-noise ratio = 3) for the examined acids ranged from 0.19 to 1.1 µM, which corresponds to 95-550 fmol per injection. The intra- and inter-day precisions for typical, highly polar carboxylic acids were all <9.0%. The developed method was successfully applied to the comprehensive analysis of carboxylic acids in various samples, including fruit juices, red wine and media from cultured tumor cells.

  15. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, the fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the occurrence of the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal
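
    To make the cut set terminology concrete, here is a small, illustrative routine (not part of DG TO FT) that computes minimal cut sets of a toy AND/OR fault tree by expanding OR gates as unions and AND gates as cross products.

```python
# Illustrative minimal-cut-set computation for a tiny AND/OR fault tree.
from itertools import product

def cut_sets(node, tree):
    kind, children = tree.get(node, ("BASIC", []))
    if kind == "BASIC":
        return [frozenset([node])]
    child_sets = [cut_sets(c, tree) for c in children]
    if kind == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:  # AND: every combination of one cut set per child
        sets = [frozenset().union(*combo) for combo in product(*child_sets)]
    # keep only minimal sets (drop any set that contains another as a subset)
    return [s for s in sets if not any(o < s for o in sets)]

tree = {                        # hypothetical example, not a translated digraph
    "TOP": ("AND", ["G1", "C"]),
    "G1":  ("OR",  ["A", "B"]),
}
print(cut_sets("TOP", tree))    # minimal cut sets: {A, C} and {B, C}
```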

  16. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, the fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarity between digraphs and fault trees permits the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the occurrence of the remaining failures would no longer cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal

  17. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    NASA Astrophysics Data System (ADS)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is carried out using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is perturbed through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  18. Experiments with Uas Imagery for Automatic Modeling of Power Line 3d Geometry

    NASA Astrophysics Data System (ADS)

    Jóźków, G.; Vander Jagt, B.; Toth, C.

    2015-08-01

    The ideal mapping technology for transmission line inspection is airborne LiDAR operated from helicopter platforms. It allows for full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they remain costly. UAS technology has the potential to reduce these costs, especially when using inexpensive platforms with consumer-grade cameras. This study investigates the potential of using high resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points are also created on the wires. This allowed the 3D geometry of the transmission lines to be modeled similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when wires were represented by a partial (sparse) point cloud.

  19. Semi-automatic evaluation of intraocular lenses (IOL) using a mechanical eye model

    NASA Astrophysics Data System (ADS)

    Drauschke, A.; Rank, E.; Forjan, M.; Traxler, L.

    2013-03-01

    As cataracts are the most common cause of vision loss in people over 55, the implantation of intraocular lenses (IOL) is one of the most common surgical interventions. The quality measurement and test instructions for the patients. Therefore more effort is put into the individualization of IOLs in order to achieve better imaging properties. Two examples of typical quality standards for IOLs are the modulation transfer function (MTF) and the Strehl ratio, which can be measured in vivo or in mechanical eye models. A mechanical eye model at a scale of 1:1 is presented. It has been designed to allow the measurement of the MTF and Strehl ratio and the simultaneous evaluation of physiological imaging quality. The eye model allows the automatic analysis of the IOL, focused especially on the tolerance for tilt and decentration. Cornea, iris aperture and IOL type are interchangeable, because all these parts are implemented by the use of separate holders. The IOL is mounted on a shift plate; both are mounted on a tilt plate. This set-up guarantees independent decentration and tilt of the IOL, both moved by electrical drives. This set-up allows a two-dimensional tolerance analysis of decentration and tilt effects. Different 100×100 point (decentration×tilt) analyses for various iris apertures, each needing only approximately 15 minutes, are presented.

  20. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, more insightful research towards exploiting sparsely sampled, piecewise dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  1. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web service has become the technology of choice for service-oriented computing to meet the interoperability demands in web applications. In the Internet era, the exponential addition of web services nominates "quality of service" as an essential parameter for discriminating between web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, a QoS-aware automatic web service composition (QAWSC) algorithm is proposed based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894
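
    A minimal sketch of QoS-based, preference-weighted ranking of candidate services for one task; the attribute names, weights and the min-max normalization scheme are illustrative assumptions, not the UPWSR algorithm itself:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        response_time: float   # lower is better (ms)
        availability: float    # higher is better (0..1)
        cost: float            # lower is better

    def rank(candidates, weights):
        """Rank candidates by a weighted sum of min-max normalized QoS attributes."""
        def norm(values, higher_is_better):
            lo, hi = min(values), max(values)
            if hi == lo:
                return [1.0] * len(values)
            return [(v - lo) / (hi - lo) if higher_is_better else (hi - v) / (hi - lo)
                    for v in values]

        rt = norm([c.response_time for c in candidates], higher_is_better=False)
        av = norm([c.availability for c in candidates], higher_is_better=True)
        co = norm([c.cost for c in candidates], higher_is_better=False)
        scores = [weights["response_time"] * rt[i] + weights["availability"] * av[i]
                  + weights["cost"] * co[i] for i in range(len(candidates))]
        return sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)

    services = [Candidate("S1", 120, 0.99, 0.05),
                Candidate("S2", 80, 0.95, 0.10),
                Candidate("S3", 200, 0.999, 0.02)]
    for cand, score in rank(services, {"response_time": 0.5, "availability": 0.3, "cost": 0.2}):
        print(cand.name, round(score, 3))
    ```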

  2. Automatic Single Tree Detection in Plantations using UAV-based Photogrammetric Point clouds

    NASA Astrophysics Data System (ADS)

    Kattenborn, T.; Sperlich, M.; Bataua, K.; Koch, B.

    2014-08-01

    For reasons of documentation, management and certification there is high interest in efficient inventories of palm plantations at the single-plant level. Recent developments in unmanned aerial vehicle (UAV) technology facilitate spatially and temporally flexible acquisition of high-resolution 3D data. Common single tree detection approaches are based on Very High Resolution (VHR) satellite or Airborne Laser Scanning (ALS) data. However, VHR data is often limited by cloud cover and commonly does not allow for height measurements, and both VHR and in particular ALS data are characterized by relatively high acquisition costs. Sperlich et al. (2013) already demonstrated the high potential of UAV-based photogrammetric point clouds for single tree detection using pouring algorithms. This approach was adjusted and improved for application to a palm plantation. The 9.4 ha test site on Tarawa, Kiribati, comprised densely scattered growing palms, as well as abundant undergrowth and trees. Using a standard consumer-grade camera mounted on an octocopter, two flight campaigns at 70 m and 100 m altitude were performed to evaluate the effect of Ground Sampling Distance (GSD) and image overlap. To avoid commission errors and improve the terrain interpolation, the point clouds were classified based on the geometric characteristics of the classes, i.e. (1) palm, (2) other vegetation and (3) ground. The mapping accuracy amounts to 86.1% for the entire study area and 98.2% for dense growing palm stands. We conclude that this flexible and automatic approach has high potential for operational use.
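
    Single tree (palm) detection from a photogrammetric point cloud is often illustrated with a local-maxima search on a rasterized canopy height model; the sketch below shows that generic idea (the window size, grid and synthetic data are assumptions, and this is not the pouring algorithm used in the study):

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    rng = np.random.default_rng(2)

    # Synthetic canopy height model (CHM): a few palm crowns on flat terrain.
    chm = np.zeros((200, 200))
    yy, xx = np.mgrid[0:200, 0:200]
    for cy, cx in rng.integers(20, 180, size=(15, 2)):
        chm += 12.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))
    chm += rng.normal(0.0, 0.1, chm.shape)

    # A pixel is a tree top if it equals the local maximum in a window and exceeds a height threshold.
    window = 9          # pixels; should roughly match the expected crown diameter
    min_height = 5.0    # metres above ground
    local_max = maximum_filter(chm, size=window)
    tops = np.argwhere((chm == local_max) & (chm > min_height))
    print(f"detected {len(tops)} tree tops")
    ```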

  3. An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.

    PubMed

    Johansson, A Torbjorn; White, Paul R

    2011-08-01

    This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances. PMID:21877804
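
    A simplified gradient-adaptive notch filter can illustrate the frequency-tracking idea; the second-order notch structure, step size and pole radius below are common textbook choices and are assumptions here, not the authors' exact detector:

    ```python
    import numpy as np

    def track_frequency(x, fs, rho=0.95, mu=5e-4):
        """Track the dominant tone frequency with a constrained second-order adaptive notch filter."""
        a = 0.0                       # a = cos(omega0), adapted over time
        x1 = x2 = y1 = y2 = 0.0
        f_est = np.zeros(len(x))
        for n, xn in enumerate(x):
            # Notch output: zeros on the unit circle at +/- omega0, poles pulled inwards by rho.
            y = xn - 2 * a * x1 + x2 + 2 * rho * a * y1 - rho ** 2 * y2
            # Approximate gradient of y with respect to a (recursive terms neglected).
            dy_da = -2 * x1 + 2 * rho * y1
            a = np.clip(a - mu * y * dy_da, -1.0, 1.0)   # descend on the output power
            x2, x1, y2, y1 = x1, xn, y1, y
            f_est[n] = np.arccos(a) * fs / (2 * np.pi)
        return f_est

    fs = 48000
    t = np.arange(0, 1.0, 1 / fs)
    whistle = np.sin(2 * np.pi * (8000 + 2000 * t) * t) + 0.1 * np.random.randn(len(t))  # noisy upsweep
    print("final frequency estimate [Hz]:", track_frequency(whistle, fs)[-1])
    ```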

  4. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques is therefore highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoising results but also saved significant processing time.
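
    GLCM-type texture attributes of the kind listed above can be computed with scikit-image (function names as in recent versions); the sketch below extracts a few GLCM properties from a synthetic image, with the distances, angles and property list chosen for illustration rather than reproducing the paper's full 83-attribute set:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(3)
    image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in for an MR slice

    # GLCM over two offsets and two directions, symmetric and normalized.
    glcm = graycomatrix(image, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)

    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)   # such features would feed the network that predicts the filter parameters
    ```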

  5. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques is therefore highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoising results but also saved significant processing time. PMID:26405887

  6. TReMAP: Automatic 3D Neuron Reconstruction Based on Tracing, Reverse Mapping and Assembling of 2D Projections.

    PubMed

    Zhou, Zhi; Liu, Xiaoxiao; Long, Brian; Peng, Hanchuan

    2016-01-01

    Efficient and accurate digital reconstruction of neurons from large-scale 3D microscopy images remains a challenge in neuroscience. We propose a new automatic 3D neuron reconstruction algorithm, TReMAP, which utilizes 3D Virtual Finger (a reverse-mapping technique) to detect 3D neuron structures based on tracing results on 2D projection planes. Our fully automatic tracing strategy achieves performance close to that of state-of-the-art neuron tracing algorithms, with the crucial advantage of efficient computation (much lower memory consumption and parallel computation) for large-scale images.
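
    The projection-then-reverse-map idea can be sketched as follows: trace on a maximum intensity projection and recover the depth of each traced pixel from the argmax along the projection axis. This is a bare-bones illustration with a hypothetical 2D tracer output, not the 3D Virtual Finger technique itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    volume = rng.random((64, 256, 256)) * 0.1          # (z, y, x) stand-in for a 3D microscopy stack
    zs = (16 + 0.1 * np.arange(256)).astype(int)       # synthetic bright neurite drifting in depth
    volume[zs, 128, np.arange(256)] = 1.0

    # Project along z: a 2D tracer works on mip, and z_index remembers where each maximum came from.
    mip = volume.max(axis=0)
    z_index = volume.argmax(axis=0)

    # Suppose a 2D tracing step returned these (y, x) pixels on the projection plane.
    traced_2d = [(128, x) for x in range(0, 256, 8)]   # hypothetical tracer output

    # Reverse mapping: attach the depth of the brightest voxel to every traced 2D point.
    traced_3d = [(int(z_index[y, x]), y, x) for y, x in traced_2d]
    print(traced_3d[:4])
    ```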

  7. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    NASA Technical Reports Server (NTRS)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.
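
    As a small illustration of the kind of output such a tool produces, the state equation of a series RC network and the eigenvalue of its (already linear) system matrix can be written down directly; the component values are arbitrary and the snippet is not PROSE itself:

    ```python
    import numpy as np

    # Series RC low-pass: state x = capacitor voltage, input u = source voltage.
    # dx/dt = (-1/(R*C)) * x + (1/(R*C)) * u
    R, C = 1e3, 1e-6                     # 1 kOhm, 1 uF
    A = np.array([[-1.0 / (R * C)]])     # system matrix of the single-state model
    B = np.array([[1.0 / (R * C)]])

    eigvals = np.linalg.eigvals(A)
    print("eigenvalues:", eigvals)       # -1000 rad/s, i.e. a 1 ms time constant
    ```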

  8. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. The recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system that is optimized for variable sleep data. The methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is used to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is then performed based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet customized requirements in hospitals and institutions.

  9. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further thresholding technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties.
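
    The LDA-based probability-map step can be sketched with scikit-learn: fit a multiclass linear discriminant analysis on multi-channel intensities and use the class posteriors as per-voxel probability maps. The synthetic two-channel data and class labels below are assumptions for illustration only:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)

    # Synthetic training voxels: two MR channels (e.g. different weightings), three tissue classes.
    n = 3000
    means = {0: (0.2, 0.8), 1: (0.6, 0.4), 2: (0.9, 0.9)}     # assumed class-wise channel means
    X = np.vstack([rng.normal(means[c], 0.08, size=(n, 2)) for c in means])
    y = np.repeat(list(means), n)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # "Image" of shape (H, W, channels): the posterior of class 0 becomes the liver probability map.
    image = rng.random((64, 64, 2))
    posteriors = lda.predict_proba(image.reshape(-1, 2))
    liver_prob_map = posteriors[:, 0].reshape(64, 64)
    print(liver_prob_map.shape, liver_prob_map.min(), liver_prob_map.max())
    ```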

  10. An automatic water body area monitoring algorithm for satellite images based on Markov Random Fields

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Tourian, Mohammad J.; Sneeuw, Nico

    2016-04-01

    Our knowledge about the spatial and temporal variation of hydrological parameters is surprisingly poor, because most of it is based on in situ stations, and the number of stations has declined dramatically during the past decades. On the other hand, remote sensing techniques have proven their ability to measure different parameters of Earth phenomena. Optical and SAR satellite imagery provide the opportunity to monitor spatial changes in the coastline, which can serve as a way to determine the water extent repeatedly at an appropriate time interval. An appropriate classification technique to separate water and land is the backbone of any automatic water body monitoring. Due to changes in the water level, river and lake extent, atmosphere, sunlight radiation and onboard calibration of the satellite over time, most pixel-based classification techniques fail to determine accurate water masks. Beyond pixel intensity, spatial correlation between neighboring pixels is another source of information that should be used to decide the label of a pixel. Water bodies have strong spatial correlation in satellite images. Therefore, including contextual information as an additional constraint in the procedure of water body monitoring improves the accuracy of the derived water masks significantly. In this study, we present an automatic algorithm for water body area monitoring based on maximum a posteriori (MAP) estimation of Markov Random Fields (MRF). First, we collect all available images from the selected case studies during the monitoring period. Then, for each image separately, we apply k-means clustering to derive a primary water mask. After that, we build an MRF using the pixel values and the primary water mask of each image. Among the different realizations of the field, we then select the one that maximizes the posterior estimate. We solve this optimization problem using graph cut techniques. A graph with two terminals is constructed, after which the best labelling structure for
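
    The primary water-mask step (before the MRF refinement) can be sketched with a two-class k-means on pixel intensities; deciding that the darker cluster is water is an assumption that fits SAR-like imagery and is not part of the paper's specification:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)

    # Synthetic intensity image: dark water body in the centre, brighter land around it.
    img = rng.normal(0.6, 0.05, size=(128, 128))
    img[40:90, 30:100] = rng.normal(0.2, 0.05, size=(50, 70))

    # Two-class k-means on pixel intensities gives the primary (pixel-wise) water mask.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
    labels = labels.reshape(img.shape)

    # Call the darker cluster "water" (assumption); the MRF/graph-cut step would refine this mask.
    water_label = int(np.argmin([img[labels == k].mean() for k in (0, 1)]))
    primary_water_mask = labels == water_label
    print("water fraction:", primary_water_mask.mean())
    ```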

  11. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Liu, Yuan; Sowers, Thomas S.; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  12. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation.

    PubMed

    Häme, Yrjö; Pollari, Mika

    2012-01-01

    A novel liver tumor segmentation method for CT images is presented. The aim of this work was to reduce the manual labor and time required in the treatment planning of radiofrequency ablation (RFA) by providing accurate and automated tumor segmentations reliably. The developed method is semi-automatic, requiring only minimal user interaction. The segmentation is based on non-parametric intensity distribution estimation and a hidden Markov measure field model, with application of a spherical shape prior. A post-processing operation is also presented to remove overflow into adjacent tissue. In addition to the conventional approach of using a single image as input data, an approach using images from multiple contrast phases was developed. The accuracy of the method was validated with two sets of patient data and artificially generated samples. The patient data included preoperative RFA images and a public data set from the "3D Liver Tumor Segmentation Challenge 2008". The method achieved very high accuracy with the RFA data, and outperformed other methods evaluated with the public data set, achieving an average overlap error of 30.3%, which represents an improvement of 2.3 percentage points over the previously best-performing semi-automatic method. The average volume difference was 23.5%, and the average, RMS, and maximum surface distance errors were 1.87, 2.43, and 8.09 mm, respectively. The method produced good results even for tumors with very low contrast and ambiguous borders, and the performance remained high with noisy image data.

  13. Sediment characterization in intertidal zone of the Bourgneuf bay using the Automatic Modified Gaussian Model (AMGM)

    NASA Astrophysics Data System (ADS)

    Verpoorter, C.; Carrère, V.; Combe, J.-P.; Le Corre, L.

    2009-04-01

    Understanding the uppermost layer of cohesive sediment beds provides important clues for predicting future sediment behaviour. Sediment consolidation, grain size, water content and biological slimes (EPS: extracellular polymeric substances) were found to be significant factors influencing erosion resistance. The surface spectral signatures of mudflat sediments reflect such bio-geophysical parameters. The overall shape of the spectrum, also called the continuum, is a function of grain size and moisture content. Composition translates into specific absorption features. Finally, the chlorophyll-a concentration, derived from the strength of the absorption at 675 nm, is a good proxy for biofilm biomass. The Bourgneuf Bay site, south of the Loire river estuary, France, was chosen to represent a range of physical and biological influences on sediment erodability. Field spectral measurements and sediment samples were collected during various field campaigns. An ASD FieldSpec 3 spectroradiometer was used to produce sediment reflectance hyperspectra in the wavelength range 350-2500 nm. We have developed an automatic procedure based on the Modified Gaussian Model that uses, as a first step, Spectroscopic Derivative Analysis (SDA) to extract the bio-geophysical properties of mudflat sediments from the spectra (Verpoorter et al., 2007). This AMGM algorithm is a powerful tool to deconvolve spectra into two components: first, Gaussian curves for the absorption bands, and second, a straight line in wavenumber space for the continuum. We are investigating the possibility of including other approaches, such as the inverse Gaussian band centred at 2800 nm initially developed by Whiting et al. (2006) to estimate water content. Additionally, soil samples were analysed to determine moisture content, grain size (laser grain size analysis), organic matter content, carbonate content (calcimetry) and clay content. X-ray diffraction analysis was performed on selected non
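
    The Modified-Gaussian-type deconvolution can be illustrated by fitting a linear continuum (in wavenumber) plus one negative Gaussian absorption band to a reflectance spectrum; the single band, its position and the synthetic spectrum are assumptions for illustration and do not reproduce the full AMGM procedure:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mgm_like(wavelength_nm, c0, c1, amp, center_nm, width_nm):
        """Linear continuum in wavenumber plus one Gaussian absorption band (amp < 0)."""
        wavenumber = 1.0e7 / wavelength_nm                      # cm^-1
        continuum = c0 + c1 * wavenumber
        band = amp * np.exp(-0.5 * ((wavelength_nm - center_nm) / width_nm) ** 2)
        return continuum + band

    rng = np.random.default_rng(7)
    wl = np.linspace(400, 1000, 301)                            # nm
    true = dict(c0=0.25, c1=6e-6, amp=-0.08, center_nm=675.0, width_nm=15.0)   # band near 675 nm
    refl = mgm_like(wl, **true) + rng.normal(0.0, 0.003, wl.size)

    p0 = (0.2, 5e-6, -0.05, 670.0, 20.0)
    popt, _ = curve_fit(mgm_like, wl, refl, p0=p0)
    print("fitted band depth and centre:", popt[2], popt[3])    # proxy for chlorophyll-a biomass
    ```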

  14. Automatic crack propagation tracking

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Weidner, T. J.; Yehia, N. A. B.; Burd, G. S.

    1985-01-01

    A finite element based approach to fully automatic crack propagation tracking is presented. The procedure presented combines fully automatic mesh generation with linear fracture mechanics techniques in a geometrically based finite element code capable of automatically tracking cracks in two-dimensional domains. The automatic mesh generator employs the modified-quadtree technique. Crack propagation increment and direction are predicted using a modified maximum dilatational strain energy density criterion employing the numerical results obtained by meshes of quadratic displacement and singular crack tip finite elements. Example problems are included to demonstrate the procedure.

  15. Automatic identification of the number of food items in a meal using clustering techniques based on the monitoring of swallowing and chewing

    PubMed Central

    Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Fontana, Juan M.; Sazonov, Edward

    2012-01-01

    The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food type. Data collected from experiments with 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, the performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, the AHC achieved an accuracy of >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integral application for objective Monitoring of Ingestive Behavior in free-living conditions. PMID:23125872
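
    Part of Affinity Propagation's appeal here is that it chooses the number of clusters (food items) itself. A minimal scikit-learn sketch on synthetic chewing/swallowing feature vectors follows; the two features and their values are assumptions, not the study's measured parameters:

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(8)

    # Each row is one chewing/swallowing episode described by two assumed features,
    # e.g. mean chewing rate and chews per swallow; the synthetic meal contains three food items.
    centers = np.array([[1.0, 8.0], [1.6, 15.0], [0.8, 5.0]])
    episodes = np.vstack([rng.normal(c, [0.05, 0.8], size=(20, 2)) for c in centers])

    ap = AffinityPropagation(random_state=0).fit(episodes)
    n_food_items = len(ap.cluster_centers_indices_)
    print("estimated number of food items:", n_food_items)
    ```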

  16. Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer

    NASA Astrophysics Data System (ADS)

    Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune

    2014-03-01

    Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the radiotracer 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region-of-interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling allow the bladder to be removed and the tumour and metastatic lymph nodes to be identified. The proposed method was applied to 125 patients and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has a very high potential for substituting the tedious manual delineation of PET-positive areas.

  17. Automatic registration of large-scale urban scene point clouds based on semantic feature points

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liang, Fuxun; Liu, Yuan

    2016-03-01

    Point clouds collected by terrestrial laser scanning (TLS) from large-scale urban scenes contain a wide variety of objects (buildings, cars, pole-like objects, and others) with symmetric and incomplete structures and relatively low-textured surfaces, all of which pose great challenges for automatic registration between scans. To address these challenges, this paper proposes a registration method that provides marker-free, multi-view registration based on extracted semantic feature points. First, the method detects the semantic feature points within a detection scheme that includes point cloud segmentation, vertical feature line extraction and semantic information calculation, and finally takes the intersections of these lines with the ground as the semantic feature points. Second, the proposed method matches the semantic feature points using geometric constraints (a 3-point scheme) as well as semantic information (category and direction), resulting in exhaustive pairwise registration between scans. Finally, the proposed method performs multi-view registration by constructing a minimum spanning tree of the fully connected graph derived from the exhaustive pairwise registrations. Experiments have demonstrated that the proposed method performs well in various urban environments and indoor scenes, with accuracy at the centimeter level, and improves the efficiency, robustness, and accuracy of registration in comparison with feature-plane-based methods.
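
    The final multi-view step builds a minimum spanning tree over the fully connected graph of pairwise registrations; a sketch with SciPy follows, where the edge weights are assumed pairwise registration errors rather than values from the paper:

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    # Assumed pairwise registration errors (e.g. RMS residuals in metres) between 4 scans;
    # zero diagonal, symmetric matrix treated as an undirected fully connected graph.
    errors = np.array([
        [0.00, 0.03, 0.12, 0.40],
        [0.03, 0.00, 0.05, 0.25],
        [0.12, 0.05, 0.00, 0.07],
        [0.40, 0.25, 0.07, 0.00],
    ])

    mst = minimum_spanning_tree(errors)                    # keeps the cheapest pairwise links
    rows, cols = mst.nonzero()
    print("pairwise registrations kept for multi-view alignment:")
    for i, j in zip(rows, cols):
        print(f"  scan {i} <-> scan {j} (error {errors[i, j]:.2f})")
    ```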

  18. [Automatic classification method of star spectra data based on manifold fuzzy twin support vector machine].

    PubMed

    Liu, Zhong-bao; Gao, Yan-yun; Wang, Jian-zhen

    2015-01-01

    Support vector machine (SVM), with its good learning ability and generalization, is widely used in star spectra data classification. But when the scale of the data becomes larger, the shortcomings of SVM appear: the computational cost is quite large and the classification speed is too slow. In order to solve the above problems, the twin support vector machine (TWSVM) was proposed by Jayadeva. The advantage of TWSVM is that its time cost is reduced to 1/4 of that of SVM. However, all the methods mentioned above focus only on global characteristics and neglect local characteristics. In view of this, an automatic classification method for star spectra data based on a manifold fuzzy twin support vector machine (MF-TSVM) is proposed in this paper. In MF-TSVM, manifold-based discriminant analysis (MDA) is used to obtain the global and local characteristics of the input data, and fuzzy membership is introduced to reduce the influence of noise and singular data on the classification results. Comparative experiments with current classification methods, such as C-SVM and KNN, on the SDSS star spectra datasets verify the effectiveness of the proposed method. PMID:25993861

  19. Automatic classification of delphinids based on the representative frequencies of whistles.

    PubMed

    Lin, Tzu-Hao; Chou, Lien-Siang

    2015-08-01

    Classification of odontocete species remains a challenging task for passive acoustic monitoring. Classifiers that have been developed use spectral features extracted from echolocation clicks and whistle contours. Most of these contour-based classifiers require complete contours to reduce measurement errors. Therefore, overlapping contours and partially detected contours produced by an automatic detection algorithm may increase the bias of contour-based classifiers. In this study, classification was conducted on each recording section without extracting individual contours. A local-max detector was used to extract representative frequencies of delphinid whistles, and each section was divided into multiple non-overlapping fragments. Three acoustical parameters were measured from the distribution of representative frequencies in each fragment. Using the statistical features of the acoustical parameters and the percentage of overlapping whistles, a correct classification rate of 70.3% was reached for the recordings of seven species (Tursiops truncatus, Delphinus delphis, Delphinus capensis, Peponocephala electra, Grampus griseus, Stenella longirostris longirostris, and Stenella attenuata) archived in MobySound.org. In addition, the correct classification rate was not dramatically reduced under various simulated noise conditions. This algorithm can be employed in acoustic observatories to classify different delphinid species and facilitate future studies on the community ecology of odontocetes. PMID:26328716
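
    Extracting representative whistle frequencies with a per-frame local maximum on the spectrogram can be sketched as follows; the window length, threshold and synthetic signal are assumptions, not the study's detector settings:

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    fs = 96000
    t = np.arange(0, 1.0, 1 / fs)
    signal = np.sin(2 * np.pi * (9000 + 3000 * t) * t) + 0.2 * np.random.randn(t.size)  # synthetic whistle

    f, frames, Sxx = spectrogram(signal, fs=fs, nperseg=2048, noverlap=1024)

    # Representative frequency per frame: the spectral peak, kept only if it stands out from the noise floor.
    peak_bins = Sxx.argmax(axis=0)
    peak_power = Sxx.max(axis=0)
    noise_floor = np.median(Sxx, axis=0)
    representative_freq = np.where(peak_power > 10 * noise_floor, f[peak_bins], np.nan)

    # Non-overlapping fragments of the section would then be summarized by statistics of these frequencies.
    print(np.nanmin(representative_freq), np.nanmax(representative_freq))
    ```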

  20. Automatic SAR and optical images registration method based on improved SIFT

    NASA Astrophysics Data System (ADS)

    Yue, Chunyu; Jiang, Wanshou

    2014-10-01

    An automatic SAR and optical image registration method based on improved SIFT is proposed in this paper, following a two-step, coarse-to-fine strategy. The geometric relation between the images is first constructed from geographic information, and the images are aligned to the elevation datum plane to eliminate rotation and resolution differences. Then SIFT features extracted from the two images by the dominant-direction-improved SIFT are matched using SSIM as the similarity measure, based on the structural information around each SIFT feature. Since the rotation difference in flat areas is eliminated after the coarse registration, the number of correct matches and the correct matching rate can be increased by altering the feature orientation assignment. Parallax and angle restrictions are then introduced to improve the matching performance through cluster analysis in the angle and parallax domains. The original matches are mapped in sequence to a parallax feature space and a rotation feature space, established by custom-defined parallax and rotation parameters, respectively. Cluster analysis is applied in these feature spaces, the relationship between the cluster parameters and the matching result is analysed, and correct matches are retained on the basis of the clustering. Finally, the perspective transform parameters for the registration are obtained with the RANSAC algorithm, which removes the remaining false matches at the same time. Experiments show that the algorithm proposed in this paper is effective for the registration of SAR and optical images with large differences.
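
    The final RANSAC step, estimating a perspective (homography) transform from putative matches while rejecting outliers, can be sketched with OpenCV; the point arrays below are placeholders standing in for matched SIFT keypoint coordinates:

    ```python
    import numpy as np
    import cv2

    rng = np.random.default_rng(9)

    # Placeholder matched keypoint coordinates (SAR image -> optical image), including some outliers.
    src = rng.uniform(0, 1000, size=(60, 2)).astype(np.float32)
    H_true = np.array([[1.01, 0.02, 5.0], [-0.015, 0.99, -3.0], [1e-5, 2e-5, 1.0]])
    dst_h = np.hstack([src, np.ones((60, 1))]) @ H_true.T
    dst = (dst_h[:, :2] / dst_h[:, 2:]).astype(np.float32)
    dst[:10] += rng.uniform(-50, 50, size=(10, 2)).astype(np.float32)   # simulated false matches

    # RANSAC estimates the homography and flags the false matches as outliers.
    H_est, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    print("inliers kept:", int(inlier_mask.sum()), "of", len(src))
    ```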

  1. UFC advisor: An AI-based system for the automatic test environment

    NASA Technical Reports Server (NTRS)

    Lincoln, David T.; Fink, Pamela K.

    1990-01-01

    The Air Logistics Command within the Air Force is responsible for maintaining a wide variety of aircraft fleets and weapon systems. Maintaining these fleets and systems requires specialized test equipment that provides data concerning the behavior of a particular device. The test equipment is used to 'poke and prod' the device to determine its functionality. The data represent voltages, pressures, torques, temperatures, etc., and are called testpoints. These testpoints can be defined numerically as being in or out of limits/tolerance. Some test equipment is termed 'automatic' because it is computer-controlled. Because effective maintenance in the test arena requires a significant amount of expertise, it is an ideal area for the application of knowledge-based system technology. Such a system would take testpoint data, identify values that are out of limits, and determine potential underlying problems based on which values are out of limits and by how much. This paper discusses the application of this technology to a device called the Unified Fuel Control (UFC), which is maintained in this manner.

  2. Quality assurance in the production of pipe fittings by automatic laser-based material identification

    NASA Astrophysics Data System (ADS)

    Moench, Ingo; Peter, Laszlo; Priem, Roland; Sturm, Volker; Noll, Reinhard

    1999-09-01

    In plants of the chemical, nuclear and off-shore industries, application-specific high-alloyed steels are used for pipe fittings. Mixing of different steel grades can lead to corrosion with severe consequential damage. Growing quality requirements and environmental responsibilities demand 100% material control in the production of pipe fittings. Therefore, LIFT, an automatic inspection machine, was developed to ensure against any mixing of material grades. LIFT is able to identify more than 30 different steel grades. The inspection method is based on Laser-Induced Breakdown Spectrometry (LIBS). An expert system, which can be easily trained and recalibrated, was developed for the data evaluation. The result of the material inspection is transferred to an external handling system via a PLC interface. The duration of the inspection process is 2 seconds. The graphical user interface was developed with respect to the requirements of an unskilled operator. The software is based on a real-time operating system and provides safe and reliable operation. An interface for remote maintenance by modem enables fast operational support. Logged data are retrieved and evaluated, which is the basis for adaptive improvement of the configuration of LIFT with respect to changing requirements in the production line. Within the first six months of routine operation, about 50,000 pipe fittings were inspected.

  3. Multifractal Analysis and Relevance Vector Machine-Based Automatic Seizure Detection in Intracranial EEG.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Shasha

    2015-09-01

    Automatic seizure detection technology is of great significance for long-term electroencephalogram (EEG) monitoring of epilepsy patients. The aim of this work is to develop a seizure detection system with high accuracy. The proposed system is mainly based on multifractal analysis, which describes the local singular behavior of fractal objects and characterizes the multifractal structure using a continuous spectrum. Compared with computing a single fractal dimension, multifractal analysis can provide a better description of the transient behavior of EEG fractal time series during the evolution from the interictal stage to seizures. Thus both interictal EEG and ictal EEG were analyzed with the multifractal formalism, and their differences in multifractal features were used to distinguish the two classes of EEG and detect seizures. In the proposed detection system, eight features (α0, α(min), α(max), Δα, f(α(min)), f(α(max)), Δf and R) were extracted from the multifractal spectra of the preprocessed EEG to construct feature vectors. Subsequently, a relevance vector machine (RVM) was applied for EEG pattern classification, and a series of post-processing operations was used to increase the accuracy and reduce false detections. Both epoch-based and event-based evaluation methods were performed to appraise the system's performance on the EEG recordings of 21 patients in the Freiburg database. An epoch-based sensitivity of 92.94% and specificity of 97.47% were achieved, and the proposed system obtained a sensitivity of 92.06% with a false detection rate of 0.34/h in the event-based performance assessment.

  4. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing;