Science.gov

Sample records for automatic vol analysis

  1. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
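The interval approach described above can be sketched in a few lines; the class and measurement values below are illustrative only (the paper itself uses the INTLAB toolbox for MATLAB):

```python
# Minimal interval-arithmetic sketch of automatic error analysis.

class Interval:
    """A closed interval [lo, hi] propagated through arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtracting an interval swaps its endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

# A measured resistance R = 100 +/- 1 ohm and current I = 2 +/- 0.1 A:
R = Interval(99.0, 101.0)
I = Interval(1.9, 2.1)
V = I * R          # Ohm's law propagated through intervals
print(V.lo, V.hi)  # encloses every voltage consistent with the inputs
```

Unlike first-order error propagation, the resulting interval is a guaranteed enclosure, at the cost of possible overestimation when a variable appears more than once in a formula.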

  2. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs), automatically and independently of any subjective approach. Based on the widely used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  3. Automatic fringe analysis

    NASA Technical Reports Server (NTRS)

    Chiu, Arnold; Ladewski, Ted; Turney, Jerry

    1991-01-01

    To satisfy the requirement for fast, accurate interferometric analytical tools, the Fringe Analysis Workstation (FAW) has been developed to analyze complex fringe image data easily and rapidly. FAW is employed for flow studies in hydrodynamics and aerodynamics experiments, and for target shell characterization in inertial confinement fusion research. The three major components of the FAW system are described: fringe analysis/image processing, input/output, and the visualization/graphical user interface.

  4. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable black-box engineering tool. Finite element meshes must be generated automatically from computer-aided design databases, and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  5. Automatic emotional expression analysis from eye area

    NASA Astrophysics Data System (ADS)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed with artificial intelligence techniques. As a result of the experimental studies, six universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.
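As a hedged sketch of the feature-extraction step, a single-level Haar transform (one common discrete wavelet; the abstract does not name the wavelet actually used) applied to a 1-D intensity profile might look like:

```python
# Illustrative wavelet-based feature extraction for a toy intensity row.
import math

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))   # local average
        detail.append((a - b) / math.sqrt(2))   # local difference
    return approx, detail

def wavelet_features(signal):
    """Simple coefficient statistics, usable as classifier input."""
    approx, detail = haar_dwt(signal)
    energy = sum(d * d for d in detail)         # detail-band energy
    mean_a = sum(approx) / len(approx)          # coarse brightness
    return [mean_a, energy]

row = [10, 12, 11, 40, 42, 41, 12, 10]   # toy intensity profile
print(wavelet_features(row))
```

In a real pipeline, such statistics from several decomposition levels would form the input vector of the neural network classifier.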

  6. Accuracy analysis of automatic distortion correction

    NASA Astrophysics Data System (ADS)

    Kolecki, Jakub; Rzonca, Antoni

    2015-06-01

    The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? To obtain a reliable answer to this question, two kinds of tests were carried out for three lens models. First, a multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points was performed. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion removal mode. Extensive conclusions regarding the application of each calibration approach in practice are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.

  7. Automatic Syntactic Analysis of Free Text.

    ERIC Educational Resources Information Center

    Schwarz, Christoph

    1990-01-01

    Discusses problems encountered with the syntactic analysis of free-text documents in indexing. Postcoordination and precoordination of terms are discussed, an automatic indexing system called COPSY (context operator syntax) that uses natural language processing techniques is described, and future developments are explained. (60 references) (LRW)

  8. Automatic photointerpretation via texture and morphology analysis

    NASA Technical Reports Server (NTRS)

    Tou, J. T.

    1982-01-01

    Computer-based techniques for automatic photointerpretation based upon information derived from texture and morphology analysis of images are discussed. By automatic photointerpretation is meant the determination by computer of semantic descriptions of the content of images. To perform semantic analysis of morphology, a hierarchical structure of knowledge representation was developed. The simplest elements in a morphology are strokes, which are used to form alphabets. The alphabets are the elements for generating words, which are used to describe the function or property of an object or a region. The words are the elements for constructing sentences, which are used for semantic description of the content of the image. Photointerpretation based upon morphology is then augmented by textural information. Textural analysis is performed using a pixel-vector approach.

  9. Automatic Prosodic Analysis to Identify Mild Dementia.

    PubMed

    Gonzalez-Moreira, Eduardo; Torres-Boza, Diana; Kairuz, Héctor Arturo; Ferrer, Carlos; Garcia-Zamora, Marlene; Espinoza-Cuadros, Fernando; Hernandez-Gómez, Luis Alfonso

    2015-01-01

    This paper describes an exploratory technique to identify mild dementia by assessing the degree of speech deficits. A total of twenty participants took part in the experiment: ten patients with a diagnosis of mild dementia and ten healthy controls. The audio session for each subject was recorded following a methodology developed for the present study. Prosodic features in patients with mild dementia and healthy elderly controls were measured using automatic prosodic analysis on a reading task. A novel method was carried out to gather twelve prosodic features from the speech samples. The best classification rate achieved was 85% accuracy using four prosodic features. The results attained show that the proposed computational speech analysis offers a viable alternative for automatic identification of dementia features in elderly adults. PMID:26558287
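The twelve prosodic features are not listed in the abstract; purely as an illustration, two measures commonly used in prosodic analysis can be computed from a toy speech/pause segmentation (the durations below are invented):

```python
# Toy prosodic measures from a speech/pause segmentation of a reading task.

def prosodic_features(segments):
    """segments: list of (label, duration_seconds), label 'speech' or 'pause'.
    Assumes at least one pause segment is present."""
    speech = sum(d for label, d in segments if label == "speech")
    pause = sum(d for label, d in segments if label == "pause")
    n_pauses = sum(1 for label, _ in segments if label == "pause")
    total = speech + pause
    return {"pause_ratio": pause / total,     # fraction of time spent silent
            "mean_pause": pause / n_pauses}   # average pause duration

reading = [("speech", 2.0), ("pause", 0.5), ("speech", 1.5), ("pause", 1.0)]
print(prosodic_features(reading))
```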

  10. Automatic Prosodic Analysis to Identify Mild Dementia

    PubMed Central

    Gonzalez-Moreira, Eduardo; Torres-Boza, Diana; Kairuz, Héctor Arturo; Ferrer, Carlos; Garcia-Zamora, Marlene; Espinoza-Cuadros, Fernando; Hernandez-Gómez, Luis Alfonso

    2015-01-01

    This paper describes an exploratory technique to identify mild dementia by assessing the degree of speech deficits. A total of twenty participants took part in the experiment: ten patients with a diagnosis of mild dementia and ten healthy controls. The audio session for each subject was recorded following a methodology developed for the present study. Prosodic features in patients with mild dementia and healthy elderly controls were measured using automatic prosodic analysis on a reading task. A novel method was carried out to gather twelve prosodic features from the speech samples. The best classification rate achieved was 85% accuracy using four prosodic features. The results attained show that the proposed computational speech analysis offers a viable alternative for automatic identification of dementia features in elderly adults. PMID:26558287

  11. Automatic analysis and classification of surface electromyography.

    PubMed

    Abou-Chadi, F E; Nashar, A; Saad, M

    2001-01-01

    In this paper, parametric modeling algorithms for surface electromyography (SEMG) that facilitate automatic feature extraction are combined with artificial neural networks (ANN) to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the conventional Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%, while poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device. PMID:11556501
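For the FLD baseline the comparison refers to, a minimal two-class Fisher linear discriminant over 2-D feature vectors might look like this (the feature values below are invented):

```python
# Minimal two-class Fisher linear discriminant on 2-D feature vectors.

def fld_weights(class0, class1):
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    m0, m1 = mean(class0), mean(class1)
    # Pooled within-class scatter matrix Sw (2x2).
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class0, m0), (class1, m1)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    # w = Sw^{-1} (m1 - m0), via the closed-form 2x2 inverse.
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

healthy  = [(1.0, 2.0), (1.5, 2.0), (1.0, 2.5)]
myopathy = [(3.0, 4.0), (3.5, 4.0), (3.0, 4.5)]
w = fld_weights(healthy, myopathy)
# Project a new sample: a higher w.x score means closer to the myopathy class.
score = lambda p: w[0] * p[0] + w[1] * p[1]
print(score((3.0, 4.0)) > score((1.0, 2.0)))  # -> True
```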

  12. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for the automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of its distinguishable parts. The technical objectives of the A&CC centre on 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any order and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Further possibilities arise from combining the A&CC with artificial neural network technologies. We believe the A&CC can be used in building systems for testing and control in various fields of industry and military application (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, and in industrial vision and decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid and of ensembles of particles, on decoding of interferometric images, on digitization of paper charts of electrical signals, on text recognition, on noise elimination and filtering of images, on analysis of astronomical images and aerial photography, and on object detection.
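The "consecutive automatic painting" of image parts suggests connected-component labelling; a minimal flood-fill sketch of that idea (an analogy only, since the A&CC codes are not described in detail) is:

```python
# Label each 4-connected region of equal pixel values via iterative flood fill.

def label_regions(grid):
    """Return a label grid and the number of connected regions found."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]   # 0 means "unlabelled"
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                continue
            current += 1
            stack, colour = [(r, c)], grid[r][c]
            while stack:                          # iterative flood fill
                y, x = stack.pop()
                if not (0 <= y < rows and 0 <= x < cols):
                    continue
                if labels[y][x] or grid[y][x] != colour:
                    continue
                labels[y][x] = current
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

image = [[0, 0, 1],
         [0, 1, 1],
         [2, 2, 1]]
labels, n = label_regions(image)
print(n)  # -> 3
```

Geometric and statistical parameters of each part (area, centroid, colour histogram) can then be computed per label.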

  13. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in the diagnosis of genetic syndromes. In this paper, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle-point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets, extracted from the average gray profile, gradient profile and shape profile, and characterized by weighted density distribution (WDD) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  14. Automatic analysis of computation in biochemical reactions.

    PubMed

    Egri-Nagy, Attila; Nehaniv, Chrystopher L; Rhodes, John L; Schilstra, Maria J

    2008-01-01

    We propose a modeling and analysis method for biochemical reactions based on finite state automata. This is a completely different approach compared to traditional modeling of reactions by differential equations. Our method aims to explore the algebraic structure behind chemical reactions using automatically generated coordinate systems. In this paper we briefly summarize the underlying mathematical theory (the algebraic hierarchical decomposition theory of finite state automata) and describe how such automata can be derived from the description of chemical reaction networks. We also outline techniques for the flexible manipulation of existing models. As a real-world example we use the Krebs citric acid cycle. PMID:18606208

  15. Semi-automatic analysis of fire debris

    PubMed

    Touron; Malaquin; Gardebas; Nicolai

    2000-05-01

    Automated analysis of fire residues involves a strategy that deals with the wide variety of criminalistic samples received. Because the concentration of accelerant in a sample is unknown and the range of flammable products is wide, the analyst's full attention is required. Primary detection with a photoionisation detector resolves the first problem by determining the right method to use: either the less sensitive classical head-space determination, or adsorption on an active charcoal tube, a method better adapted to low concentrations. The latter method is suitable for automatic thermal desorption (ATD400), avoiding any risk of cross-contamination. A PONA column (50 m × 0.2 mm i.d.) allows the separation of volatile hydrocarbons from C1 to C15 and the updating of a database. A specific second column is used for heavy hydrocarbons. Heavy products (C13 to C40) were extracted from residues using a very small amount of pentane, concentrated to 1 ml at 50 °C and then placed on an automatic carousel. Comparison of flammables with reference chromatograms provided the expected identification, supported where necessary by mass spectrometry. This analytical strategy belongs to the IRCGN quality program, resulting in the analysis of 1500 samples per year by two technicians. PMID:10802196

  16. Automatic cortical thickness analysis on rodent brain

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Ehlers, Cindy; Crews, Fulton; Niethammer, Marc; Budin, Francois; Paniagua, Beatriz; Sulik, Kathy; Johns, Josephine; Styner, Martin; Oguz, Ipek

    2011-03-01

    Localized differences in the cortex are among the most useful morphometric traits in human and animal brain studies. There are many tools and methods already developed to automatically measure and analyze cortical thickness for the human brain. However, these tools cannot be directly applied to rodent brains due to the different scales; even adult rodent brains are 50 to 100 times smaller than human brains. This paper describes an algorithm for automatically measuring the cortical thickness of mouse and rat brains. The algorithm consists of three steps: segmentation, thickness measurement, and statistical analysis among experimental groups. The segmentation step provides the neocortex separation from other brain structures and thus is a preprocessing step for the thickness measurement. In the thickness measurement step, the thickness is computed by solving a Laplacian PDE and a transport equation. The Laplacian PDE first creates streamlines as an analogy of cortical columns; the transport equation computes the length of the streamlines. The result is stored as a thickness map over the neocortex surface. For the statistical analysis, it is important to sample thickness at corresponding points. This is achieved by the particle correspondence algorithm which minimizes entropy between dynamically moving sample points called particles. Since the computational cost of the correspondence algorithm may limit the number of corresponding points, we use thin-plate spline based interpolation to increase the number of corresponding sample points. As a driving application, we measured the thickness difference to assess the effects of adolescent intermittent ethanol exposure that persist into adulthood and performed a t-test between the control and exposed rat groups. We found significantly differing regions in both hemispheres.

  17. Automatic variance analysis of multistage care pathways.

    PubMed

    Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T

    2014-01-01

    A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280
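The alignment step can be illustrated with a toy left-to-right hidden Markov model decoded by the Viterbi algorithm; the stage names, activities, and probabilities below are invented for illustration and are not from the paper:

```python
# Align an observed activity trace to ordered care stages with Viterbi decoding.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely stage sequence for an observed activity trace."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, path = max(
                (V[-1][p][0] * trans_p[p].get(s, 0.0) * emit_p[s].get(o, 1e-9),
                 V[-1][p][1] + [s])
                for p in states)
            row[s] = (prob, path)
        V.append(row)
    return max(V[-1].values())[1]

stages = ["admission", "treatment", "discharge"]
start = {"admission": 1.0, "treatment": 0.0, "discharge": 0.0}
# Left-to-right transitions: a stage repeats or moves forward, never back.
trans = {"admission": {"admission": 0.5, "treatment": 0.5},
         "treatment": {"treatment": 0.6, "discharge": 0.4},
         "discharge": {"discharge": 1.0}}
emit = {"admission": {"triage": 0.7, "history": 0.3},
        "treatment": {"diuretic": 0.5, "echo": 0.5},
        "discharge": {"summary": 0.9, "echo": 0.1}}

trace = ["triage", "history", "diuretic", "echo", "summary"]
print(viterbi(trace, stages, start, trans, emit))
```

With stage labels assigned, each stage's activities can be compared against the planned pathway to report additions, absences, and violated constraints.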

  18. Remote weapon station for automatic target recognition system demand analysis

    NASA Astrophysics Data System (ADS)

    Lei, Zhang; Li, Sheng-cai; Shi, Cai

    2015-08-01

    This paper introduces the basic composition and main advantages of a remote weapon station and analyzes the practical significance of an image-based automatic target recognition system for such a station. It then elaborates the development demand for the key technologies underlying image-based automatic target recognition: photoelectric stabilization, multi-sensor image fusion, integrated control, target image enhancement, target behavior risk analysis, intelligent image-based automatic target recognition algorithms, and micro-sensor technology.

  19. Automatic analysis of the corneal ulcer

    NASA Astrophysics Data System (ADS)

    Ventura, Liliane; Chiaradia, Caio; Faria de Sousa, Sidney J.

    1999-06-01

    A very common disease in agricultural countries is the corneal ulcer. Particularly in public hospitals, several patients present with this kind of pathology every week. One of the most important features in diagnosing the regression of the disease is determining the reduction of the affected area. An automatic system (optics and software), attached to a slit lamp, has been developed to determine the area of the ulcer automatically and to follow up its regression. The clinical procedure to isolate the ulcer is still performed, but the measurement is fast enough not to cause the patient the discomfort that the traditional evaluation does. The system has been used for the last 6 months in a hospital that sees about 80 patients with corneal ulcer per week. Patient follow-up (an indispensable criterion for curing the disease) has been improved by the system and has helped guarantee the success of treatment.

  20. Automatic basal slice detection for cardiac analysis

    NASA Astrophysics Data System (ADS)

    Paknezhad, Mahsa; Marchesseau, Stephanie; Brown, Michael S.

    2016-03-01

    Identification of the basal slice in cardiac imaging is a key step to measuring the ejection fraction (EF) of the left ventricle (LV). Despite research on cardiac segmentation, basal slice identification is routinely performed manually. Manual identification, however, has been shown to have high inter-observer variability, with a variation of the EF by up to 8%. Therefore, an automatic way of identifying the basal slice is still required. Prior published methods operate by automatically tracking the mitral valve points from the long-axis view of the LV. These approaches assumed that the basal slice is the first short-axis slice below the mitral valve. However, guidelines published in 2013 by the Society for Cardiovascular Magnetic Resonance indicate that the basal slice is the uppermost short-axis slice with more than 50% myocardium surrounding the blood cavity. Consequently, these existing methods at times identify the incorrect short-axis slice. Correct identification of the basal slice under these guidelines is challenging due to the poor image quality and blood movement during image acquisition. This paper proposes an automatic tool that focuses on the two-chamber slice to find the basal slice. To this end, an active shape model is trained to automatically segment the two-chamber view for 51 samples using the leave-one-out strategy. The basal slice was detected using temporal binary profiles created for each short-axis slice from the segmented two-chamber slice. From the 51 tested samples, 92% and 84% of detection results were accurate at the end-systolic and the end-diastolic phases of the cardiac cycle, respectively.
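The 2013 guideline rule the paper cites can be stated compactly; the sketch below (not the paper's detection algorithm) picks the first short-axis slice, ordered from base toward apex, with more than 50% myocardial coverage:

```python
# Basal-slice selection under the >50% myocardial-coverage rule.

def basal_slice(coverage_fractions):
    """coverage_fractions[i]: fraction of the blood-cavity perimeter covered
    by myocardium in short-axis slice i, ordered from base toward apex.
    Returns the index of the first slice exceeding 50%, or None."""
    for i, frac in enumerate(coverage_fractions):
        if frac > 0.5:
            return i
    return None

# Toy stack: the two most basal slices sit above the mitral valve plane.
stack = [0.20, 0.45, 0.80, 0.95, 1.00]
print(basal_slice(stack))  # -> 2
```

The hard part in practice, which the paper addresses, is estimating those coverage fractions reliably from noisy images across the cardiac cycle.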

  1. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  2. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several target types and the corresponding echo-detection algorithms have been studied, a survey is needed to identify a suitable algorithm for an ionospheric sounder. This paper focuses on automatic echo-detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies were selected according to typical ionospheric and detection conditions.
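A cell-averaging adaptive threshold is one common form of the detectors compared in such work; a minimal sketch (the window size, factor, and signal are invented, not taken from the paper) is:

```python
# Cell-averaging adaptive threshold: a sample counts as an echo when it
# exceeds the local noise average by a fixed factor.

def adaptive_detect(samples, window=3, factor=3.0):
    hits = []
    for i, v in enumerate(samples):
        lo, hi = max(0, i - window), min(len(samples), i + window + 1)
        # Estimate local noise from the neighbourhood, excluding the sample.
        neighbours = [s for j, s in enumerate(samples[lo:hi], lo) if j != i]
        noise = sum(neighbours) / len(neighbours)
        if v > factor * noise:
            hits.append(i)
    return hits

signal = [1, 1, 2, 1, 20, 1, 2, 1, 1]   # one strong echo in noise
print(adaptive_detect(signal))  # -> [4]
```

A fixed global threshold would either miss weak echoes or fire on noise when conditions change; the adaptive form tracks the local noise floor instead.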

  3. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be worth adapting as a routine in hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (repeatability of the analysis).

  4. Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.

    PubMed

    Denecke, Kerstin

    2016-01-01

    Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains unexploited, since retrieval and analysis are difficult and time-consuming and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. To realise an automated analysis, natural language processing needs to be applied. We therefore analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for automatic analysis of incident reports, but challenges remain to be solved. PMID:27139389

  5. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm is discussed for automatically generating, from solid models of mechanical parts, finite element meshes organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work). Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized, and some results are presented from an experimental closed-loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively. The implementation of 3-D work is briefly discussed.
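The quadtree organization can be illustrated with a toy refinement loop; the refinement predicate below is invented for illustration (the paper drives refinement with error estimates, not geometry):

```python
# Tiny quadtree sketch: a square cell subdivides into four children
# until a refinement predicate stops firing or a depth limit is reached.

class Cell:
    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def refine(self, needs_refinement, max_depth=4):
        if self.depth < max_depth and needs_refinement(self):
            half = self.size / 2
            self.children = [
                Cell(self.x + dx * half, self.y + dy * half, half,
                     self.depth + 1)
                for dx in (0, 1) for dy in (0, 1)]
            for child in self.children:
                child.refine(needs_refinement, max_depth)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine only near the origin, mimicking adaptivity around a feature.
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda c: c.x == 0.0 and c.y == 0.0)
print(len(root.leaves()))  # -> 13
```

The leaves are the mesh cells; because each cell knows its position and depth, substructures can be addressed and remeshed locally without touching the rest of the tree.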

  6. Project Report: Automatic Sequence Processor Software Analysis

    NASA Technical Reports Server (NTRS)

    Benjamin, Brandon

    2011-01-01

    The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on the spacecraft. The Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system composed of scripts written in Perl, C shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary and then sends the resultant information to be radiated to the spacecraft.

  7. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  8. Profiling School Shooters: Automatic Text-Based Analysis.

    PubMed

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  9. Application of software technology to automatic test data analysis

    NASA Technical Reports Server (NTRS)

    Stagner, J. R.

    1991-01-01

    The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.

  10. Trends of Science Education Research: An Automatic Content Analysis

    ERIC Educational Resources Information Center

    Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien

    2010-01-01

    This study used scientometric methods to conduct an automatic content analysis on the development trends of science education research from the published articles in the four journals of "International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education" from 1990 to 2007. The…

  11. Effectiveness of an Automatic Tracking Software in Underwater Motion Analysis

    PubMed Central

    Magalhaes, Fabrício A.; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia

    2013-01-01

Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor whenever the distance between the calculated marker coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis. Key Points The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is to require limited human
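The 4-pixel intervention criterion used above to quantify the degree of automation can be sketched as follows. The coordinates are hypothetical; `math.dist` simply computes the Euclidean pixel distance between the automatically tracked and the reference (manually digitised) marker positions.

```python
import math

def intervention_rate(auto_coords, ref_coords, tol_px=4.0):
    """Fraction of frames where the auto-tracked marker drifts more than
    tol_px pixels from the reference position, i.e. frames that would
    require a manual correction by the operator."""
    flags = [
        math.dist(a, r) > tol_px          # Euclidean pixel distance
        for a, r in zip(auto_coords, ref_coords)
    ]
    return sum(flags) / len(flags)

# Hypothetical tracked vs. manually digitised positions (pixels)
auto = [(100.0, 50.0), (103.0, 52.0), (110.0, 60.0), (99.0, 49.5)]
ref  = [(100.0, 50.0), (101.0, 51.0), (104.0, 55.0), (99.5, 49.0)]
print(intervention_rate(auto, ref))  # frame 3 drifts ~7.8 px -> 0.25
```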

  12. Automatic analysis of double coronal mass ejections from coronagraph images

    NASA Astrophysics Data System (ADS)

    Jacobs, Matthew; Chang, Lin-Ching; Pulkkinen, Antti; Romano, Michelangelo

    2015-11-01

    Coronal mass ejections (CMEs) can have major impacts on man-made technology and humans, both in space and on Earth. These impacts have created a high interest in the study of CMEs in an effort to detect and track events and forecast the CME arrival time to provide time for proper mitigation. A robust automatic real-time CME processing pipeline is greatly desired to avoid laborious and subjective manual processing. Automatic methods have been proposed to segment CMEs from coronagraph images and estimate CME parameters such as their heliocentric location and velocity. However, existing methods suffered from several shortcomings such as the use of hard thresholding and an inability to handle two or more CMEs occurring within the same coronagraph image. Double-CME analysis is a necessity for forecasting the many CME events that occur within short time frames. Robust forecasts for all CME events are required to fully understand space weather impacts. This paper presents a new method to segment CME masses and pattern recognition approaches to differentiate two CMEs in a single coronagraph image. The proposed method is validated on a data set of 30 halo CMEs, with results showing comparable ability in transient arrival time prediction accuracy and the new ability to automatically predict the arrival time of a double-CME event. The proposed method is the first automatic method to successfully calculate CME parameters from double-CME events, making this automatic method applicable to a wider range of CME events.

  13. Automatic recognition and analysis of synapses. [in brain tissue

    NASA Technical Reports Server (NTRS)

    Ungerleider, J. A.; Ledley, R. S.; Bloom, F. E.

    1976-01-01

    An automatic system for recognizing synaptic junctions would allow analysis of large samples of tissue for the possible classification of specific well-defined sets of synapses based upon structural morphometric indices. In this paper the three steps of our system are described: (1) cytochemical tissue preparation to allow easy recognition of the synaptic junctions; (2) transmitting the tissue information to a computer; and (3) analyzing each field to recognize the synapses and make measurements on them.

  14. Development of an automatic identification algorithm for antibiogram analysis.

    PubMed

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. The main goal of this work is to propose an Automatic Identification Algorithm (AIA) to overcome some of the issues associated with the disc diffusion method. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using susceptibility tests, which were performed for 12 different antibiotics for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and the results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and the correlation between AIA-based and human-based readings. Agreement was observed in 88% of cases and 89% of the tests showed no difference or a <4 mm difference between AIA and human analysis, exhibiting a correlation index of 0.85 for all images, 0.90 for standards, and 0.80 for oddities, with no significant difference between the automatic and manual methods. AIA resolved some reading problems such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, and formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories. PMID:26513468
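The inter-reader agreement statistic mentioned above, the weighted kappa index, can be sketched for ordinal categories as follows. This is a generic linearly weighted kappa, not the paper's exact computation, and the category labels are hypothetical.

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Linearly weighted kappa between two readers over ordinal
    categories; 1.0 is perfect agreement, 0.0 chance-level agreement."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    obs = [[0.0] * k for _ in range(k)]          # observed proportions
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(row) for row in obs]               # marginals, reader A
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: abs(i - j) / (k - 1)        # linear disagreement weight
    disagree = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - disagree / expected
```

For identical readings the statistic is exactly 1.0; systematic disagreement drives it toward (and below) 0.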

  15. Automatic Generation of User Material Subroutines for Biomechanical Growth Analysis

    PubMed Central

    Young, Jonathan M.; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A.; Perucchio, Renato

    2010-01-01

    Background The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis (FEA) package Abaqus allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. Method To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package Mathematica, and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress–stretch response of a material defined by a Fung-Orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in Abaqus. The Mathematica UMAT generator is then extended to include continuum growth, by adding a growth subroutine to the automatically generated UMAT. Results The Mathematica UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT we simulate the growth-based bending of a bilayered bar with differing fiber directions in a non-growing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. Conclusions The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for biomechanical growth analysis. PMID:20887023

  16. Cernuc: A program for automatic high-resolution radioelemental analysis

    NASA Astrophysics Data System (ADS)

    Roca, V.; Terrasi, F.; Moro, R.; Sorrentino, G.

    1981-04-01

A computer program capable of qualitative and quantitative radioelemental analysis with high accuracy, a high degree of automation, and great ease of use is presented. It has been produced for Ge(Li) gamma-ray spectroscopy and can be used for X-ray spectroscopy as well. This program provides automatic searching and fitting of peaks, energy and intensity determination, and identification and calculation of activities of the radioisotopes present in the sample. The last step is carried out by using a radionuclides library. The problem of a gamma line being assigned to more than one nuclide is solved by searching for the least-squares solution of a set of equations for the activities of the isotopes. Two versions of this program have been written, to be run batchwise on a medium-sized computer (UNIVAC 1106) and interactively on a small computer (HP 2100A).
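The least-squares step for shared gamma lines can be sketched as solving an overdetermined linear system: rows are measured line intensities, columns are candidate nuclides, and the unknowns are the activities. The matrix entries below are hypothetical emission coefficients, and the normal-equations solver is a generic sketch, not Cernuc's implementation.

```python
def lstsq(A, b):
    """Solve min ||A x - b|| via the normal equations (A^T A) x = A^T b
    with Gaussian elimination; adequate for the small systems that arise
    when a few gamma lines are shared among a few nuclides."""
    n = len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(len(A))) for i in range(n)]
    M = [row[:] + [Atb[i]] for i, row in enumerate(AtA)]
    for c in range(n):                            # elimination with pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for i in reversed(range(n)):                  # back-substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Hypothetical system: 3 gamma lines, 2 nuclides (coefficients assumed)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
activities = lstsq(A, b)   # consistent system -> exact activities [1, 2]
```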

  17. Facilitator control as automatic behavior: A verbal behavior analysis

    PubMed Central

    Hall, Genae A.

    1993-01-01

    Several studies of facilitated communication have demonstrated that the facilitators were controlling and directing the typing, although they appeared to be unaware of doing so. Such results shift the focus of analysis to the facilitator's behavior and raise questions regarding the controlling variables for that behavior. This paper analyzes facilitator behavior as an instance of automatic verbal behavior, from the perspective of Skinner's (1957) book Verbal Behavior. Verbal behavior is automatic when the speaker or writer is not stimulated by the behavior at the time of emission, the behavior is not edited, the products of behavior differ from what the person would produce normally, and the behavior is attributed to an outside source. All of these characteristics appear to be present in facilitator behavior. Other variables seem to account for the thematic content of the typed messages. These variables also are discussed. PMID:22477083

  18. An integrated spatial signature analysis and automatic defect classification system

    SciTech Connect

    Gleason, S.S.; Tobin, K.W.; Karnowski, T.P.

    1997-08-01

An integrated Spatial Signature Analysis (SSA) and automatic defect classification (ADC) system for improved automatic semiconductor wafer manufacturing characterization is presented. Both the SSA and ADC methodologies are reviewed, and then the benefits of an integrated system are described, namely, focused ADC and signature-level sampling. Focused ADC involves the use of SSA information on a defect signature to reduce the number of possible classes that an ADC system must consider, thus improving the ADC system performance. Signature-level sampling improves the ADC system throughput and accuracy by intelligently sampling defects within a given spatial signature for subsequent off-line, high-resolution ADC. A complete example of wafermap characterization via an integrated SSA/ADC system is presented, where a wafer with 3274 defects is completely characterized by revisiting only 25 defects on an off-line ADC review station. 13 refs., 7 figs.

  19. Spectral analysis methods for automatic speech recognition applications

    NASA Astrophysics Data System (ADS)

    Parinam, Venkata Neelima Devi

    In this thesis, we evaluate the front-end of Automatic Speech Recognition (ASR) systems, with respect to different types of spectral processing methods that are extensively used. A filter bank approach for front end spectral analysis is one of the common methods used for spectral analysis. In this work we describe and evaluate spectral analysis based on Mel and Gammatone filter banks. These filtering methods are derived from auditory models and are thought to have some advantages for automatic speech recognition work. Experimentally, however, we show that direct use of FFT spectral values is just as effective as using either Mel or Gammatone filter banks, provided that the features extracted from the FFT spectral values take into account a Mel or Mel-like frequency scale. It is also shown that trajectory features based on sliding block of spectral features, computed using either FFT or filter bank spectral analysis are considerably more effective, in terms of ASR accuracy, than are delta and delta-delta terms often used for ASR. Although there is no major performance disadvantage to using a filter bank, simplicity of analysis is a reason to eliminate this step in speech processing. These assertions hold for both clean and noisy speech.
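The Mel or Mel-like frequency scale referred to above warps linear frequency so that resolution is finer at low frequencies. A common formula (an assumption here; variants exist) and the resulting uniformly mel-spaced filter centres can be sketched as:

```python
import math

def hz_to_mel(f):
    # widely used O'Shaughnessy-style mel formula
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Centre frequencies of 10 filters spanning 0-8000 Hz,
# spaced uniformly on the mel scale
lo, hi, n = hz_to_mel(0.0), hz_to_mel(8000.0), 10
centres = [mel_to_hz(lo + (hi - lo) * (i + 1) / (n + 1)) for i in range(n)]
```

The centres crowd toward low frequencies, which is the Mel-like warping that features computed directly from FFT spectral values must reproduce to match filter-bank performance.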

  20. Automatic 3-D grayscale volume matching and shape analysis.

    PubMed

    Guétat, Grégoire; Maitre, Matthieu; Joly, Laurène; Lai, Sen-Lin; Lee, Tzumin; Shinagawa, Yoshihisa

    2006-04-01

Recently, shape matching in three dimensions (3-D) has been gaining importance in a wide variety of fields such as computer graphics, computer vision, medicine, and biology, with applications such as object recognition, medical diagnosis, and quantitative morphological analysis of biological operations. Automatic shape matching techniques developed in the field of computer graphics handle object surfaces, but ignore intensities of inner voxels. In biology and medical imaging, voxel intensities obtained by computed tomography (CT), magnetic resonance imagery (MRI), and confocal microscopes are important to determine point correspondences. Nevertheless, most biomedical volume matching techniques require human interaction, and automatic methods assume matched objects to have very similar shapes so as to avoid combinatorial explosions of point correspondences. This article is aimed at decreasing the gap between the two fields. The proposed method automatically finds dense point correspondences between two grayscale volumes; i.e., it finds a correspondent in the second volume for every voxel in the first volume, based on the voxel intensities. Multiresolution pyramids are introduced to reduce computational load and handle highly plastic objects. We calculate the average shape of a set of similar objects and give a measure of plasticity to compare them. Matching results can also be used to generate intermediate volumes for morphing. We use various data to validate the effectiveness of our method: we calculate the average shape and plasticity of a set of fly brain cells, and we also match a human skull and an orangutan skull. PMID:16617625

  1. Feature++: Automatic Feature Construction for Clinical Data Analysis.

    PubMed

    Sun, Wen; Hao, Bibo; Yu, Yiqin; Li, Jing; Hu, Gang; Xie, Guotong

    2016-01-01

    With the rapid growth of clinical data and knowledge, feature construction for clinical analysis becomes increasingly important and challenging. Given a clinical dataset with up to hundreds or thousands of columns, the traditional manual feature construction process is usually too labour intensive to generate a full spectrum of features with potential values. As a result, advanced large-scale data analysis technologies, such as feature selection for predictive modelling, cannot be fully utilized for clinical data analysis. In this paper, we propose an automatic feature construction framework for clinical data analysis, namely, Feature++. It leverages available public knowledge to understand the semantics of the clinical data, and is able to integrate external data sources to automatically construct new features based on predefined rules and clinical knowledge. We demonstrate the effectiveness of Feature++ in a typical predictive modelling use case with a public clinical dataset, and the results suggest that the proposed approach is able to fulfil typical feature construction tasks with minimal dataset specific configurations, so that more accurate models can be obtained from various clinical datasets in a more efficient way. PMID:27577443

  2. Automatic selection of region of interest for radiographic texture analysis

    NASA Astrophysics Data System (ADS)

    Lan, Li; Giger, Maryellen L.; Wilkie, Joel R.; Vokes, Tamara J.; Chen, Weijie; Li, Hui; Lyons, Tracy; Chinander, Michael R.; Pham, Ann

    2007-03-01

    We have been developing radiographic texture analysis (RTA) for assessing osteoporosis and the related risk of fracture. Currently, analyses are performed on heel images obtained from a digital imaging device, the GE/Lunar PIXI, that yields both the bone mineral density (BMD) and digital images (0.2-mm pixels; 12-bit quantization). RTA is performed on the image data in a region-of-interest (ROI) placed just below the talus in order to include the trabecular structure in the analysis. We have found that variations occur from manually selecting this ROI for RTA. To reduce the variations, we present an automatic method involving an optimized Canny edge detection technique and parameterized bone segmentation, to define bone edges for the placement of an ROI within the predominantly calcaneus portion of the radiographic heel image. The technique was developed using 1158 heel images and then tested on an independent set of 176 heel images. Results from a subjective analysis noted that 87.5% of ROI placements were rated as "good". In addition, an objective overlap measure showed that 98.3% of images had successful ROI placements as compared to placement by an experienced observer at an overlap threshold of 0.4. In conclusion, our proposed method for automatic ROI selection on radiographic heel images yields promising results and the method has the potential to reduce intra- and inter-observer variations in selecting ROIs for radiographic texture analysis.
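The objective overlap measure with its 0.4 success threshold can be illustrated with intersection-over-union of two axis-aligned ROIs. The paper does not specify its exact overlap definition, so IoU here is an assumption, and the ROI coordinates are hypothetical.

```python
def roi_overlap(a, b):
    """Overlap between two axis-aligned ROIs (x0, y0, x1, y1), taken
    here as intersection-over-union; the study's exact overlap
    definition may differ."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Hypothetical automatic vs. expert ROI placements (pixel coordinates)
auto_roi, manual_roi = (10, 10, 60, 60), (20, 20, 70, 70)
success = roi_overlap(auto_roi, manual_roi) >= 0.4   # placement accepted
```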

  3. Automatic Analysis of Radio Meteor Events Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Roman, Victor Ştefan; Buiu, Cătălin

    2015-12-01

    Meteor Scanning Algorithms (MESCAL) is a software application for automatic meteor detection from radio recordings, which uses self-organizing maps and feedforward multi-layered perceptrons. This paper aims to present the theoretical concepts behind this application and the main features of MESCAL, showcasing how radio recordings are handled, prepared for analysis, and used to train the aforementioned neural networks. The neural networks trained using MESCAL allow for valuable detection results, such as high correct detection rates and low false-positive rates, and at the same time offer new possibilities for improving the results.

  5. Rapid automatic keyword extraction for information retrieval and analysis

    DOEpatents

Rose, Stuart J; Cowley, Wendy E; Crow, Vernon L; Cramer, Nicholas O

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
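The scoring described above, word scores from co-occurrence degree and frequency, summed over a phrase's member words, is the core of RAKE and can be sketched in a few lines. The stop-word list is a tiny hypothetical subset; real implementations use a full list.

```python
import re

STOP = {"a", "an", "and", "the", "of", "for", "in", "on", "is", "are", "to", "by"}

def rake(text):
    """Minimal RAKE sketch: split into candidate phrases at stop words,
    score each word as degree/frequency, score a phrase as the sum of
    its word scores, and return phrases sorted best-first."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, cur = [], []
    for w in words:
        if w in STOP:                     # stop words delimit candidates
            if cur:
                phrases.append(cur)
            cur = []
        else:
            cur.append(w)
    if cur:
        phrases.append(cur)
    freq, degree = {}, {}
    for p in phrases:
        for w in p:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(p)   # co-occurrence degree
    score = {w: degree[w] / freq[w] for w in freq}
    return sorted(
        ((" ".join(p), sum(score[w] for w in p)) for p in phrases),
        key=lambda kv: -kv[1],
    )

top = rake("rapid automatic keyword extraction for information retrieval and analysis")
print(top[0])  # ('rapid automatic keyword extraction', 16.0)
```

Longer multi-word phrases score highest because each member word inherits the phrase length through its degree, which is exactly the prioritization the patent describes.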

  6. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, MIroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them brings us valuable information about actual attacks and techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation towards monitoring of IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection, this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses data from each honeypot. The results of this analysis and other statistical data about malicious activity are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.

  7. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

    Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on a supervised pixel-based classification and using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of the breast density analysis of craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentage for left and right breasts, whereas a comparison of both mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that dense tissue percentage decreases over time, although we noticed that the decrease in the ratio depends on the initial amount of breast density. PMID:25720749
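The transversal analysis above reduces to a Pearson correlation between per-patient density percentages. A minimal sketch, with hypothetical density values standing in for the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical mammographic density percentages, left vs. right breast
left  = [12.0, 25.0, 40.0, 33.0, 18.0]
right = [14.0, 24.0, 38.0, 35.0, 17.0]
r = pearson_r(left, right)   # strong left/right agreement, r close to 1
```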

  8. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect the visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also reserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance of quantitative and qualitative comparisons is evaluated by three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.

  9. Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice

    PubMed Central

    Giancardo, Luca; Sona, Diego; Huang, Huiping; Sannino, Sara; Managò, Francesca; Scheggia, Diego; Papaleo, Francesco; Murino, Vittorio

    2013-01-01

Social interactions are made of complex behavioural actions that might be found in all mammalians, including humans and rodents. Recently, mouse models are increasingly being used in preclinical research to understand the biological basis of social-related pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify the social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are further employed by a learning-by-example classifier, which predicts, for each frame and for each mouse in the cage, one of the behaviours learnt from the examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluation of two human graders, obtaining comparable results. Then, we show the applicability of our technique to multiple-mice settings, using up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, similarly to ours, addresses the problem with a learning-by-examples approach. Finally, we further validated our automatic system to differentiate between C57B/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders).
Overall, these data demonstrate the validity and effectiveness of this new machine learning system in the detection of social and non-social behaviours in multiple (>2) interacting mice, and its versatility to deal with different experimental settings and

  10. DWT features performance analysis for automatic speech recognition of Urdu.

    PubMed

    Ali, Hazrat; Ahmad, Nasir; Zhou, Xianwei; Iqbal, Khalid; Ali, Sahibzada Muhammad

    2014-01-01

This paper presents work on Automatic Speech Recognition of the Urdu language, using a comparative analysis of Discrete Wavelet Transform (DWT) based features and Mel Frequency Cepstral Coefficients (MFCC). These features have been extracted for one hundred isolated words of Urdu, each word uttered by ten different speakers. The words have been selected from the most frequently used words of Urdu. A variety of ages and dialects has been covered by using a balanced corpus approach. After extraction of features, classification has been achieved by using Linear Discriminant Analysis. After the classification task, the confusion matrix obtained for the DWT features has been compared with the one obtained for Mel-Frequency Cepstral Coefficients based speech recognition. The framework has been trained and tested on speech data recorded under controlled environments. The experimental results are useful in the determination of the optimum features for the speech recognition task. PMID:25674450
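As a sketch of the DWT feature-extraction step, a single-level Haar transform splits a frame of samples into pairwise averages (approximation coefficients) and pairwise differences (detail coefficients); the orthonormal 1/√2 scaling preserves signal energy. The sample values are hypothetical.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: orthonormal pairwise averages
    (approximation) and differences (detail); signal length must be even."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# Hypothetical frame of 8 speech samples
sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt(sig)   # 4 approximation + 4 detail coefficients
```

Energy is preserved (the sums of squares of `a` and `d` together equal that of `sig`), which is one reason wavelet coefficients make stable features.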

  11. The automaticity of emotional Stroop: a meta-analysis.

    PubMed

    Phaf, R Hans; Kan, Kees-Jan

    2007-06-01

An automatic bias to threat is often invoked to account for colour-naming interference in emotional Stroop. Recent findings by McKenna and Sharma [(2004). Reversing the emotional Stroop effect reveals that it is not what it seems: The role of fast and slow components. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 382-392], however, cast doubt on the fast and non-conscious nature of emotional Stroop. Interference by threat words only occurred with colour naming in the trial subsequent to the threat trial (i.e., a "slow" effect), but not immediately (i.e., a "fast" effect, as would be predicted by the bias hypothesis). In a meta-analysis of 70 published emotional Stroop studies, the largest effects occurred when presentation of threat words was blocked, suggesting a strong contribution by slow interference. Moreover, we did not find evidence for interference in suboptimal (less conscious) presentation conditions; the only significant effects were observed in optimal (fully conscious) conditions with high-anxious non-clinical participants and patients. The emotional Stroop effect seems to rely more on a slow disengagement process than on a fast, automatic bias. PMID:17112461

  12. Automatic analysis for neuron by confocal laser scanning microscope

    NASA Astrophysics Data System (ADS)

    Satou, Kouhei; Aoki, Yoshimitsu; Mataga, Nobuko; Hensh, Takao K.; Taki, Katuhiko

    2005-12-01

    The aim of this study is to develop a system that recognizes both the macro- and microscopic configurations of nerve cells and automatically performs the necessary 3-D measurements and functional classification of spines. The acquisition of 3-D images of cranial nerves has been enabled by the use of a confocal laser scanning microscope, although the highly accurate 3-D measurements of the microscopic structures of cranial nerves and their classification based on their configurations have not yet been accomplished. In this study, in order to obtain highly accurate measurements of the microscopic structures of cranial nerves, existing positions of spines were predicted by the 2-D image processing of tomographic images. Next, based on the positions that were predicted on the 2-D images, the positions and configurations of the spines were determined more accurately by 3-D image processing of the volume data. We report the successful construction of an automatic analysis system that uses a coarse-to-fine technique to analyze the microscopic structures of cranial nerves with high speed and accuracy by combining 2-D and 3-D image analyses.

  13. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    SciTech Connect

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert, in a pipeline fashion, to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and finally, based on the derived information, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  14. Automatic Analysis of Cellularity in Glioblastoma and Correlation with ADC Using Trajectory Analysis and Automatic Nuclei Counting

    PubMed Central

    Burth, Sina; Kieslich, Pascal J.; Jungk, Christine; Sahm, Felix; Kickingereder, Philipp; Kiening, Karl; Unterberg, Andreas; Wick, Wolfgang; Schlemmer, Heinz-Peter; Bendszus, Martin; Radbruch, Alexander

    2016-01-01

    Objective Several studies have analyzed a correlation between the apparent diffusion coefficient (ADC) derived from diffusion-weighted MRI and the tumor cellularity of corresponding histopathological specimens in brain tumors with inconclusive findings. Here, we compared a large dataset of ADC and cellularity values of stereotactic biopsies of glioblastoma patients using a new postprocessing approach including trajectory analysis and automatic nuclei counting. Materials and Methods Thirty-seven patients with newly diagnosed glioblastomas were enrolled in this study. ADC maps were acquired preoperatively at 3T and coregistered to the intraoperative MRI that contained the coordinates of the biopsy trajectory. 561 biopsy specimens were obtained; corresponding cellularity was calculated by semi-automatic nuclei counting and correlated to the respective preoperative ADC values along the stereotactic biopsy trajectory which included areas of T1-contrast-enhancement and necrosis. Results There was a weak to moderate inverse correlation between ADC and cellularity in glioblastomas that varied depending on the approach towards statistical analysis: for mean values per patient, Spearman’s ρ = -0.48 (p = 0.002), for all trajectory values in one joint analysis Spearman’s ρ = -0.32 (p < 0.001). The inverse correlation was additionally verified by a linear mixed model. Conclusions Our data confirms a previously reported inverse correlation between ADC and tumor cellularity. However, the correlation in the current article is weaker than the pooled correlation of comparable previous studies. Hence, besides cell density, other factors, such as necrosis and edema might influence ADC values in glioblastomas. PMID:27467557
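    As a self-contained illustration of the statistic reported above, Spearman's ρ is simply the Pearson correlation of the ranks. The biopsy values below are invented for illustration; this is not the study's analysis code:

```python
def _ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented per-biopsy data: higher cellularity with lower ADC
adc = [1.40, 1.25, 1.10, 0.95, 0.80]
cellularity = [120, 180, 260, 300, 410]
print(round(spearman_rho(adc, cellularity), 2))  # → -1.0
```

A perfectly monotone decreasing relationship gives ρ = -1; the weaker values reported in the study (ρ ≈ -0.3 to -0.5) reflect the scatter contributed by necrosis and edema.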

  15. Automatic quantitative analysis of cardiac MR perfusion images

    NASA Astrophysics Data System (ADS)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
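    The profile-measurement step can be sketched as follows: for each myocardial position, simple semi-quantitative parameters such as peak intensity, time to peak, and maximum upslope are read off the first-pass time-intensity curve. The function name and sample values below are illustrative, not the authors' implementation:

```python
def perfusion_parameters(times, intensities):
    """Extract simple semi-quantitative parameters from a
    myocardial time-intensity profile (first-pass interval)."""
    peak = max(intensities)
    t_peak = times[intensities.index(peak)]
    # maximum upslope: largest forward finite difference
    upslope = max((intensities[i + 1] - intensities[i]) / (times[i + 1] - times[i])
                  for i in range(len(times) - 1))
    return {"peak": peak, "time_to_peak": t_peak, "max_upslope": upslope}

# Invented profile: baseline, wash-in, peak, wash-out (times in seconds)
t = [0, 1, 2, 3, 4, 5, 6]
s = [10, 12, 30, 55, 60, 52, 45]
print(perfusion_parameters(t, s))
```

Such parameters are what the paper visualizes as color overlays on the original images.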

  16. Automatic measuring device for octave analysis of noise

    NASA Technical Reports Server (NTRS)

    Memnonov, D. L.; Nikitin, A. M.

    1973-01-01

    An automatic decoder is described that counts noise levels by pulse counters and forms audio signals proportional in duration to the total noise level or to one of the octave noise levels. Automatic tenfold repetition of the measurement cycle is provided at each measurement point before the transition to a new point is made.

  17. Ganalyzer: A Tool for Automatic Galaxy Image Analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
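    The "slopes of the peaks" step can be illustrated with a toy calculation: if the detected peaks in the radial intensity plot are (radius, angle) pairs, the slope of a least-squares line through them is a simple spirality measure (near zero for an elliptical galaxy, larger for a spiral). This sketch and its data are ours, not Ganalyzer's code:

```python
def peak_angle_slope(peaks):
    """Least-squares slope of detected peak positions (radius, angle)
    in a radial intensity plot; a simple spirality measure."""
    n = len(peaks)
    mr = sum(r for r, _ in peaks) / n
    ma = sum(a for _, a in peaks) / n
    num = sum((r - mr) * (a - ma) for r, a in peaks)
    den = sum((r - mr) ** 2 for r, _ in peaks)
    return num / den

# Invented arm peaks: the angle drifts with radius for a spiral,
# but stays constant for an elliptical galaxy
spiral = [(10, 0.50), (20, 0.80), (30, 1.10), (40, 1.40)]
elliptical = [(10, 0.70), (20, 0.70), (30, 0.70), (40, 0.70)]
print(peak_angle_slope(spiral), peak_angle_slope(elliptical))
```

Thresholding this slope then yields a coarse morphological classification.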

  19. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
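    The model selection step can be illustrated with the least-squares form of the Akaike information criterion, AIC = n ln(RSS/n) + 2k, where k counts the kernel basis functions: a richer kernel lowers the residual sum of squares but pays a complexity penalty. The candidate numbers below are invented:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for least-squares fits:
    n * ln(RSS / n) + 2k, where k is the number of basis functions."""
    return n * math.log(rss / n) + 2 * k

def select_kernel(candidates, n_pixels):
    """Pick the candidate kernel model (k, RSS) with minimal AIC."""
    return min(candidates, key=lambda c: aic(c["rss"], n_pixels, c["k"]))

# Invented candidates: more delta basis functions fit slightly better
# (lower RSS) but are penalised for the added complexity
candidates = [
    {"k": 9,  "rss": 5000.0},
    {"k": 25, "rss": 3000.0},
    {"k": 49, "rss": 2999.0},
]
best = select_kernel(candidates, n_pixels=10000)
print(best["k"])  # → 25
```

Here the 49-parameter kernel barely improves the fit over the 25-parameter one, so the penalty term makes the simpler model win.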

  20. Image analysis techniques associated with automatic data base generation.

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  1. Trends of Science Education Research: An Automatic Content Analysis

    NASA Astrophysics Data System (ADS)

    Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien

    2010-08-01

    This study used scientometric methods to conduct an automatic content analysis of the development trends of science education research, based on articles published from 1990 to 2007 in four journals: International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education. A multi-stage clustering technique was employed to investigate the topics, development trends, and contributors through which these journal publications constituted science education as a research field. This study found that Conceptual Change & Concept Mapping was the most studied research topic, although the number of publications on it slightly declined in the 2000s. Studies in the themes of Professional Development, Nature of Science and Socio-Scientific Issues, and Conceptual Change and Analogy were found to be gaining attention over the years. This study also found that, as reflected in the most cited references, the supporting disciplines and theories of science education research are constructivist learning, cognitive psychology, pedagogy, and philosophy of science.

  2. Automatic analysis of ciliary beat frequency using optical flow

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Lechner, Manuel; Werther, Tobias; Horak, Fritz; Hummel, Johann; Birkfellner, Wolfgang

    2012-02-01

    Ciliary beat frequency (CBF) can be a useful parameter for the diagnosis of several diseases, e.g., primary ciliary dyskinesia (PCD). CBF computation is usually done by manual evaluation of high-speed video sequences: a tedious, observer-dependent, and not very accurate procedure. We used OpenCV's pyramidal implementation of the Lucas-Kanade algorithm for optical flow computation and applied it to selected objects to follow their movements. The objects were chosen by their contrast, using the corner detection of Shi and Tomasi. Discriminating between background/noise and cilia with a frequency histogram allowed us to compute the CBF. Frequency analysis was done using the Fourier transform in MATLAB; the correct number of Fourier summands was found from the slope of an approximation curve. The method proved usable for distinguishing between healthy and diseased samples. However, difficulties remain in automatically identifying the cilia and in finding enough high-contrast cilia in the image. Furthermore, some of the higher-contrast cilia are lost (and sometimes re-found) by the method, and an easy way to select the correct sub-path of a point's trajectory has yet to be found for cases where the slope method does not work.
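    The final step, picking a beat frequency from the Fourier spectrum of a tracked point's displacement, can be sketched in a few lines. This is our own toy DFT peak-picker with an invented sample rate and signal, not the authors' MATLAB code:

```python
import math

def dominant_frequency(signal, fps):
    """Return the nonzero DFT frequency (Hz) with the largest magnitude,
    a crude stand-in for a ciliary beat frequency estimate."""
    n = len(signal)
    mean = sum(signal) / n
    centred = [x - mean for x in signal]
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centred))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n  # convert bin index to Hz

# Synthetic track: a point oscillating at 10 Hz, sampled at 100 frames/s
fps = 100
track = [math.sin(2 * math.pi * 10 * i / fps) for i in range(100)]
print(dominant_frequency(track, fps))  # → 10.0
```

In practice the displacement of each tracked corner would be projected onto its main axis of motion before the transform, and an FFT would replace this O(n²) DFT.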

  3. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  4. Variable frame rate analysis for automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Tan, Zheng-Hua

    2007-09-01

    In this paper we investigate the use of variable frame rate (VFR) analysis in automatic speech recognition (ASR). First, we review the VFR technique and analyze its behavior. It is experimentally shown that VFR improves ASR performance for signals with low signal-to-noise ratios, since it generates improved acoustic models and substantially reduces insertion and substitution errors, although it may increase deletion errors. It is also underlined that the match between the average frame rate and the number of hidden Markov model states is critical in implementing VFR. Secondly, we analyze an effective VFR method that uses a cumulative, weighted cepstral-distance criterion for frame selection and present a revision of it. Lastly, the revised VFR method is combined with spectral- and cepstral-domain enhancement methods, including minimum statistics noise estimation (MSNE) based spectral subtraction and the cepstral mean subtraction, variance normalization and ARMA filtering (MVA) process. Experiments on the Aurora 2 database show that VFR is highly complementary to the enhancement methods: enhancement of speech both facilitates the frame selection in VFR and provides de-noised speech for recognition.
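    A common way to implement such a cumulative-distance criterion is to accumulate the frame-to-frame cepstral distance and emit a frame each time a threshold is crossed, so that rapid spectral transitions are sampled densely and steady regions sparsely. A minimal sketch (our own simplification, with toy one-dimensional "cepstra"):

```python
def select_frames(frames, threshold):
    """Variable frame rate selection: accumulate the cepstral distance
    between consecutive frames and keep a frame whenever the
    accumulated distance exceeds the threshold (then reset)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    kept = [0]  # always keep the first frame
    acc = 0.0
    for i in range(1, len(frames)):
        acc += dist(frames[i], frames[i - 1])
        if acc >= threshold:
            kept.append(i)
            acc = 0.0
    return kept

# Toy 1-D "cepstra": a steady region followed by a rapid transition
frames = [[0.0], [0.0], [0.1], [1.0], [2.0], [2.0]]
print(select_frames(frames, threshold=0.5))  # → [0, 3, 4]
```

The paper's weighted criterion additionally scales each distance (e.g., by an energy weight) before accumulation.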

  5. Sentence Similarity Analysis with Applications in Automatic Short Answer Grading

    ERIC Educational Resources Information Center

    Mohler, Michael A. G.

    2012-01-01

    In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and also introduce a novel technique to improve the performance of the system by integrating…

  6. volBrain: An Online MRI Brain Volumetry System

    PubMed Central

    Manjón, José V.; Coupé, Pierrick

    2016-01-01

    The amount of medical image data produced in clinical and research settings is growing rapidly, resulting in a vast amount of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently seen many advances, with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods, showing very competitive results. PMID:27512372
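    Multi-atlas label fusion can be illustrated in its simplest form: voxel-wise majority voting over labels transferred from registered atlases. volBrain's actual fusion is more elaborate (patch-based, similarity-weighted); this toy example is ours:

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse voxel-wise labels from several registered atlases by
    majority vote, the simplest multi-atlas label fusion rule."""
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(atlas[v] for atlas in atlas_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three invented atlases labelling five voxels
# (0 = background, 1 = grey matter, 2 = white matter)
atlases = [
    [0, 1, 1, 2, 2],
    [0, 1, 2, 2, 2],
    [0, 1, 1, 1, 2],
]
print(majority_vote_fusion(atlases))  # → [0, 1, 1, 2, 2]
```

Volumes then follow directly by counting fused labels and multiplying by the voxel size.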

  8. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, in international negotiations on separating disputed areas, the match between the delimitation line and the real terrain is adjusted only manually, which not only consumes much time and labor but also cannot ensure high precision. This paper therefore explores automatic matching between the two and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is derived through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic matching model is built via Model Builder so that it can be shared and reused. Finally, the result of automatic matching is analyzed from several aspects, including delimitation laws and two-sided benefits. We conclude that the automatic matching method is feasible and effective.
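    Least-Cost Path Analysis of this kind reduces to a shortest-path search over a cost raster. A minimal sketch using Dijkstra's algorithm on a 4-connected grid (the cost surface below is invented; GIS implementations also handle diagonal moves and anisotropic costs):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a 4-connected cost grid: the cost of a
    path is the sum of the costs of the cells it visits."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Invented cost surface: low cost along terrain features, high elsewhere
cost = [
    [1, 9, 9],
    [1, 1, 9],
    [9, 1, 1],
]
print(least_cost_path(cost, (0, 0), (2, 2)))
```

The paper's cost layer would encode the legal and terrain constraints, so the minimal-cost path becomes the candidate delimitation line.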

  9. Automatic Crowd Analysis from Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Reinartz, P.

    2011-04-01

    Recently, automatic detection of people crowds from images has become a very important research field, since it can provide crucial information, especially for police departments and crisis management teams. Due to the importance of the topic, many researchers have tried to solve this problem using street cameras. However, these cameras cannot be used to monitor very large outdoor public events. To bring a solution to the problem, herein we propose a novel approach to detect crowds automatically from remotely sensed images, especially very high resolution satellite images. To do so, we use a local-feature-based probabilistic framework. We extract local features from the color components of the input image. In order to eliminate redundant local features coming from other objects in the given scene, we apply a feature selection method. For feature selection purposes, we benefit from three different types of information: a digital elevation model (DEM) of the region, automatically generated from stereo satellite images; possible street segments, obtained by segmentation; and shadow information. After eliminating redundant local features, the remaining features are used to detect individual persons. Those local feature coordinates are also taken as observations of the probability density function (pdf) of the crowds to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding pdf, which gives us information about dense crowd and people locations. We test our algorithm using WorldView-2 satellite images over the cities of Cairo and Munich. We also provide test results on airborne images for comparison of the detection accuracy. Our experimental results indicate the possible usage of the proposed approach in real-life mass events.
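    The density estimation step can be sketched in one dimension: each selected feature coordinate contributes a Gaussian kernel, and the summed density peaks where people cluster. A toy example (the coordinates and bandwidth are invented; the paper uses an adaptive two-dimensional kernel):

```python
import math

def kde_1d(points, bandwidth, xs):
    """Gaussian kernel density estimate evaluated at positions xs,
    a 1-D sketch of a crowd-density surface built from feature coordinates."""
    n = len(points)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                       for p in points)
            for x in xs]

# Invented feature x-coordinates: a dense cluster near 10, one loner at 30
features = [9.0, 10.0, 10.5, 11.0, 30.0]
density = kde_1d(features, bandwidth=1.5, xs=[10.0, 20.0, 30.0])
peak = max(range(3), key=lambda i: density[i])
print(peak)  # index 0: the density is highest inside the cluster
```

Thresholding such a density surface separates dense crowd regions from isolated detections.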

  10. Explodet Project:. Methods of Automatic Data Processing and Analysis for the Detection of Hidden Explosive

    NASA Astrophysics Data System (ADS)

    Lecca, Paola

    2003-12-01

    The research of the INFN Gruppo Collegato di Trento within the EXPLODET project for humanitarian demining is devoted to the development of a software procedure for the automation of data analysis and decision making about the presence of hidden explosive. Innovative algorithms for estimating the likely background, a neural-network-based system for energy calibration, and simple statistical methods for the qualitative consistency check of the signals are the main parts of the software performing the automatic data processing.

  11. An analysis of automatic human detection and tracking

    NASA Astrophysics Data System (ADS)

    Demuth, Philipe R.; Cosmo, Daniel L.; Ciarelli, Patrick M.

    2015-12-01

    This paper presents an automatic method to detect and follow people on video streams. This method uses two techniques to determine the initial position of the person at the beginning of the video file: one based on optical flow and the other one based on Histogram of Oriented Gradients (HOG). After defining the initial bounding box, tracking is done using four different trackers: Median Flow tracker, TLD tracker, Mean Shift tracker and a modified version of the Mean Shift tracker using HSV color space. The results of the methods presented in this paper are then compared at the end of the paper.

  12. AVTA: a device for automatic vocal transaction analysis

    PubMed Central

    Cassotta, Louis; Feldstein, Stanley; Jaffe, Joseph

    1964-01-01

    The Automatic Vocal Transaction Analyzer was designed to recognize the pattern of certain variables in spontaneous vocal transactions. In addition, it records these variables directly in a machine-readable form and preserves their sequential relationships. This permits the immediate extraction of data by a digital computer. The AVTA system reliability has been shown to be equal to or better than that of a trained human operator in uncomplicated interaction. The superiority of the machine was demonstrated in complex interactions which tax the information processing abilities of the human observer. PMID:14120152

  13. Structuring Lecture Videos by Automatic Projection Screen Localization and Analysis.

    PubMed

    Li, Kai; Wang, Jue; Wang, Haoqian; Dai, Qionghai

    2015-06-01

    We present a fully automatic system for extracting the semantic structure of a typical academic presentation video, which captures the whole presentation stage with abundant camera motions such as panning, tilting, and zooming. Our system automatically detects and tracks both the projection screen and the presenter whenever they are visible in the video. By analyzing the image content of the tracked screen region, our system is able to detect slide progressions and extract a high-quality, non-occluded, geometrically-compensated image for each slide, resulting in a list of representative images that reconstruct the main presentation structure. Afterwards, our system recognizes text content and extracts keywords from the slides, which can be used for keyword-based video retrieval and browsing. Experimental results show that our system is able to generate more stable and accurate screen localization results than commonly-used object tracking methods. Our system also extracts more accurate presentation structures than general video summarization methods, for this specific type of video. PMID:26357345

  14. Development and Uncertainty Analysis of an Automatic Testing System for Diffusion Pump Performance

    NASA Astrophysics Data System (ADS)

    Zhang, S. W.; Liang, W. S.; Zhang, Z. J.

    A newly developed automatic testing system used in the laboratory for diffusion pump performance measurement is introduced in this paper. By using two optical fiber sensors to indicate the oil level in a glass burette and a needle valve driven by a stepper motor to regulate the pressure in the test dome, the system can automatically test the ultimate pressure and pumping speed of a diffusion pump in accordance with ISO 1608. Uncertainty analysis theory is applied to the pumping speed measurement results. Based on the test principle and system structure, we study how much each component and test step contributes to the final uncertainty. Using the differential method, a mathematical model of the systematic uncertainty transfer function is established. Finally, in a case study, the combined uncertainties of manual and automatic operation are compared (6.11% and 5.87%, respectively). The reasonableness and practicality of this newly developed automatic testing system are thus demonstrated.
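    For independent error components, the differential method leads to the familiar root-sum-of-squares combination of relative uncertainties. A sketch with invented component values (the paper's actual budget and sensitivities are not reproduced here):

```python
import math

def combined_uncertainty(rel_uncertainties):
    """Combine independent relative uncertainty components by the
    root-sum-of-squares rule that follows from the differential method."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

# Invented components (relative, in %): pressure gauge, burette volume,
# timing, temperature correction
components = [4.0, 3.0, 2.0, 1.0]
print(round(combined_uncertainty(components), 2))  # → 5.48 (%)
```

Note how the largest component dominates: halving the 1% term would barely change the combined value.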

  15. Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster

    NASA Astrophysics Data System (ADS)

    Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song

    2015-02-01

    The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the operation of the automatic slack adjuster. We establish a structural mechanics model of the rectangular clutch spring based on its working principle and mechanical structure. We then import this model into the ANSYS Workbench FEA system to predict the fatigue life of the rectangular clutch spring. FEA results show that the fatigue life of the rectangular clutch spring is 2.0403 × 10^5 cycles under the effect of braking loads. In addition, fatigue tests of 20 automatic slack adjusters were carried out on a fatigue test bench to verify the conclusion of the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101 × 10^5 cycles, which is consistent with the finite element analysis using the ANSYS Workbench FEA system.

  16. Development of a System for Automatic Facial Expression Analysis

    NASA Astrophysics Data System (ADS)

    Diago, Luis A.; Kitaoka, Tetsuko; Hagiwara, Ichiro

    Automatic recognition of facial expressions can be an important component of natural human-machine interaction. While many samples are desirable for estimating a person's feelings (e.g., liking) about a machine interface more accurately, in real-world situations only a small number of samples can be obtained, because of the high cost of collecting emotional responses from the observed person. This paper proposes a system that solves this problem while conforming to individual differences. A new method is developed for facial expression classification based on the combination of Holographic Neural Networks (HNN) and Type-2 Fuzzy Logic. For the recognition of emotions induced by facial expressions, the proposed method achieved better generalization performance than former HNN and Support Vector Machine (SVM) classifiers, using less learning time than the SVM classifiers.

  17. Mathematical morphology for TOFD image analysis and automatic crack detection.

    PubMed

    Merazi-Meksen, Thouraya; Boudraa, Malika; Boudraa, Bachir

    2014-08-01

    The aim of this work is to automate the interpretation of ultrasonic images during the non-destructive testing (NDT) technique called time-of-flight diffraction (TOFD) to aid in decision making. In this paper, the mathematical morphology approach is used to extract the relevant pixels corresponding to the presence of a discontinuity, and a pattern recognition technique is used to characterize the discontinuity. The watershed technique is exploited to determine the region of interest, and the image background is removed using an erosion process, thereby improving the detection of connected shapes present in the image. The remaining shapes are finally reduced to curves using a skeletonization technique. In the case of crack defects, the curve formed by such pixels has a parabolic form that can be automatically detected using the randomized Hough transform. PMID:24709071
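    The erosion step works by sliding a structuring element over the binary image and keeping a foreground pixel only if the element fits entirely inside the foreground, which removes small speckle before skeletonization. A minimal sketch with a square element (our own illustration, not the paper's implementation):

```python
def erode(image, size=3):
    """Binary erosion with a size x size square structuring element:
    a pixel stays 1 only if every pixel under the element is 1."""
    h, w, r = len(image), len(image[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y][x] = int(all(image[y + dy][x + dx]
                                for dy in range(-r, r + 1)
                                for dx in range(-r, r + 1)))
    return out

# A 3x3 blob in a 5x5 image: erosion keeps only its interior pixel
img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(erode(img)[2][2], sum(map(sum, erode(img))))  # → 1 1
```

Isolated pixels and thin noise vanish entirely, while compact defect indications survive (shrunken by one border layer).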

  18. Automatic K scaling by means of fractal and harmonic analysis

    NASA Astrophysics Data System (ADS)

    de Santis, A.; Chiappini, M.

    1992-08-01

    The K index indicates the level of magnetic perturbation with respect to the normal diurnal variation. Usually K is scaled manually from magnetograms, and the operations involved are consequently rather subjective. When data are available in digital form, it is possible to derive the K index automatically using computer algorithms. This work applies a new combined technique based on both fractal and harmonic analyses. While the latter is often used in K determination, the former provides a substantially novel approach. One year (1989) of K observations at L'Aquila observatory has been used as a basis for comparison between hand and computer estimations of K. The agreement found is comparable with that expected between two different human operators.
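    Whatever method removes the quiet diurnal curve, the final conversion from residual perturbation amplitude to K uses the standard quasi-logarithmic scale. A generic sketch using the Niemegk lower limits rescaled to a station's K=9 threshold (this is the conventional table, not the paper's fractal/harmonic method):

```python
def k_index(max_deviation_nT, k9_lower=500.0):
    """Convert the largest deviation from the quiet diurnal curve
    (nT, over a 3-hour window) to a K index, using the standard
    quasi-logarithmic scale rescaled to the observatory's K=9 limit."""
    # Niemegk reference lower limits for K = 0..9 (nT)
    base = [0, 5, 10, 20, 40, 70, 120, 200, 330, 500]
    limits = [b * k9_lower / 500.0 for b in base]
    k = 0
    for i, lim in enumerate(limits):
        if max_deviation_nT >= lim:
            k = i
    return k

print(k_index(3.0), k_index(55.0), k_index(600.0))  # → 0 4 9
```

The hard part, and the subject of the paper, is estimating the quiet-day curve whose removal yields that deviation.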

  19. Discourse Analysis for Language Learners. Australian Review of Applied Linguistics, Vol. 1, No. 2.

    ERIC Educational Resources Information Center

    Hartmann, R. R. K.

    Discourse analysis, a field that reflects an interest in language as text and social interaction, is discussed. Discourse analysis deals with the way language varies from one communicative situation to another; textological analysis deals with the internal organization of such discourse in terms of grammar and vocabulary. Assumptions in…

  20. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language \\cal{C}. The use of \\cal{C} allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  1. Towards automatic music transcription: note extraction based on independent subspace analysis

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
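    Independent Subspace Analysis groups ICA components of the spectrogram into note subspaces; as a simplified stand-in for that step, the sketch below uses a plain SVD to split a toy magnitude spectrogram into rank-1 components, each pairing a spectral basis (which note) with a temporal envelope (when it sounds). The function name and toy data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def extract_note_components(spectrogram, n_components):
    """Decompose a magnitude spectrogram (freq bins x time frames) into
    rank-1 components: spectral bases and temporal activation envelopes."""
    U, s, Vt = np.linalg.svd(spectrogram, full_matrices=False)
    bases = np.abs(U[:, :n_components])                             # frequency profiles
    envelopes = np.abs(s[:n_components, None] * Vt[:n_components])  # activations over time
    return bases, envelopes
```

    Thresholding each envelope then yields note onsets and offsets, which map directly to MIDI note-on/note-off events.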

  2. Statistical language analysis for automatic exfiltration event detection.

    SciTech Connect

    Robinson, David Gerald

    2010-04-01

    This paper discusses the recent development of a statistical approach for the automatic identification of anomalous network activity that is characteristic of exfiltration events. This approach is based on the language processing method referred to as latent Dirichlet allocation (LDA). Cyber security experts currently depend heavily on a rule-based framework for initial detection of suspect network events. The application of the rule set typically results in an extensive list of suspect network events that are then further explored manually for suspicious activity. The ability to identify anomalous network events is heavily dependent on the experience of the security personnel wading through the network log. Limitations of this approach are clear: rule-based systems only apply to exfiltration behavior that has previously been observed, and experienced cyber security personnel are rare commodities. Since the new methodology is not a discrete rule-based approach, it is more difficult for an insider to disguise the exfiltration events. A further benefit is that the methodology provides a risk-based approach that can be implemented in a continuous, dynamic or evolutionary fashion. This permits suspect network activity to be identified early with a quantifiable risk associated with decision making when responding to suspicious activity.
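    The paper's model is LDA; as a much simpler illustration of the same underlying idea, scoring log events by their likelihood under a statistical language model, the sketch below trains a smoothed unigram model on "normal" traffic and flags events whose per-token negative log-likelihood is unusually high. All names and data here are hypothetical, and a unigram model is a deliberate simplification of LDA:

```python
import math
from collections import Counter

def train_unigram(events):
    """Fit a Laplace-smoothed unigram model over whitespace tokens."""
    counts = Counter(tok for ev in events for tok in ev.split())
    total = sum(counts.values())
    vocab = len(counts)
    # Smoothing gives unseen tokens a small but nonzero probability
    return lambda tok: (counts[tok] + 1) / (total + vocab + 1)

def anomaly_score(model, event):
    """Average negative log-likelihood per token: higher = more anomalous."""
    toks = event.split()
    return -sum(math.log(model(t)) for t in toks) / max(len(toks), 1)
```

    A risk threshold on this score gives the continuous, evolutionary operation the abstract mentions: the model can simply be refit as new "normal" traffic accumulates.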

  3. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading for prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to Gleason grading system which is the most widespread method for histological grading of prostate tissues. We proposed two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be promoted up to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still keeps the most powerful discriminating capability in grading prostate images. PMID:19164082
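    The paper's exact fractal features are not reproduced here, but the classical box-counting estimate of fractal dimension that such texture features build on can be sketched in a few lines (a binary square image is assumed for simplicity):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image
    as the slope of log(box count) versus log(1/box size)."""
    n = img.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s  # crop so the grid of s-by-s boxes tiles evenly
        boxes = img[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)), np.log(counts), 1)
    return slope
```

    For a filled region the slope approaches 2 and for a thin curve it approaches 1; intermediate values capture the kind of intensity and texture complexity used to discriminate Gleason grades.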

  4. The Romanian-English Contrastive Analysis Project; Further Developments in Contrastive Studies, Vol. 5.

    ERIC Educational Resources Information Center

    Chitoran, Dumitru, Ed.

    The fifth volume in this series contains ten articles dealing with various aspects of Romanian-English contrastive analysis. They are: "Theoretical Interpretation and Methodological Consequences of 'REGULARIZATION'," by Tatiana Slama-Cazacu; "On Error Analysis," by Charles M. Carlton; "The Contrastive Hypothesis in Second Language Acquisition," by…

  5. System for the Analysis of Global Energy Markets - Vol. I, Model Documentation

    EIA Publications

    2003-01-01

    Documents the objectives and the conceptual and methodological approach used in the development of projections for the International Energy Outlook. The first volume of this report describes the System for the Analysis of Global Energy Markets (SAGE) methodology and provides an in-depth explanation of the equations of the model.

  6. System for the Analysis of Global Energy Markets - Vol. II, Model Documentation

    EIA Publications

    2003-01-01

    The second volume provides a data implementation guide that lists all naming conventions and model constraints. In addition, Volume 1 has two appendixes that provide a schematic of the System for the Analysis of Global Energy Markets (SAGE) structure and a listing of the source code, respectively.

  7. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
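    ADIFOR itself performs source transformation of Fortran; the forward-mode idea underneath, propagating derivative values alongside function values, can be illustrated with a minimal dual-number class (a sketch of the principle, not ADIFOR's mechanism):

```python
class Dual:
    """Forward-mode AD value v + d*eps, where d carries the derivative."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.v + other.v, self.d + other.d)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.v * other.v, self.d * other.v + self.v * other.d)
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x, exact to machine precision, in one evaluation."""
    return f(Dual(x, 1.0)).d
```

    For example, derivative(lambda x: x * x + 3 * x, 2.0) evaluates the sensitivity 2x + 3 at x = 2; unlike finite differencing, no step size is chosen and no truncation error is incurred, which is why AD suits design sensitivity analysis.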

  8. Automatic photolaryngoscope for vibration analysis of vocal cords

    NASA Astrophysics Data System (ADS)

    Igielski, J.; Kujawinska, Malgorzata; Pawlowski, Z.

    1995-05-01

    The vibration analysis of vocal cords gives information about the functioning of the speech organs as well as about certain illnesses within the human organism. The analysis is usually performed by electroglottography or stroboscopic methods. The authors present a new opto-mechanical and electronic photolaryngoscope system. The instrument uses laser diode light for illumination of the vocal cords. The light reflected from the vibrating cord surface is detected electronically and analyzed. Further mathematical analysis of the glottograms, by the autoregressive method with covariance or by the periodogram method, is performed in order to define new criteria for the medical interpretation of results.

  9. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  10. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses Multilayer Perceptron Neural Networks to analyze the supernovae spectra. Experiments were performed using different pre-processing and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained, indicating the viability of using this method in places that have no specialist or that require an automatic analysis.

  11. System for Automatic Detection and Analysis of Targets in FMICW Radar Signal

    NASA Astrophysics Data System (ADS)

    Rejfek, Luboš; Mošna, Zbyšek; Urbář, Jaroslav; Koucká Knížová, Petra

    2016-01-01

    This paper presents the automatic system for the processing of the signals from the frequency modulated interrupted continuous wave (FMICW) radar and describes methods for the primary signal processing. Further, we present methods for the detection of targets in strong noise. These methods are tested on both real and simulated signals. The real signals were measured using an experimental FMICW radar prototype, developed at the IAP CAS, with an operational frequency of 35.4 GHz. The measurement campaign took place at the TU Delft, the Netherlands. The obtained results were used for the development of the system for the automatic detection and analysis of the targets measured by the FMICW radar.

  12. USE OF ULTRASONICS IN THE RAPID EXTRACTION OF HI-VOL FILTERS FOR BENZO-A-PYRENE (BAP) ANALYSIS

    EPA Science Inventory

    A rapid, simple procedure was developed to extract residual benzo-a-pyrene from a single hi-vol filter strip. It involves quantitative dispensing of cyclohexane, 10 minutes of ultrasonication at 78 °C and a quiescent period of 1 hour. At that time the solvent is ready for chromatogr...

  13. Automatic "pipeline" analysis of 3-D MRI data for clinical trials: application to multiple sclerosis.

    PubMed

    Zijdenbos, Alex P; Forghani, Reza; Evans, Alan C

    2002-10-01

    The quantitative analysis of magnetic resonance imaging (MRI) data has become increasingly important in both research and clinical studies aiming at human brain development, function, and pathology. Inevitably, the role of quantitative image analysis in the evaluation of drug therapy will increase, driven in part by requirements imposed by regulatory agencies. However, the prohibitive length of time involved and the significant intra- and inter-rater variability of the measurements obtained from manual analysis of large MRI databases represent major obstacles to the wider application of quantitative MRI analysis. We have developed a fully automatic "pipeline" image analysis framework and have successfully applied it to a number of large-scale, multicenter studies (more than 1,000 MRI scans). This pipeline system is based on robust image processing algorithms, executed in a parallel, distributed fashion. This paper describes the application of this system to the automatic quantification of multiple sclerosis lesion load in MRI, in the context of a phase III clinical trial. The pipeline results were evaluated through an extensive validation study, revealing that the obtained lesion measurements are statistically indistinguishable from those obtained by trained human observers. Given that intra- and inter-rater measurement variability is eliminated by automatic analysis, this system enhances the ability to detect small treatment effects not readily detectable through conventional analysis techniques. While useful for clinical trial analysis in multiple sclerosis, this system holds widespread potential for applications in other neurological disorders, as well as for the study of neurobiology in general. PMID:12585710

  14. CAD system for automatic analysis of CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Hachaj, T.; Ogiela, M. R.

    2011-03-01

    In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for analysis of dynamic brain perfusion computer tomography (CT) maps, cerebral blood flow (CBF), and cerebral blood volume (CBV). Those methods perform both quantitative analysis [detection, measurement, and description with a brain anatomy atlas (AA) of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation (decision about the type of lesion: ischemic/hemorrhagic, and whether the brain tissue is at risk of infarction or not) of visualized symptoms is done by so-called cognitive inference processes allowing for reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET framework installed.

  15. The symbolic computation and automatic analysis of trajectories

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Research was generally done on computation of trajectories of dynamical systems, especially control systems. Algorithms were further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear control systems. An initial design was completed of the system architecture for software to analyze nonlinear control systems using data base computing.

  16. Rapid Automatized Naming and Reading Performance: A Meta-Analysis

    ERIC Educational Resources Information Center

    Araújo, Susana; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís

    2015-01-01

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid…

  17. Automatic forensic analysis of automotive paints using optical microscopy.

    PubMed

    Thoonen, Guy; Nys, Bart; Vander Haeghen, Yves; De Roy, Gilbert; Scheunders, Paul

    2016-02-01

    The timely identification of vehicles involved in an accident, such as a hit-and-run situation, bears great importance in forensics. To this end, procedures have been defined for analyzing car paint samples that combine techniques such as visual analysis and Fourier transform infrared spectroscopy. This work proposes a new methodology in order to automate the visual analysis using image retrieval. Specifically, color and texture information is extracted from a microscopic image of a recovered paint sample, and this information is then compared with the same features for a database of paint types, resulting in a shortlist of candidate paints. In order to demonstrate the operation of the methodology, a test database has been set up and two retrieval experiments have been performed. The first experiment quantifies the performance of the procedure for retrieving exact matches, while the second experiment emulates the real-life situation of paint samples that experience changes in color and texture over time. PMID:26774250
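    As an illustration of the retrieval idea, the sketch below scores a query against a database by color-histogram intersection and returns a shortlist. The binning, the intersection measure, and all data are assumptions for demonstration; the paper's actual color and texture descriptors are not specified here:

```python
def color_histogram(pixels, bins=4):
    """Normalized 3D RGB histogram of a list of (r, g, b) pixels, 0-255."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    return [h / len(pixels) for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def shortlist(query_pixels, database, k=2):
    """Rank database paint samples by similarity to the query sample."""
    qh = color_histogram(query_pixels)
    scored = sorted(database.items(),
                    key=lambda kv: -histogram_intersection(qh, color_histogram(kv[1])))
    return [name for name, _ in scored[:k]]
```

    The shortlist then limits the expensive confirmatory analyses (such as FTIR) to a few candidate paint types.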

  18. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kaisko, Outi; Kortström, Jari; Vuorinen, Tommi; Uski, Marja; Korja, Annakaisa

    2015-04-01

    The site of a new planned nuclear power plant is located in Pyhäjoki, on the eastern coast of the Bay of Bothnia. The area is characterized by low-active intraplate seismicity, with earthquake magnitudes rarely exceeding 4.0. IAEA guidelines state that when a nuclear power plant site is evaluated, a network of sensitive seismographs with a recording capability for micro-earthquakes should be installed to acquire more detailed information on potential seismic sources. The operation period of the network should be long enough to obtain a comprehensive earthquake catalogue for seismotectonic interpretation. A near-optimal configuration of ten seismograph stations will be installed around the site. A central station, including 3-C high-frequency and strong motion seismographs, is located in the site area. In addition, the network comprises nine high-frequency 3-C stations within a distance of 50 km from the central station. The network is dense enough to fulfil the requirements of azimuthal coverage better than 180° and automatic event location capability down to ~ ML -0.1 within a radius of 25 km from the site. Automatic processing and analysis of the planned seismic network is presented. Following the IAEA guidelines, real-time monitoring of the site area is integrated with the automatic detection and location process operated by the Institute of Seismology, University of Helsinki. In addition, interactive data analysis is needed. By the end of 2013, five stations had been installed. The automatic analysis also utilizes seven nearby stations of the national seismic networks of Finland and Sweden. During this preliminary phase several small earthquakes have been detected. The detection capability and location accuracy of the automatic analysis are estimated using chemical explosions at 15 known sites.

  19. A framework for automatic heart sound analysis without segmentation

    PubMed Central

    2011-01-01

    Background: A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method: Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result: The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s duration. Conclusion: The proposed method showed promising results and high noise robustness over a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method based on this new training set. PMID:21303558
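    The cycle-length step, locating the dominant peak of the envelope autocorrelation inside a physiologically plausible lag range, can be sketched as follows (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def cycle_length(envelope, fs, min_bpm=40, max_bpm=200):
    """Estimate cardiac cycle length (s) from the envelope's autocorrelation,
    searching only lags that correspond to plausible heart rates."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lo = int(fs * 60 / max_bpm)  # shortest plausible cycle, in samples
    hi = int(fs * 60 / min_bpm)  # longest plausible cycle, in samples
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs
```

    With the cycle length known, equal numbers of cycles can be cut from recordings of different heart rates without labeling individual FHS.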

  20. Development of automatic movement analysis system for a small laboratory animal using image processing

    NASA Astrophysics Data System (ADS)

    Nagatomo, Satoshi; Kawasue, Kikuhito; Koshimoto, Chihiro

    2013-03-01

    Activity analysis in a small laboratory animal is an effective procedure for various bioscience fields. The simplest way to obtain animal activity data is observation and manual recording, even though this is labor intensive and rather subjective. In order to analyze animal movement automatically and objectively, expensive equipment is usually needed. In the present study, we develop an animal activity analysis system, at low cost, by means of a template matching method applied to video-recorded movements of a laboratory animal.
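    A minimal version of the template matching at the heart of such a system can be sketched with normalized cross-correlation (a brute-force illustration; a production tracker would use an optimized library routine):

```python
import numpy as np

def match_template(frame, template):
    """Return the (row, col) of the window in `frame` that best matches
    `template` under normalized cross-correlation, and the score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum() * (t * t).sum())
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, pos = score, (i, j)
    return pos, best
```

    Tracking the returned position frame by frame yields the animal's trajectory, from which activity measures can be derived.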

  1. Theory and algorithms of an efficient fringe analysis technology for automatic measurement applications.

    PubMed

    Juarez-Salazar, Rigoberto; Guerrero-Sanchez, Fermin; Robledo-Sanchez, Carlos

    2015-06-10

    Some advances in fringe analysis technology for phase computing are presented. A full scheme for phase evaluation, applicable to automatic applications, is proposed. The proposal consists of: a fringe-pattern normalization method, Fourier fringe-normalized analysis, generalized phase-shifting processing for inhomogeneous nonlinear phase shifts and spatiotemporal visibility, and a phase-unwrapping method by a rounding-least-squares approach. The theoretical principles of each algorithm are given. Numerical examples and an experimental evaluation are presented. PMID:26192836
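    The paper generalizes phase-shifting to inhomogeneous, nonlinear shifts; the classical special case it generalizes, four frames shifted by pi/2, reduces to a one-line phase estimate and makes a useful reference point (a textbook sketch, not the paper's generalized algorithm):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe patterns I_k = A + B*cos(phi + k*pi/2):
    I1 - I3 = 2B*cos(phi) and I4 - I2 = 2B*sin(phi), so atan2 recovers phi."""
    return np.arctan2(I4 - I2, I1 - I3)
```

    The result is wrapped to (-pi, pi]; the rounding-least-squares unwrapping stage mentioned in the abstract would then recover the continuous phase.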

  2. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate mental stress of drivers during automatic driving of trucks, with vehicles set at a closed gap distance apart to reduce air resistance to save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  3. Toward automatic computer aided dental X-ray analysis using level set method.

    PubMed

    Li, Shuo; Fevens, Thomas; Krzyzak, Adam; Jin, Chao; Li, Song

    2005-01-01

    A Computer Aided Dental X-rays Analysis (CADXA) framework is proposed to semi-automatically detect areas of bone loss and root decay in digital dental X-rays. In this framework, first, a new competitive coupled level set method is proposed to segment the image into three pathologically meaningful regions using two coupled level set functions. Tailored for the dental clinical environment, the segmentation stage uses a trained support vector machine (SVM) classifier to provide initial contours. Then, based on the segmentation results, an analysis scheme is applied. First, the scheme builds an uncertainty map from which those areas with bone loss will be automatically detected. Secondly, the scheme employs a method based on the SVM and the average intensity profile to isolate the teeth and detect root decay. Experimental results show that our proposed framework is able to automatically detect the areas of bone loss and, when given the orientation of the teeth, it is able to automatically detect the root decay with a seriousness level marked for diagnosis. PMID:16685904

  4. Biosignal analysis to assess mental stress in automatic driving of trucks: palmar perspiration and masseter electromyography.

    PubMed

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate mental stress of drivers during automatic driving of trucks, with vehicles set at a closed gap distance apart to reduce air resistance to save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  5. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859

  6. Automatic Fatigue Detection of Drivers through Yawning Analysis

    NASA Astrophysics Data System (ADS)

    Azim, Tayyaba; Jaffar, M. Arfan; Ramzan, M.; Mirza, Anwar M.

    This paper presents a non-intrusive fatigue detection system based on the video analysis of drivers. The focus of the paper is on how to detect yawning, which is an important cue for determining driver fatigue. Initially, the face is located through the Viola-Jones face detection method in a video frame. Then, a mouth window is extracted from the face region, in which lips are searched for through spatial fuzzy c-means (s-FCM) clustering. The degree of mouth openness is extracted on the basis of mouth features to determine the driver's yawning state. If the yawning state of the driver persists for several consecutive frames, the system concludes that the driver is non-vigilant due to fatigue and is thus warned through an alarm. The system reinitializes when occlusion or misdetection occurs. Experiments were carried out using real data, recorded in day and night lighting conditions, and with users of different races and genders.

  7. Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.

    PubMed

    Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M

    2011-10-01

    Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks. PMID:20924860

  8. Two dimensional barcode-inspired automatic analysis for arrayed microfluidic immunoassays

    PubMed Central

    Zhang, Yi; Qiao, Lingbo; Ren, Yunke; Wang, Xuwei; Gao, Ming; Tang, Yunfang; Jeff Xi, Jianzhong; Fu, Tzung-May; Jiang, Xingyu

    2013-01-01

    The usability of many high-throughput lab-on-a-chip devices in point-of-care applications is currently limited by the manual data acquisition and analysis process, which are labor intensive and time consuming. Based on our original design in the biochemical reactions, we proposed here a universal approach to perform automatic, fast, and robust analysis for high-throughput array-based microfluidic immunoassays. Inspired by two-dimensional (2D) barcodes, we incorporated asymmetric function patterns into a microfluidic array. These function patterns provide quantitative information on the characteristic dimensions of the microfluidic array, as well as mark its orientation and origin of coordinates. We used a computer program to perform automatic analysis for a high-throughput antigen/antibody interaction experiment in 10 s, which was more than 500 times faster than conventional manual processing. Our method is broadly applicable to many other microchannel-based immunoassays. PMID:24404030

  9. Analysis of Fiber deposition using Automatic Image Processing Method

    NASA Astrophysics Data System (ADS)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there, and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method, based on the principle of image analysis, was established for deposition data acquisition. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  10. Automatic localization of cerebral cortical malformations using fractal analysis

    NASA Astrophysics Data System (ADS)

    De Luca, A.; Arrigoni, F.; Romaniello, R.; Triulzi, F. M.; Peruzzo, D.; Bertoldo, A.

    2016-08-01

    Malformations of cortical development (MCDs) encompass a variety of brain disorders affecting the normal development and organization of the brain cortex. The relatively low incidence and the extreme heterogeneity of these disorders hamper the application of classical group-level approaches for the detection of lesions. Here, we present a geometrical descriptor for a voxel-level analysis based on fractal geometry, then define two similarity measures to detect the lesions at the single-subject level. The pipeline was applied to 15 normal children and nine pediatric patients affected by MCDs following two criteria, maximum accuracy (WACC) and minimization of false positives (FPR), and proved that our lesion detection algorithm is able to detect and locate abnormalities of the brain cortex with high specificity (WACC = 85%, FPR = 96%), sensitivity (WACC = 83%, FPR = 63%) and accuracy (WACC = 85%, FPR = 90%). The combination of global and local features proves to be effective, making the algorithm suitable for the detection of both focal and diffuse malformations. Compared to other existing algorithms, this method shows higher accuracy and sensitivity.

  11. Automatic localization of cerebral cortical malformations using fractal analysis.

    PubMed

    De Luca, A; Arrigoni, F; Romaniello, R; Triulzi, F M; Peruzzo, D; Bertoldo, A

    2016-08-21

    Malformations of cortical development (MCDs) encompass a variety of brain disorders affecting the normal development and organization of the brain cortex. The relatively low incidence and the extreme heterogeneity of these disorders hamper the application of classical group-level approaches for the detection of lesions. Here, we present a geometrical descriptor for a voxel-level analysis based on fractal geometry, then define two similarity measures to detect the lesions at the single-subject level. The pipeline was applied to 15 normal children and nine pediatric patients affected by MCDs following two criteria, maximum accuracy (WACC) and minimization of false positives (FPR), and proved that our lesion detection algorithm is able to detect and locate abnormalities of the brain cortex with high specificity (WACC = 85%, FPR = 96%), sensitivity (WACC = 83%, FPR = 63%) and accuracy (WACC = 85%, FPR = 90%). The combination of global and local features proves to be effective, making the algorithm suitable for the detection of both focal and diffuse malformations. Compared to other existing algorithms, this method shows higher accuracy and sensitivity. PMID:27444964
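The descriptor above is built on fractal geometry. The abstract does not give the exact computation, but the classic box-counting estimate of fractal dimension conveys the idea; in the sketch below (the function name, box sizes, and test mask are all illustrative), log box counts are fitted against the log of the inverse box size:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2D binary mask:
    count the boxes of side s containing foreground, then fit
    log(count) against log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s          # trim so boxes tile exactly
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled plane region has dimension 2.
d = box_counting_dimension(np.ones((64, 64), dtype=bool))
print(round(d, 2))   # → 2.0
```

In a voxel-level setting such as the one described, a descriptor like this would be evaluated over local neighborhoods of the cortical segmentation rather than a whole image.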

  12. Generalization versus contextualization in automatic evaluation revisited: A meta-analysis of successful and failed replications.

    PubMed

    Gawronski, Bertram; Hu, Xiaoqing; Rydell, Robert J; Vervliet, Bram; De Houwer, Jan

    2015-08-01

    To account for disparate findings in the literature on automatic evaluation, Gawronski, Rydell, Vervliet, and De Houwer (2010) proposed a representational theory that specifies the contextual conditions under which automatic evaluations reflect initially acquired attitudinal information or subsequently acquired counterattitudinal information. The theory predicts that automatic evaluations should reflect the valence of expectancy-violating counterattitudinal information only in the context in which this information had been learned. In contrast, automatic evaluations should reflect the valence of initial attitudinal information in any other context, be it the context in which the initial attitudinal information had been acquired (ABA renewal) or a novel context in which the target object had not been encountered before (ABC renewal). The current article presents a meta-analysis of all published and unpublished studies from the authors' research groups regardless of whether they produced the predicted pattern of results. Results revealed average effect sizes of d = 0.249 for ABA renewal (30 studies, N = 3,142) and d = 0.174 for ABC renewal (27 studies, N = 2,930), both of which were significantly different from zero. Effect sizes were moderated by attention to context during learning, order of positive and negative information, context-valence contingencies during learning, and sample country. Although some of the obtained moderator effects are consistent with the representational theory, others require theoretical refinements and future research to gain deeper insights into the mechanisms underlying contextual renewal. PMID:26010481

  13. Investigation of Ballistic Evidence through an Automatic Image Analysis and Identification System.

    PubMed

    Kara, Ilker

    2016-05-01

    Automated firearms identification (AFI) systems contribute to shedding light on criminal events by comparing different pieces of evidence on cartridge cases and bullets and by matching similar ones that were fired from the same firearm. Ballistic evidence can be rapidly analyzed and classified by means of an automatic image analysis and identification system, which can also be used to narrow the range of possible matching evidence. In this study, conducted on the cartridges ejected from the examined pistol, three imaging areas, namely the firing pin impression, capsule traces, and the intersection of these traces, were compared automatically using the image analysis and identification system through the correlation ranking method to determine numeric values that indicate the significance of the similarities. These numerical features, which signify the similarities and differences between pistol makes and models, can be used in groupings to distinguish between makes and models of pistols. PMID:27122419

  14. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors, and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions. PMID:24623466

  15. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared with audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.

  16. Theoretical Analysis of the Longitudinal Behavior of an Automatically Controlled Supersonic Interceptor During the Attack Phase

    NASA Technical Reports Server (NTRS)

    Gates, Ordway B., Jr.; Woodling, C. H.

    1959-01-01

    A theoretical analysis of the longitudinal behavior of an automatically controlled supersonic interceptor during the attack phase against a nonmaneuvering target is presented. Control of the interceptor's flight path is obtained by use of a pitch rate command system. Topics discussed include lift and pitching moment, effects of initial tracking errors, normal acceleration limiting, limitations on control surface rate and deflection, and effects of neglecting changes in the interceptor's forward velocity during the attack phase.

  17. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  18. Sleep-monitoring, experiment M133. [electronic recording system for automatic analysis of human sleep patterns

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.; Salamy, J. G.

    1973-01-01

    The Skylab sleep-monitoring experiment simulated the timelines and environment expected during a 56-day Skylab mission. Two crewmembers utilized the data acquisition and analysis hardware, and their sleep characteristics were studied in an online fashion during a number of all night recording sessions. Comparison of the results of online automatic analysis with those of postmission visual data analysis was favorable, confirming the feasibility of obtaining reliable objective information concerning sleep characteristics during the Skylab missions. One crewmember exhibited definite changes in certain sleep characteristics (e.g., increased sleep latency, increased time Awake during first third of night, and decreased total sleep time) during the mission.

  19. SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments

    NASA Technical Reports Server (NTRS)

    Leonard, R. F.

    1977-01-01

    A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires locating the peaks in X-ray spectra, determining their intensities, identifying their origins, and determining the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.

  20. Imaging and analysis platform for automatic phenotyping and trait ranking of plant root systems.

    PubMed

    Iyer-Pascuzzi, Anjali S; Symonova, Olga; Mileyko, Yuriy; Hao, Yueling; Belcher, Heather; Harer, John; Weitz, Joshua S; Benfey, Philip N

    2010-03-01

    The ability to nondestructively image and automatically phenotype complex root systems, like those of rice (Oryza sativa), is fundamental to identifying genes underlying root system architecture (RSA). Although root systems are central to plant fitness, identifying genes responsible for RSA remains an underexplored opportunity for crop improvement. Here we describe a nondestructive imaging and analysis system for automated phenotyping and trait ranking of RSA. Using this system, we image rice roots from 12 genotypes. We automatically estimate RSA traits previously identified as important to plant function. In addition, we expand the suite of features examined for RSA to include traits that more comprehensively describe monocot RSA but that are difficult to measure with traditional methods. Using 16 automatically acquired phenotypic traits for 2,297 images from 118 individuals, we observe (1) wide variation in phenotypes among the genotypes surveyed; and (2) greater intergenotype variance of RSA features than variance within a genotype. RSA trait values are integrated into a computational pipeline that utilizes supervised learning methods to determine which traits best separate two genotypes, and then ranks the traits according to their contribution to each pairwise comparison. This trait-ranking step identifies candidate traits for subsequent quantitative trait loci analysis and demonstrates that depth and average radius are key contributors to differences in rice RSA within our set of genotypes. Our results suggest a strong genetic component underlying rice RSA. This work enables the automatic phenotyping of RSA of individuals within mapping populations, providing an integrative framework for quantitative trait loci analysis of RSA. PMID:20107024
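The trait-ranking step scores each trait by how well it separates a pair of genotypes. The paper's supervised-learning machinery is not detailed in the abstract, so the sketch below stands in with a simple two-sample t-like statistic; the function name, trait names, and synthetic data are illustrative:

```python
import numpy as np

def rank_traits(geno_a, geno_b, trait_names):
    """Rank RSA traits by how well each one separates two genotypes,
    scored with a two-sample t-like statistic (a stand-in for the
    paper's supervised-learning ranking)."""
    scores = []
    for j, name in enumerate(trait_names):
        a, b = geno_a[:, j], geno_b[:, j]
        sep = abs(a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        scores.append((sep, name))
    return [name for _, name in sorted(scores, reverse=True)]

# Synthetic example: depth separates the genotypes, average radius does not.
rng = np.random.default_rng(1)
a = np.column_stack([rng.normal(20, 1, 30), rng.normal(1.0, 0.2, 30)])
b = np.column_stack([rng.normal(30, 1, 30), rng.normal(1.0, 0.2, 30)])
ranking = rank_traits(a, b, ["depth", "average_radius"])
print(ranking[0])   # → depth
```

A ranking of this kind, computed per genotype pair, directly yields the candidate traits for the subsequent quantitative trait loci analysis described in the abstract.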

  1. Nonverbal Social Withdrawal in Depression: Evidence from manual and automatic analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, S. Mohammad; Hammal, Zakia; Rosenwald, Dean P.

    2014-01-01

    The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science. PMID:25378765

  2. Dynamic Response and Stability Analysis of AN Automatic Ball Balancer for a Flexible Rotor

    NASA Astrophysics Data System (ADS)

    Chung, J.; Jang, I.

    2003-01-01

    Dynamic stability and time responses are studied for an automatic ball balancer of a rotor with a flexible shaft. The Stodola-Green rotor model, of which the shaft is flexible, is selected for analysis. This rotor model is able to include the influence of rigid-body rotations due to the shaft flexibility on dynamic responses. Applying Lagrange's equation to the rotor with the ball balancer, the non-linear equations of motion are derived. Based on the linearized equations, the stability of the ball balancer around the balanced equilibrium position is analyzed. On the other hand, the time responses computed from the non-linear equations are investigated. This study shows that the automatic ball balancer can achieve the balancing of a rotor with a flexible shaft if the system parameters of the balancer satisfy the stability conditions for the balanced equilibrium position.

  3. Analysis of Social Variables when an Initial Functional Analysis Indicates Automatic Reinforcement as the Maintaining Variable for Self-Injurious Behavior

    ERIC Educational Resources Information Center

    Kuhn, Stephanie A. Contrucci; Triggs, Mandy

    2009-01-01

    Self-injurious behavior (SIB) that occurs at high rates across all conditions of a functional analysis can suggest automatic or multiple functions. In the current study, we conducted a functional analysis for 1 individual with SIB. Results indicated that SIB was, at least in part, maintained by automatic reinforcement. Further analyses using…

  4. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    NASA Astrophysics Data System (ADS)

    Fischer, Bernd; Hajian, Arsen; Knuth, Kevin; Schumann, Johann

    2004-04-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in the form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable building blocks for the algorithm construction; they can encapsulate advanced algorithms and data structures. A symbolic-algebraic system is used to find closed-form solutions for problems and emerging subproblems. In this paper, we describe the application of AUTOBAYES to the analysis of planetary nebulae images taken by the Hubble Space Telescope. We explain the system architecture, and present in detail the automatic derivation of the scientists' original analysis as well as a refined analysis using clustering models. This study demonstrates that AUTOBAYES is now mature enough that it can be applied to realistic scientific data analysis tasks.

  5. Automaticity in acute ischemia: Bifurcation analysis of a human ventricular model

    NASA Astrophysics Data System (ADS)

    Bouchard, Sylvain; Jacquemet, Vincent; Vinet, Alain

    2011-01-01

    Acute ischemia (restriction in blood supply to part of the heart as a result of myocardial infarction) induces major changes in the electrophysiological properties of the ventricular tissue. Extracellular potassium concentration ([Ko+]) increases in the ischemic zone, leading to an elevation of the resting membrane potential that creates an “injury current” (IS) between the infarcted and the healthy zone. In addition, the lack of oxygen impairs the metabolic activity of the myocytes and decreases ATP production, thereby affecting ATP-sensitive potassium channels (IKatp). Frequent complications of myocardial infarction are tachycardia, fibrillation, and sudden cardiac death, but the mechanisms underlying their initiation are still debated. One hypothesis is that these arrhythmias may be triggered by abnormal automaticity. We investigated the effect of ischemia on myocyte automaticity by performing a comprehensive bifurcation analysis (fixed points, cycles, and their stability) of a human ventricular myocyte model [K. H. W. J. ten Tusscher and A. V. Panfilov, Am. J. Physiol. Heart Circ. Physiol. 291, H1088 (2006)] as a function of three ischemia-relevant parameters: [Ko+], IS, and IKatp. In this single-cell model, we found that automatic activity was possible only in the presence of an injury current. Changes in [Ko+] and IKatp significantly altered the bifurcation structure of IS, including the occurrence of early afterdepolarizations. The results provide a sound basis for studying higher-dimensional tissue structures representing an ischemic heart.

  6. NGS-Trex: an automatic analysis workflow for RNA-Seq data.

    PubMed

    Boria, Ilenia; Boatti, Lara; Saggese, Igor; Mignone, Flavio

    2015-01-01

    RNA-Seq technology allows the rapid analysis of whole transcriptomes by taking advantage of next-generation sequencing platforms. Moreover, with the constantly decreasing cost of NGS analysis, RNA-Seq is becoming very popular and widespread. Unfortunately, data analysis is quite demanding in terms of the bioinformatic skills and infrastructure required, thus limiting the potential users of this method. Here we describe the complete analysis of sample data from raw sequences to data mining of results using the NGS-Trex platform, a low-user-interaction, fully automatic analysis workflow. Used through a web interface, NGS-Trex processes data and profiles the transcriptome of the samples, identifying expressed genes, transcripts, and new and known splice variants. It also detects differentially expressed genes and transcripts across different experiments. PMID:25577383

  7. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  8. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  9. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang; Lin, Jenglung; Hauer, Matthew L.

    2010-02-28

    Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and promptly to the ringdown data, so that mode estimation can be performed reliably and in a timely manner. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
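Prony analysis models ringdown data as a sum of damped sinusoids via linear prediction. The paper's recursive formulation is not reproduced in the abstract, but a minimal batch version can be sketched as follows (the function name and the synthetic 0.5 Hz mode are illustrative):

```python
import numpy as np

def prony_modes(y, dt, order=2):
    """Estimate modal frequencies/damping from ringdown data (basic Prony):
    fit a linear-prediction model y[n] = sum_k a_k * y[n-k], take the roots
    of its characteristic polynomial, and map each root z to a
    continuous-time mode s = ln(z)/dt = damping + j*2*pi*freq."""
    N = len(y)
    # Least-squares linear-prediction system: each row holds the
    # `order` previous samples, the target is the current sample.
    A = np.column_stack([y[order - k - 1:N - k - 1] for k in range(order)])
    b = y[order:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    roots = np.roots(np.concatenate(([1.0], -a))).astype(complex)
    s = np.log(roots) / dt
    return np.abs(s.imag) / (2 * np.pi), s.real   # frequencies [Hz], damping [1/s]

# Synthetic ringdown: 0.5 Hz inter-area-style mode decaying at 0.2 1/s.
dt = 0.05
t = np.arange(0, 10, dt)
y = np.exp(-0.2 * t) * np.cos(2 * np.pi * 0.5 * t)
f, d = prony_modes(y, dt)
print(round(float(f.max()), 3), round(float(d.mean()), 3))   # → 0.5 -0.2
```

In an on-line setting such as the one proposed, the oscillation detector would decide when a window of PMU data is actually a ringdown before a fit like this is applied.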

  10. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047

  11. Automatic generation of stop word lists for information retrieval and analysis

    DOEpatents

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
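The counting rule above is concrete enough to sketch. The exact definitions of the two frequencies are not spelled out in the abstract, so the reading below (a term's keyword-adjacency count versus its total corpus frequency) is an assumption, as are the function name, whitespace tokenizer, and toy corpus:

```python
from collections import Counter

def generate_stop_words(documents, keywords, min_ratio=1.0, max_size=None):
    """Sketch of the described procedure: terms that occur adjacent to
    known keywords at least `min_ratio` times as often as they occur
    overall are kept as stop-word candidates."""
    keyword_terms = {t for kw in keywords for t in kw.lower().split()}
    adjacency = Counter()   # keyword adjacency frequency per term
    frequency = Counter()   # overall term frequency
    for doc in documents:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            frequency[tok] += 1
            if tok not in keyword_terms:
                neighbours = tokens[max(i - 1, 0):i] + tokens[i + 1:i + 2]
                if any(n in keyword_terms for n in neighbours):
                    adjacency[tok] += 1
    stop_words = [t for t in adjacency
                  if adjacency[t] / frequency[t] >= min_ratio]
    # Truncate by descending adjacency frequency, as a stand-in for the
    # patent's unspecified truncation criteria.
    stop_words.sort(key=lambda t: adjacency[t], reverse=True)
    return stop_words[:max_size]

docs = ["the neutron flux in the reactor core",
        "the flux of neutrons near the core"]
sw = generate_stop_words(docs, keywords=["neutron flux", "reactor core"])
print(sw)   # → ['the', 'in', 'of']
```

The intuition is that function words tend to sit next to content-bearing keywords wherever they appear, so their adjacency-to-frequency ratio stays high.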

  12. Automatic Analysis of Single-Channel Sleep EEG: Validation in Healthy Individuals

    PubMed Central

    Berthomier, Christian; Drouot, Xavier; Herman-Stoïca, Maria; Berthomier, Pierre; Prado, Jacques; Bokar-Thire, Djibril; Benoit, Odile; Mattout, Jérémie; d'Ortho, Marie-Pia

    2007-01-01

    Study Objective: To assess the performance of automatic sleep scoring software (ASEEGA) based on a single EEG channel comparatively with manual scoring (2 experts) of conventional full polysomnograms. Design: Polysomnograms from 15 healthy individuals were scored by 2 independent experts using conventional R&K rules. The results were compared to those of ASEEGA scoring on an epoch-by-epoch basis. Setting: Sleep laboratory in the physiology department of a teaching hospital. Participants: Fifteen healthy volunteers. Measurements and Results: The epoch-by-epoch comparison was based on classifying into 2 states (wake/sleep), 3 states (wake/REM/NREM), 4 states (wake/REM/stages 1-2/SWS), or 5 states (wake/REM/stage 1/stage 2/SWS). The obtained overall agreements, as quantified by the kappa coefficient, were 0.82, 0.81, 0.75, and 0.72, respectively. Furthermore, obtained agreements between ASEEGA and the expert consensual scoring were 96.0%, 92.1%, 84.9%, and 82.9%, respectively. Finally, when classifying into 5 states, the sensitivity and positive predictive value of ASEEGA regarding wakefulness were 82.5% and 89.7%, respectively. Similarly, sensitivity and positive predictive value regarding REM state were 83.0% and 89.1%. Conclusions: Our results establish the face validity and convergent validity of ASEEGA for single-channel sleep analysis in healthy individuals. ASEEGA appears as a good candidate for diagnostic aid and automatic ambulant scoring. Citation: Berthomier C; Drouot X; Herman-Stoïca M; Berthomier P; Prado J; Bokar-Thire D; Benoit O; Mattout J; d'Ortho MP. Automatic analysis of single-channel sleep EEG: validation in healthy individuals. SLEEP 2007;30(11):1587-1595. PMID:18041491
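The kappa coefficient used for the epoch-by-epoch comparison above is raw agreement corrected for chance agreement. A minimal implementation (the two ten-epoch wake/sleep scorings are invented for illustration) is:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences:
    kappa = (observed - expected) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label rates.
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Two hypothetical wake/sleep scorings of ten 30-second epochs.
auto = list("WWSSSSSSWW")
human = list("WWSSSSSWWW")
k = cohens_kappa(auto, human)
print(round(k, 2))   # → 0.8
```

The same computation extends to the 3-, 4-, and 5-state comparisons in the study simply by using richer label alphabets.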

  13. Real time automatic detection of bearing fault in induction machine using kurtogram analysis.

    PubMed

    Tafinine, Farid; Mokrani, Karim

    2012-11-01

    A signal processing technique for incipient real-time bearing fault detection based on kurtogram analysis is presented in this paper. The kurtogram is a fourth-order spectral analysis tool introduced for detecting and characterizing non-stationarities in a signal. The technique starts by investigating the resonance signatures over selected frequency bands to extract representative features. Traditional spectral analysis is not appropriate for non-stationary vibration signals or for real-time diagnosis. The performance of the proposed technique is examined by a series of experimental tests corresponding to different bearing conditions. Test results show that this signal processing technique is an effective automatic bearing fault detection method and gives a good basis for an integrated induction machine condition monitor. PMID:23145702
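The full kurtogram evaluates spectral kurtosis over a grid of band/bandwidth combinations; the simplified sketch below scans a single set of equal-width bands and picks the one where excess kurtosis peaks, which is the band a bearing fault's impacts would excite (the function name, band count, and synthetic fault signal are illustrative):

```python
import numpy as np

def best_band_by_kurtosis(x, fs, n_bands=8):
    """Coarse kurtogram-style scan (a sketch, not the full kurtogram):
    band-pass the signal with an ideal FFT mask and return the band
    whose excess kurtosis is highest, i.e. where impulsive content
    concentrates."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    edges = np.linspace(0.0, fs / 2.0, n_bands + 1)
    best = None
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0.0),
                            n=len(x))
        z = (band - band.mean()) / band.std()
        kurt = float(np.mean(z ** 4) - 3.0)    # excess kurtosis
        if best is None or kurt > best[0]:
            best = (kurt, float(lo), float(hi))
    return best[1], best[2]

# Weak noise plus decaying 1.5 kHz tone bursts (fault-like impacts).
rng = np.random.default_rng(0)
fs, n = 8000, 8000
x = 0.1 * rng.standard_normal(n)
k = np.arange(80)
burst = np.exp(-k / 10.0) * np.sin(2 * np.pi * 1500 * k / fs)
for start in range(0, n - 80, 400):
    x[start:start + 80] += burst
print(best_band_by_kurtosis(x, fs, n_bands=4))   # → (1000.0, 2000.0)
```

A real-time monitor would then demodulate the selected band and look for fault-characteristic repetition frequencies in its envelope spectrum.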

  14. Urban land use of the Sao Paulo metropolitan area by automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Niero, M.; Foresti, C.

    1983-01-01

    The separability of urban land use classes in the metropolitan area of Sao Paulo was studied by means of automatic analysis of MSS/LANDSAT digital data. The data were analyzed using the K-means (média K) and MAXVER classification algorithms. The land use classes obtained were: CBD/vertical growth area, residential area, mixed area, industrial area, embankment area type 1, embankment area type 2, dense vegetation area, and sparse vegetation area. The spectral analysis of representative samples of urban land use classes was done using the "Single Cell" analysis option. The classes CBD/vertical growth area, residential area, and embankment area type 2 showed better spectral separability than the other classes.

  15. Automatic quantitative evaluation of autoradiographic band films by computerized image analysis

    SciTech Connect

    Masseroli, M.; Messori, A.; Bendotti, C.; Ponti, M.; Forloni, G.

    1993-01-01

    The present paper describes a new image processing method for automatic quantitative analysis of autoradiographic band films. It was developed in a specific image analysis environment (IBAS 2.0), but the algorithms and methods can be utilized elsewhere. The program is easy to use and offers some particularly useful features for the evaluation of autoradiographic band films, such as the choice of whole-film or single-lane background determination, the ability to evaluate bands with film scratch artifacts, and quantification in absolute terms or relative to reference values. The method was tested by comparison with laser-scanner densitometric quantifications of the same autoradiograms. The results show the full compatibility of the two methods and demonstrate the reliability and sensitivity of image analysis. The method can be used not only to evaluate autoradiographic band films, but also to analyze any type of signal bands on other materials (e.g., electrophoresis gels, chromatographic paper, etc.).

  16. Computer program for analysis of impedance cardiography signals enabling manual correction of points detected automatically

    NASA Astrophysics Data System (ADS)

    Oleksiak, Justyna; Cybulski, Gerard

    2014-11-01

    The aim of this work was to create a computer program, written in LabVIEW, that enables the visualization and analysis of hemodynamic parameters. It allows the user to import data collected using ReoMonitor, an ambulatory impedance cardiography (AICG) monitoring device. The data include one channel of the ECG and one channel of the first derivative of the impedance signal (dz/dt), both sampled at 200 Hz, and the base impedance signal (Z0), sampled every 8 s. The program consists of two parts: a bioscope allowing the presentation of traces (ECG, AICG, Z0) and an analytical portion enabling the detection of characteristic points on the signals and automatic calculation of hemodynamic parameters. The detection of characteristic points in both signals is done automatically, with the option to make manual corrections, which may be necessary to avoid "false positive" recognitions. The application is used to determine the values of basic hemodynamic variables: pre-ejection period (PEP), left ventricular ejection time (LVET), stroke volume (SV), cardiac output (CO), and heart rate (HR). It leaves room for further development of additional features, for both the analysis panel and the data acquisition function.
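
As an illustration of how such hemodynamic variables derive from detected fiducial points, here is a minimal Python sketch. The Kubicek stroke-volume formula, the constants RHO and L, and all sample values are assumptions for illustration only; the actual program is written in LabVIEW and its exact formulas are not given in the abstract.

```python
# Sketch: basic hemodynamic parameters from fiducial points detected on
# the ECG and dz/dt signals. Constants and values are hypothetical.
RHO = 150.0   # assumed blood resistivity (ohm*cm)
L = 30.0      # assumed distance between voltage electrodes (cm)

def hemodynamics(r_peak_s, b_point_s, x_point_s, dzdt_max, z0, rr_interval_s):
    """Return PEP (s), LVET (s), SV (ml), HR (bpm), CO (l/min)."""
    pep = b_point_s - r_peak_s            # pre-ejection period
    lvet = x_point_s - b_point_s          # left ventricular ejection time
    sv = RHO * (L / z0) ** 2 * lvet * dzdt_max   # Kubicek-type estimate
    hr = 60.0 / rr_interval_s             # heart rate from RR interval
    co = sv * hr / 1000.0                 # cardiac output in l/min
    return pep, lvet, sv, hr, co

pep, lvet, sv, hr, co = hemodynamics(0.00, 0.08, 0.38, 1.2, 25.0, 0.8)
```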

  17. [Design and analysis of automatic measurement instrument for diffraction efficiency of plane reflection grating].

    PubMed

    Wang, Fang; Qi, Xiang-Dong; Yu, Hong-Zhu; Yu, Hai-Li

    2009-02-01

    A new automatic system for measuring the diffraction efficiency of plane reflection gratings was designed. A continuous light source is used for illumination, a duplex grating spectrograph structure is applied, and a linear NMOS array serves as the receiving component. The testing system was analyzed theoretically using the relevant principles of the grating spectrograph, and the image quality of the optical system was evaluated using the aberration theory of geometrical optics. The analysis indicated that the device structure is compact and the electronics are simplified. Because the system does not need to synchronize the wavelength sweep of two grating spectrographs, its wavelength repeatability is very good and precision is easy to guarantee. Compared with earlier automated schemes, the production cost is reduced, operation is easier, and working efficiency is enhanced. The study showed that this automatic measurement system covers a spectral range of 190-1100 nm with a resolution better than 3 nm, which fully satisfies the design requirements. It is an economical and feasible design. PMID:19445251

  18. AnaSP: a software suite for automatic image analysis of multicellular spheroids.

    PubMed

    Piccinini, Filippo

    2015-04-01

    Today, more and more biological laboratories use 3D cell cultures and tissues grown in vitro as 3D models of in vivo tumours and metastases. In recent decades it has been extensively established that multicellular spheroids represent an efficient model for validating the effects of drugs and treatments in human care applications. However, a lack of methods for quantitative analysis limits the usage of spheroids as models for routine experiments. Several methods have been proposed in the literature to perform high-throughput experiments employing spheroids by automatically computing different morphological parameters, such as diameter, volume and sphericity. Nevertheless, these systems are typically grounded on expensive automated technologies, which make the suggested solutions affordable only for a limited subset of laboratories, frequently those performing high-content screening analysis. In this work we propose AnaSP, an open-source software tool for automatically estimating several morphological parameters of spheroids by simply analyzing brightfield images acquired with a standard widefield microscope, even one not equipped with a motorized stage. The experiments performed proved the sensitivity and precision of the proposed segmentation method, and the excellent reliability of AnaSP in computing several morphological parameters of spheroids imaged under different conditions. AnaSP is distributed as an open-source software tool. Its modular architecture and graphical user interface make it attractive also to researchers who do not work in computer vision, and suitable for both high-content screenings and occasional spheroid-based experiments. PMID:25737369
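
The morphological parameters mentioned (diameter, volume, sphericity) can be estimated from a segmented spheroid projection along the following lines. This is a minimal sketch assuming a spherical shape and a pixel-calibrated 2D mask; it is not AnaSP's actual implementation.

```python
import math

def spheroid_morphology(area_px, perimeter_px, px_size_um):
    """Estimate equivalent diameter (um), volume (um^3) and 2D circularity
    of a spheroid from its segmented projection, assuming a sphere."""
    area = area_px * px_size_um ** 2
    diameter = 2.0 * math.sqrt(area / math.pi)     # equivalent diameter
    volume = (math.pi / 6.0) * diameter ** 3       # sphere assumption
    # 4*pi*A/P^2 equals 1 for a perfect circle, <1 otherwise
    circularity = 4.0 * math.pi * area_px / (perimeter_px ** 2)
    return diameter, volume, circularity

d, v, c = spheroid_morphology(10000, 354.49, 1.0)  # roughly a disk of r=56 px
```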

  19. Intercellular fluorescence background on microscope slides: some problems and solutions for automatic analysis

    NASA Astrophysics Data System (ADS)

    Piper, Jim; Sudar, Damir; Peters, Don; Pinkel, Daniel

    1994-05-01

    Although high contrast between signal and the dark background is often claimed as a major advantage of fluorescence staining in cytology and cytogenetics, in practice this is not always the case and in some circumstances the inter-cellular or, in the case of metaphase preparations, the inter-chromosome background can be both brightly fluorescent and vary substantially across the slide or even across a single metaphase. Bright background results in low image contrast, making automatic detection of metaphase cells more difficult. The background correction strategy employed in automatic search must both cope with variable background and be computationally efficient. The method employed in a fluorescence metaphase finder is presented, and the compromises involved are discussed. A different set of problems arise when the analysis is aimed at accurate quantification of the fluorescence signal. Some insight into the nature of the background in the case of comparative genomic hybridization is obtained by image analysis of data obtained from experiments using cell lines with known abnormal copy numbers of particular chromosome types.

  20. Acoustic Analysis of Inhaler Sounds From Community-Dwelling Asthmatic Patients for Automatic Assessment of Adherence

    PubMed Central

    D'arcy, Shona; Costello, Richard W.

    2014-01-01

    Inhalers are devices which deliver medication to the airways in the treatment of chronic respiratory diseases. When used correctly, inhalers relieve and improve patients' symptoms. However, adherence to inhaler medication has been demonstrated to be poor, leading to reduced clinical outcomes, wasted medication, and higher healthcare costs. There is a clinical need for a system that can accurately monitor inhaler adherence, as currently no method exists to evaluate how patients use their inhalers between clinic visits. This paper presents a method of automatically evaluating inhaler adherence through acoustic analysis of inhaler sounds. An acoustic monitoring device was employed to record the sounds patients produce while using a Diskus dry powder inhaler, in addition to the time and date patients use the inhaler. An algorithm was designed and developed to automatically detect inhaler events from the audio signals and provide feedback regarding patient adherence. The algorithm was evaluated on 407 audio files obtained from 12 community-dwelling asthmatic patients. Results of the automatic classification were compared against two expert human raters. For patient data for which the human raters' Cohen's kappa agreement score was >0.81, results indicated that the algorithm's accuracy was 83% in determining the correct inhaler technique score compared with the raters. This paper has several clinical implications, as it demonstrates the feasibility of using acoustics to objectively monitor patient inhaler adherence and provide real-time personalized medical care for a chronic respiratory illness. PMID:27170883
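
Cohen's kappa, used above to gauge agreement between the two expert raters, corrects raw agreement for agreement expected by chance. A generic sketch (not the authors' implementation):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)
```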

  1. Analysis and Exploitation of Automatically Generated Scene Structure from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Nilosek, David R.

    The recent advancements made in the field of computer vision, along with ever-increasing computational power, have opened up opportunities in the field of automated photogrammetry. Many researchers have focused on using these powerful computer vision algorithms to extract three-dimensional point clouds of scenes from multi-view imagery, with the ultimate goal of creating a photo-realistic scene model. However, geographically accurate three-dimensional scene models have the potential to be exploited for much more than just visualization. This work looks at utilizing automatically generated scene structure from near-nadir aerial imagery to identify and classify objects within the structure, through the analysis of spatial-spectral information. This limitation is imposed because such aerial imagery is commonly available. Popular third-party computer-vision algorithms are used to generate the scene structure. A voxel-based approach for surface estimation is developed using Manhattan-world assumptions. A surface estimation confidence metric is also presented. This approach provides the basis for further analysis of surface materials, incorporating spectral information. Two cases of spectral analysis are examined: when additional hyperspectral imagery of the reconstructed scene is available, and when only R,G,B spectral information can be obtained. A method for registering the surface estimation to hyperspectral imagery, through orthorectification, is developed. Atmospherically corrected hyperspectral imagery is used to assign reflectance values to estimated surface facets for physical simulation with DIRSIG. A spatial-spectral region growing-based segmentation algorithm is developed for the R,G,B limited case, in order to identify possible materials for user attribution. Finally, an analysis of the geographic accuracy of automatically generated three-dimensional structure is performed.
An end-to-end, semi-automated workflow

  2. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. 
By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring
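
A simple pixel-level change-magnitude map, of the kind this dissertation goes beyond, can be sketched as follows: the per-pixel spectral Euclidean distance between two co-registered multispectral images, thresholded statistically. The threshold rule is an assumption for illustration.

```python
import numpy as np

def change_magnitude(img_t1, img_t2):
    """Per-pixel spectral Euclidean distance between two co-registered
    multispectral images of shape (rows, cols, bands)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def change_mask(img_t1, img_t2, k=2.0):
    """Flag pixels whose change magnitude exceeds mean + k*std."""
    mag = change_magnitude(img_t1, img_t2)
    return mag > mag.mean() + k * mag.std()
```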

  3. Automatic yield-line analysis of slabs using discontinuity layout optimization

    PubMed Central

    Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.

    2014-01-01

    The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905

  4. Semi-automatic system for UV images analysis of historical musical instruments

    NASA Astrophysics Data System (ADS)

    Dondi, Piercarlo; Invernizzi, Claudia; Licchelli, Maurizio; Lombardi, Luca; Malagodi, Marco; Rovetta, Tommaso

    2015-06-01

    The selection of representative areas to be analyzed is a common problem in the study of Cultural Heritage items. UV fluorescence photography is an extensively used technique to highlight specific surface features which cannot be observed in visible light (e.g. parts that have been restored or treated with different materials), and it proves to be very effective in the study of historical musical instruments. In this work we propose a new semi-automatic solution for selecting areas with the same perceived color (a simple clue of similar materials) on UV photos, using a specifically designed interactive tool. The proposed method works in two steps: (i) the user selects a small rectangular area of the image; (ii) the program automatically highlights all the areas that have the same color as the selected input. The identification is made by analyzing the image in the HSV color model, the one closest to human perception. The achievable result is more accurate than a manual selection, because the program can also detect points that users fail to recognize as similar due to perceptual illusions. The application has been developed following usability guidelines, and its human-computer interface has been improved after a series of tests performed by expert and non-expert users. All the experiments were performed on UV imagery of the Stradivari violin collection held by the "Museo del Violino" in Cremona.
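
The two-step selection can be sketched in Python: take the mean HSV colour of a user-chosen rectangle, then highlight every pixel within a tolerance of it, treating hue as circular. The tolerance values are hypothetical defaults, not the tool's calibrated ones.

```python
import colorsys
import numpy as np

def select_similar(rgb_img, seed_box, h_tol=0.05, s_tol=0.2, v_tol=0.2):
    """Return a boolean mask of pixels whose HSV colour is close to the
    mean colour of the seed rectangle (r0, r1, c0, c1). rgb_img is a
    float array in [0, 255] of shape (rows, cols, 3)."""
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row]
                    for row in rgb_img / 255.0])
    r0, r1, c0, c1 = seed_box
    ref = hsv[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
    dh = np.abs(hsv[..., 0] - ref[0])
    dh = np.minimum(dh, 1.0 - dh)        # hue distance on the colour circle
    return ((dh <= h_tol)
            & (np.abs(hsv[..., 1] - ref[1]) <= s_tol)
            & (np.abs(hsv[..., 2] - ref[2]) <= v_tol))
```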

  5. Automatic yield-line analysis of slabs using discontinuity layout optimization.

    PubMed

    Gilbert, Matthew; He, Linwei; Smith, Colin C; Le, Canh V

    2014-08-01

    The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905

  6. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed from the human knowledge used in image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge, and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of the city of Sherbrooke (Canada). An overall extraction accuracy of 80% was observed; the correctness rates obtained for the building, road and parking lot classes were 81%, 75% and 60%, respectively.
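
A fuzzy rule of the kind described combines membership degrees of several object properties. The sketch below uses trapezoidal membership functions and the minimum operator for fuzzy AND; the properties, breakpoints and the "building" rule itself are hypothetical illustrations, not the paper's rule base.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal fuzzy membership function rising on [a, b],
    flat on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def building_score(brightness, rectangularity, ndvi):
    """Illustrative fuzzy rule: bright AND rectangular AND not vegetated,
    combined with the minimum operator."""
    bright = trapmf(brightness, 0.3, 0.5, 1.0, 1.01)
    rect = trapmf(rectangularity, 0.6, 0.8, 1.0, 1.01)
    not_veg = trapmf(ndvi, -1.0, -0.99, 0.2, 0.4)
    return min(bright, rect, not_veg)
```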

  7. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis

    NASA Astrophysics Data System (ADS)

    Vos, P. C.; Barentsz, J. O.; Karssemeijer, N.; Huisman, H. J.

    2012-03-01

    In this paper, a fully automatic computer-aided detection (CAD) method is proposed for the detection of prostate cancer. The CAD method consists of multiple sequential steps in order to detect locations that are suspicious for prostate cancer. In the initial stage, a voxel classification is performed using a Hessian-based blob detection algorithm at multiple scales on an apparent diffusion coefficient map. Next, a parametric multi-object segmentation method is applied and the resulting segmentation is used as a mask to restrict the candidate detection to the prostate. The remaining candidates are characterized by performing histogram analysis on multiparametric MR images. The resulting feature set is summarized into a malignancy likelihood by a supervised classifier in a two-stage classification approach. The detection performance for prostate cancer was tested on a screening population of 200 consecutive patients and evaluated using the free-response operating characteristic methodology. The results show that the CAD method obtained sensitivities of 0.41, 0.65 and 0.74 at false positive (FP) levels of 1, 3 and 5 per patient, respectively. In conclusion, this study showed that it is feasible to automatically detect prostate cancer at an FP rate lower than that of systematic biopsy. The CAD method may assist the radiologist in detecting prostate cancer locations and could potentially guide biopsy towards the most aggressive part of the tumour.

  8. Hardware and software system for automatic microemulsion assay evaluation by analysis of optical properties

    NASA Astrophysics Data System (ADS)

    Maeder, Ulf; Schmidts, Thomas; Burg, Jan-Michael; Heverhagen, Johannes T.; Runkel, Frank; Fiebich, Martin

    2010-03-01

    A new hardware device called the Microemulsion Analyzer (MEA), which facilitates the preparation and evaluation of microemulsions, was developed. Microemulsions, consisting of three components (oil, surfactant and water) and prepared on deep-well plates according to the PDMPD method, can be automatically evaluated by means of their optical properties. The ratio of ingredients needed to form a microemulsion strongly depends on the properties and amounts of the ingredients used. A microemulsion assay is set up on deep-well plates to determine these ratios. The optical properties of the mixture change from turbid to transparent as soon as a microemulsion is formed. The MEA consists of a frame and an image-processing and analysis algorithm. The frame itself comprises aluminum, an electroluminescent foil (ELF) and a camera. While the frame keeps the well plate at the correct position and angle, the ELF provides constant illumination of the plate from below. The camera provides an image that is processed by the algorithm to automatically evaluate the turbidity in the wells. Using the determined parameters, a phase diagram is created that visualizes the information. This setup can be used to analyze microemulsion assays and obtain results in a standardized way. In addition, it is possible to perform stability tests of the assay by creating differential stability diagrams after a period of time.
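
The turbid/transparent decision per well can be sketched as follows: on a back-lit plate image, a transparent well transmits more light, so its mean intensity is higher. The circular well model and the threshold are assumptions for illustration, not the MEA's calibrated algorithm.

```python
import numpy as np

def classify_wells(plate_img, well_centers, radius, threshold=0.5):
    """Classify each well as 'transparent' (microemulsion formed) or
    'turbid' by the mean transmitted intensity inside a circular region.
    plate_img is normalised to [0, 1]; well_centers maps well name to
    (row, col)."""
    rows, cols = np.ogrid[:plate_img.shape[0], :plate_img.shape[1]]
    results = {}
    for well, (r, c) in well_centers.items():
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        mean = plate_img[mask].mean()
        results[well] = 'transparent' if mean > threshold else 'turbid'
    return results
```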

  9. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-01

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. PMID:27139215
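
The strategy above (local minima as the initial baseline vector, iterative rejection of minima belonging to peaks, interpolation back to the full chromatogram) can be sketched in Python. The z-score outlier rule and iteration cap are assumptions, not the paper's exact convergence criteria.

```python
import numpy as np

def baseline_correct(signal, z=2.0, n_iter=50):
    """Estimate and subtract background drift from a 1-D chromatogram."""
    x = np.arange(len(signal))
    # local minima (plus both endpoints) form the initial baseline vector
    idx = np.array([0] + [i for i in range(1, len(signal) - 1)
                          if signal[i] <= signal[i - 1]
                          and signal[i] <= signal[i + 1]]
                   + [len(signal) - 1])
    for _ in range(n_iter):
        v = signal[idx].astype(float)
        # baseline predicted at each interior minimum from its neighbours
        pred = v[:-2] + (v[2:] - v[:-2]) * (idx[1:-1] - idx[:-2]) \
            / (idx[2:] - idx[:-2])
        resid = v[1:-1] - pred
        # minima sitting on peak flanks lie well above the local trend
        out = resid > resid.mean() + z * resid.std()
        if not out.any():
            break
        idx = np.delete(idx, np.where(out)[0] + 1)
    baseline = np.interp(x, idx, signal[idx].astype(float))
    return signal - baseline
```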

  10. Automatic brain tumour detection and neovasculature assessment with multiseries MRI analysis.

    PubMed

    Szwarc, Pawel; Kawa, Jacek; Rudzki, Marcin; Pietka, Ewa

    2015-12-01

    In this paper a novel multi-stage automatic method for brain tumour detection and neovasculature assessment is presented. First, brain symmetry is exploited to register the magnetic resonance (MR) series analysed. Then, the intracranial structures are found and the region of interest (ROI) is constrained within them to the tumour and peritumoural areas using the Fluid-Attenuated Inversion Recovery (FLAIR) series. Next, the contrast-enhanced lesions are detected on the basis of T1-weighted (T1W) differential images acquired before and after contrast medium administration. Finally, their vascularisation is assessed based on the Regional Cerebral Blood Volume (RCBV) perfusion maps. The relative RCBV (rRCBV) map is calculated in relation to healthy white matter, also found automatically, and visualised on the analysed series. Three main types of brain tumours, i.e. high-grade (HG) gliomas, metastases and meningiomas, have been subjected to the analysis. The results of contrast-enhanced lesion detection have been compared with manual delineations performed independently by two experts, yielding 64.84% sensitivity, 99.89% specificity and a 71.83% Dice Similarity Coefficient (DSC) for the twenty analysed studies of subjects with diagnosed brain tumours. PMID:26183648
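
The Dice Similarity Coefficient used to compare automatic detections against expert delineations measures the overlap of two binary masks. A generic sketch, not the paper's evaluation code:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice Similarity Coefficient between two binary masks:
    2*|A & B| / (|A| + |B|), defined as 1.0 when both masks are empty."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```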

  11. Two-Stage Automatic Calibration and Predictive Uncertainty Analysis of a Semi-distributed Watershed Model

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Radcliffe, D. E.; Doherty, J.

    2004-12-01

    monthly flow produced a very good fit to the measured data. Nash-Sutcliffe coefficients for daily and monthly flow over the calibration period were 0.60 and 0.86, respectively; they were 0.61 and 0.87, respectively, over the validation period. Regardless of the level of model-to-measurement fit, the nonuniqueness of the optimal parameter values makes uncertainty analysis necessary for model prediction. The nonlinear prediction uncertainty analysis showed that caution must be exercised when using the SWAT model to predict instantaneous peak flows. The PEST (Parameter Estimation) free software was used to conduct the two-stage automatic calibration and prediction uncertainty analysis of the SWAT model.
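
The Nash-Sutcliffe coefficient quoted above compares model error against the variance of the observations: 1 is a perfect fit, 0 means the model is no better than the observed mean. A generic sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency:
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / var
```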

  12. Group-wise automatic mesh-based analysis of cortical thickness

    NASA Astrophysics Data System (ADS)

    Vachet, Clement; Cody Hazlett, Heather; Niethammer, Marc; Oguz, Ipek; Cates, Joshua; Whitaker, Ross; Piven, Joseph; Styner, Martin

    2011-03-01

    The analysis of neuroimaging data from pediatric populations presents several challenges. There are normal variations in brain shape from infancy to adulthood and normal developmental changes related to tissue maturation. Measurement of cortical thickness is one important way to analyze such developmental tissue changes. We developed a novel framework that allows group-wise automatic mesh-based analysis of cortical thickness. Our approach is divided into four main parts. First an individual pre-processing pipeline is applied on each subject to create genus-zero inflated white matter cortical surfaces with cortical thickness measurements. The second part performs an entropy-based group-wise shape correspondence on these meshes using a particle system, which establishes a trade-off between an even sampling of the cortical surfaces and the similarity of corresponding points across the population using sulcal depth information and spatial proximity. A novel automatic initial particle sampling is performed using a matched 98-lobe parcellation map prior to a particle-splitting phase. Third, corresponding re-sampled surfaces are computed with interpolated cortical thickness measurements, which are finally analyzed via a statistical vertex-wise analysis module. This framework consists of a pipeline of automated 3D Slicer compatible modules. It has been tested on a small pediatric dataset and incorporated in an open-source C++ based high-level module called GAMBIT. GAMBIT's setup allows efficient batch processing, grid computing and quality control. The current research focuses on the use of an average template for correspondence and surface re-sampling, as well as thorough validation of the framework and its application to clinical pediatric studies.

  13. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes's schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. 
This is a major advantage over other statistical data analysis systems

  14. [A new method of automatic analysis of tongue deviation using self-correction].

    PubMed

    Zhu, Mingfeng; Du, Jianqiang; Meng, Fan; Zhang, Kang; Ding, Chenghua

    2012-02-01

    This article analyzes the existing method of tongue-deviation analysis and introduces a new self-correcting method that avoids its shortcomings. Comparisons and analyses were made of current central-axis extraction methods, showing that these methods are not suitable for the central-axis extraction of tongue images. Because the old method relied on area symmetry to extract the central axis and could therefore fail to find it, we introduced a shape-symmetry analysis method for central-axis extraction. This method is capable of correcting the edge of the tongue root automatically, and it improves the accuracy of central-axis extraction. In addition, a mouth-corner analysis method based on the hue variation of tongue images was introduced. In comparative experiments, the new method was both more accurate and more efficient than the old one. PMID:22404028

  15. Application of automatic image analysis for the investigation of autoclaved aerated concrete structure

    SciTech Connect

    Petrov, I.; Schlegel, E. (Inst. fuer Silikattechnik)

    1994-01-01

    Autoclaved aerated concrete (AAC) is formed from small-grained mixtures of raw materials with Al powder as an air-entraining agent. Owing to its high porosity, AAC has a low bulk density, which gives it very good heat-insulating qualities. Automatic image analysis, in connection with stereology and stochastic geometry, was used to describe the size distribution of air pores in autoclaved aerated concrete. The experiments were carried out on AAC samples with extremely different bulk densities and compressive strengths. The assumption of an elliptic pore shape leads to an unambiguous characterization of the structure by bi-histograms. It will be possible to calculate the spatial pore size distribution from these histograms if the pores are assumed to be spheroids. A marked point field model and the pair correlation function g_a(r) were used to describe the pore structure.

  16. Automatic quantification of morphological features for hepatic trabeculae analysis in stained liver specimens.

    PubMed

    Ishikawa, Masahiro; Murakami, Yuri; Ahi, Sercan Taha; Yamaguchi, Masahiro; Kobayashi, Naoki; Kiyuna, Tomoharu; Yamashita, Yoshiko; Saito, Akira; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2016-04-01

    This paper proposes a digital image analysis method to support quantitative pathology by automatically segmenting the hepatocyte structure and quantifying its morphological features. To structurally analyze histopathological hepatic images, we isolate the trabeculae by extracting the sinusoids, fat droplets, and stromata. We then measure the morphological features of the extracted trabeculae, divide the image into cords, and calculate the feature values of the local cords. We propose a method of calculating the nuclear-cytoplasmic ratio, nuclear density, and number of layers using the local cords. Furthermore, we evaluate the effectiveness of the proposed method using surgical specimens. The proposed method was found to be an effective method for the quantification of the Edmondson grade. PMID:27335894

  17. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
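
The paper's ASA mechanism is specific to the APA equations, but the underlying idea, adjusting the step size from a local error estimate, can be illustrated with a generic step-doubling RK4 integrator (an assumed stand-in, not the authors' scheme):

```python
import math

def integrate_adaptive(f, y0, t0, t1, tol=1e-8, h0=0.1):
    """Step-doubling adaptive integration of dy/dt = f(t, y): take one
    full RK4 step and two half steps, use their difference as an error
    estimate, and grow or shrink the step size h accordingly."""
    def rk4(y, t, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)                     # never overshoot t1
        full = rk4(y, t, h)
        half = rk4(rk4(y, t, h / 2), t + h / 2, h / 2)
        err = abs(half - full)
        if err < tol:                          # accept the step
            t, y = t + h, half
            if err < tol / 32:                 # comfortably accurate: grow h
                h *= 2
        else:                                  # reject and retry smaller
            h /= 2
    return y
```

The automatic growth/shrink of h is what saves the repeated manual tuning a fixed-step scheme would need.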

  18. Analysis of steranes and triterpanes in geolipid extracts by automatic classification of mass spectra

    NASA Technical Reports Server (NTRS)

    Wardroper, A. M. K.; Brooks, P. W.; Humberston, M. J.; Maxwell, J. R.

    1977-01-01

    A computer method is described for the automatic classification of triterpanes and steranes into gross structural type from their mass spectral characteristics. The method has been applied to the spectra obtained by gas-chromatographic/mass-spectroscopic analysis of two mixtures of standards and of hydrocarbon fractions isolated from Green River and Messel oil shales. Almost all of the steranes and triterpanes identified previously in both shales were classified, in addition to a number of new components. The results indicate that classification of such alkanes is possible with a laboratory computer system. The method has application to diagenesis and maturation studies as well as to oil/oil and oil/source rock correlations in which rapid screening of large numbers of samples is required.

  19. Development of automatic image analysis algorithms for protein localization studies in budding yeast

    NASA Astrophysics Data System (ADS)

    Logg, Katarina; Kvarnström, Mats; Diez, Alfredo; Bodvard, Kristofer; Käll, Mikael

    2007-02-01

    Microscopy of fluorescently labeled proteins has become a standard technique for live cell imaging. However, it is still a challenge to systematically extract quantitative data from large sets of images in an unbiased fashion, which is particularly important in high-throughput or time-lapse studies. Here we describe the development of a software package aimed at automatic quantification of abundance and spatio-temporal dynamics of fluorescently tagged proteins in vivo in the budding yeast Saccharomyces cerevisiae, one of the most important model organisms in proteomics. The image analysis methodology is based on first identifying cell contours from bright-field images, and then using this information to measure and statistically analyse protein abundance in specific cellular domains from the corresponding fluorescence images. The applicability of the procedure is exemplified for two nuclear localized GFP-tagged proteins, Mcm4p and Nrm1p.
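
The two-stage idea, segment cells from bright-field images and then quantify fluorescence inside each contour, can be sketched with plain connected-component labelling followed by per-cell averaging (hypothetical helper names, not the described software):

```python
import numpy as np

def label_cells(mask):
    """4-connected component labelling of a binary cell mask (as would
    be derived from bright-field contours); BFS over numpy array."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    stack.append((rr, cc))
    return labels, current

def mean_fluorescence(labels, n, fluo):
    """Mean fluorescence intensity inside each labelled cell region."""
    return {i: float(fluo[labels == i].mean()) for i in range(1, n + 1)}
```

Restricting the average to a sub-mask (e.g. the nucleus) would give the domain-specific abundances the abstract describes.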

  20. Automatic Denoising of Functional MRI Data: Combining Independent Component Analysis and Hierarchical Fusion of Classifiers

    PubMed Central

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-01-01

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject “at rest”). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing “signal” (brain activity) can be distinguished from the “noise” components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX (“FMRIB’s ICA-based X-noiseifier”), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of
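
As one example of the kind of temporal feature described (proportion of temporal fluctuations at high frequencies), the fraction of spectral power above a cutoff can be measured from a component time series. The function below is an illustrative guess at such a feature, not FIX's exact definition:

```python
import numpy as np

def high_freq_fraction(ts, tr, cutoff_hz=0.1):
    """Fraction of a component time series' spectral power above
    `cutoff_hz`; `tr` is the repetition time in seconds. Noise
    components (e.g. scanner artefacts) often score high here."""
    ts = np.asarray(ts, float) - np.mean(ts)
    power = np.abs(np.fft.rfft(ts)) ** 2
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    total = power[1:].sum()            # exclude the DC bin
    if total == 0:
        return 0.0
    return float(power[freqs > cutoff_hz].sum() / total)
```

Many such features, stacked into a vector per component, form the classifier input the abstract describes.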

  1. Cold Flow Properties of Biodiesel by Automatic and Manual Analysis Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Biodiesel from most common feedstocks has inferior cold flow properties compared to conventional diesel fuel. Blends with as little as 10 vol% biodiesel content typically have significantly higher cloud point (CP), pour point (PP) and cold filter plugging point (CFPP) than No. 2 grade diesel fuel (...

  2. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    NASA Astrophysics Data System (ADS)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have been proven to be very effective. Unfortunately, the monitoring of the treatment procedure remains manual, and hence time consuming and prone to human error. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer, with subsequent further investigation to determine the effectiveness of any treatment process followed. In ophthalmology an ulcer area is detected for further inspection via luminous excitation of a dye. Usually in the imaging systems utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour, as compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially, a pre-processing stage that carries out a local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly, we deal with the removal of potential reflections from the affected areas by making use of image registration of two candidate corneal images based on the detected corneal areas. Thirdly, the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary detected via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breakages of the corneal boundary due to occlusion, noise, and image-quality degradation. The ratio of the ulcer area confined within the corneal area to the corneal area is used as the measure of comparison. We demonstrate the use of the proposed tool in the analysis of the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of corneal size over time.
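
The final measure, ulcer area over corneal area, could be approximated as below once the corneal mask is known. The green-dominance rule and its margin are assumptions for illustration, not the paper's segmentation:

```python
import numpy as np

def ulcer_to_cornea_ratio(rgb, cornea_mask, green_margin=20):
    """Flag pixels whose green channel dominates red and blue by
    `green_margin` (the dye-excited ulcer), restrict them to the
    corneal region, and return the area ratio used for comparison."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    ulcer = (g - np.maximum(r, b) > green_margin) & cornea_mask
    return float(ulcer.sum() / cornea_mask.sum())
```

Tracking this ratio across visits gives the treatment-effectiveness curve the abstract describes.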

  3. A system for automatic recording and analysis of motor activity in rats.

    PubMed

    Heredia-López, Francisco J; May-Tuyub, Rossana M; Bata-García, José L; Góngora-Alfaro, José L; Alvarez-Cervera, Fernando J

    2013-03-01

    We describe the design and evaluation of an electronic system for the automatic recording of motor activity in rats. The device continually locates the position of a rat inside a transparent acrylic cube (50 cm/side) with infrared sensors arranged on its walls so as to correspond to the x-, y-, and z-axes. The system is governed by two microcontrollers. The raw data are saved in a text file within a secure digital memory card, and offline analyses are performed with a library of programs that automatically compute several parameters based on the sequence of coordinates and the time of occurrence of each movement. Four analyses can be made at specified time intervals: traveled distance (cm), movement speed (cm/s), time spent in vertical exploration (s), and thigmotaxis (%). In addition, three analyses are made for the total duration of the experiment: time spent at each x-y coordinate pair (min), time spent on vertical exploration at each x-y coordinate pair (s), and frequency distribution of vertical exploration episodes of distinct durations. User profiles of frequently analyzed parameters may be created and saved for future experimental analyses, thus obtaining a full set of analyses for a group of rats in a short time. The performance of the developed system was assessed by recording the spontaneous motor activity of six rats, while their behaviors were simultaneously videotaped for manual analysis by two trained observers. A high and significant correlation was found between the values measured by the electronic system and by the observers. PMID:22707401
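
Parameters such as traveled distance and movement speed follow directly from the recorded coordinate sequence; a minimal sketch, assuming timestamped (t, x, y) fixes in seconds and centimetres:

```python
import math

def path_stats(samples):
    """Distance travelled (cm) and mean speed (cm/s) from a sequence of
    (t_seconds, x_cm, y_cm) position fixes, as in the rat-tracking
    system's offline analysis."""
    dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    duration = samples[-1][0] - samples[0][0]
    speed = dist / duration if duration > 0 else 0.0
    return dist, speed
```

The other reported measures (time immobile, stop counts, per-cell occupancy) are similar reductions over the same coordinate stream.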

  4. An automatic granular structure generation and finite element analysis of heterogeneous semi-solid materials

    NASA Astrophysics Data System (ADS)

    Sharifi, Hamid; Larouche, Daniel

    2015-09-01

    The quality of cast metal products depends on the capacity of the semi-solid metal to sustain the stresses generated during casting. Predicting the evolution of these stresses with accuracy in the solidification interval should be highly helpful to avoid the formation of defects like hot tearing. This task is however very difficult because of the heterogeneous nature of the material. In this paper, we propose to evaluate the mechanical behaviour of a metal during solidification using a mesh generation technique of the heterogeneous semi-solid material for a finite element analysis at the microscopic level. This task is done on a two-dimensional (2D) domain in which the granular structure of the solid phase is generated, surrounded by an intergranular and interdendritic liquid phase. Some basic solid grains are first constructed and projected in the 2D domain with random orientations and scale factors. Depending on their orientation, the basic grains are combined to produce larger grains or separated by a liquid film. Different basic grain shapes can produce different granular structures of the mushy zone. As a result, using this automatic grain generation procedure, we can investigate the effect of grain shapes and sizes on the thermo-mechanical behaviour of the semi-solid material. The granular models are automatically converted to finite element meshes. The solid grains and the liquid phase are meshed properly using quadrilateral elements. This method has been used to simulate the microstructure of a binary aluminium-copper alloy (Al-5.8 wt% Cu) when the fraction solid is 0.92. Using the finite element method and the Mie-Grüneisen equation of state for the liquid phase, the transient mechanical behaviour of the mushy zone under tensile loading has been investigated. The stress distribution and the bridges, which are formed during the tensile loading, have been detected.

  5. Fractal analysis of elastographic images for automatic detection of diffuse diseases of salivary glands: preliminary results.

    PubMed

    Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore

    2013-01-01

    The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of "real-time" elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology. PMID:23762183

  6. Semi-automatic analysis of rock fracture orientations from borehole wall images

    SciTech Connect

    Thapa, B.B.; Hughett, P.; Karasaki, K.

    1997-01-01

    The authors develop a semiautomatic method of identifying rock fractures and analyzing their orientations from digital images of borehole walls. This method is based on an algorithm related to the Hough transform which is modified to find sinusoidal rather than linear patterns. The algorithm uses the high-intensity contrast between the fracture aperture and the rock wall, as well as the sinusoidal trajectory defined by the intersection of the borehole and the fracture. The analysis rate of the algorithm itself is independent of fracture contrast and network complexity. The method has successfully identified fractures both in test cases containing several fractures in a noisy background and in real borehole images. The analysis rate was 0.3--1.2 minutes/m of input data, compared to an average of 12 minutes/m using an existing interactive method. An automatic version under development should open new possibilities for site characterization, such as real-time exploration and analysis of tunnel stability and support requirements as construction proceeds.
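
A Hough transform modified for sinusoids can be sketched as voting over amplitude, phase, and depth offset for the trace row = m + A*sin(2*pi*col/width + phi) on the unrolled borehole image; the discretisation below is illustrative, not the authors' implementation:

```python
import math
import numpy as np

def sinusoid_hough(points, width, amps, phases, depth_bins):
    """Hough-style voting for a sinusoidal fracture trace on an unrolled
    borehole image: each edge point (col, row) votes for the depth
    offsets m consistent with each candidate (amplitude, phase)."""
    acc = np.zeros((len(amps), len(phases), depth_bins), dtype=int)
    for col, row in points:
        theta = 2 * math.pi * col / width
        for i, a in enumerate(amps):
            for j, p in enumerate(phases):
                m = int(round(row - a * math.sin(theta + p)))
                if 0 <= m < depth_bins:
                    acc[i, j, m] += 1
    # the accumulator peak gives the best-supported fracture parameters
    i, j, m = np.unravel_index(acc.argmax(), acc.shape)
    return amps[i], phases[j], m
```

Because every edge point votes regardless of contrast, the peak search cost is independent of fracture contrast, matching the abstract's claim about analysis rate.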

  7. Performance analysis of distributed applications using automatic classification of communication inefficiencies

    DOEpatents

    Vetter, Jeffrey S.

    2005-02-01

    The method and system described herein presents a technique for performance analysis that helps users understand the communication behavior of their message passing applications. The method and system described herein may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system described herein trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system described herein adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system described herein may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
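
The classification step can be sketched, in miniature, as learning a threshold on one feature from labelled microbenchmark runs (a one-node "decision tree"); the wait-time-ratio feature is an assumption for illustration, not the patent's feature set:

```python
def train_stump(values, labels):
    """Pick the threshold on a scalar feature (e.g. wait-time/total-time
    ratio) that best separates efficient (0) from inefficient (1)
    operations, calibrated from labelled microbenchmark runs."""
    best = (None, -1)
    for t in sorted(set(values)):
        correct = sum((v > t) == bool(l) for v, l in zip(values, labels))
        if correct > best[1]:
            best = (t, correct)
    return best[0]

def classify(value, threshold):
    """1 = inefficient, 0 = efficient."""
    return 1 if value > threshold else 0
```

Training on microbenchmarks measured on the target system is what lets the threshold adapt to that system's configuration, as the abstract notes.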

  8. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the freeware software ImageJ developed by the National Institute of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations of the physical characteristics of some examples of fragmental impactites.

  9. Automatic target detection algorithm for foliage-penetrating ultrawideband SAR data using split spectral analysis

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Kapoor, Ravinder; Ressler, Marc A.

    1999-07-01

    We present an automatic target detection (ATD) algorithm for foliage penetrating (FOPEN) ultra-wideband (UWB) synthetic aperture radar (SAR) data using split spectral analysis. Split spectral analysis is commonly used in the ultrasonic, non-destructive evaluation of materials using wide band pulses for flaw detection. In this paper, we show the application of split spectral analysis for detecting obscured targets in foliage using UWB pulse returns. To discriminate targets from foliage, the data spectrum is split into several bands, namely, 20 to 75, 75 to 150, ..., 825 to 900 MHz. An ATD algorithm is developed based on the relative energy levels in various bands, the number of bands containing significant energy (spread of energy), and chip size (number of crossrange and range bins). The algorithm is tested on the FOPEN UWB SAR data of foliage and vehicles obscured by foliage collected at Aberdeen Proving Ground, MD. The paper presents the various split spectral parameters used in the algorithm and discusses the rationale for their use.
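
Splitting the return's spectrum into bands and measuring per-band energy, the core of split spectral analysis, might look like this (hypothetical `band_energies`; the test uses toy frequencies rather than the 20-900 MHz bands):

```python
import numpy as np

def band_energies(signal, fs, edges):
    """Split-spectrum energies: FFT the pulse return sampled at `fs` Hz,
    then sum the power inside each band defined by consecutive `edges`
    (e.g. 20e6, 75e6, ..., 900e6 for the FOPEN bands)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return [float(power[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(edges, edges[1:])]
```

Relative band energies and the count of bands with significant energy are then simple reductions over this list, as the ATD algorithm requires.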

  10. Automatic system for analysis of locomotor activity in rodents--a reproducibility study.

    PubMed

    Aragão, Raquel da Silva; Rodrigues, Marco Aurélio Benedetti; de Barros, Karla Mônica Ferraz Teixeira; Silva, Sebastião Rogério Freitas; Toscano, Ana Elisa; de Souza, Ricardo Emmanuel; Manhães-de-Castro, Raul

    2011-02-15

    Automatic analysis of locomotion in studies of behavior and development is of great importance because it eliminates the subjective influence of evaluators on the study. This study aimed to develop and test the reproducibility of a system for automated analysis of locomotor activity in rats. For this study, 15 male Wistar rats were evaluated at P8, P14, P17, P21, P30 and P60. A monitoring system was developed that consisted of an open field of 1 m in diameter with a black surface, an infrared digital camera and a video capture card. The animals were filmed for 2 min as they moved freely in the field. The images were sent to a computer connected to the camera. Afterwards, the videos were analyzed using software developed in MATLAB® (mathematical software). The software was able to recognize the pixels constituting the image and extract the following parameters: distance traveled, average speed, average power, time immobile, number of stops, time spent in different areas of the field and time immobile/number of stops. All data were exported for further analysis. The system was able to effectively extract the desired parameters. Thus, it was possible to observe developmental changes in the patterns of movement of the animals. We also discuss similarities and differences between this system and previously described systems. PMID:21182870

  11. Automatic generation of skeletal mechanisms for ignition combustion based on level of importance analysis

    SciTech Connect

    Loevaas, Terese

    2009-07-15

    A level of importance (LOI) selection parameter is employed in order to identify species with generally low importance to the overall accuracy of a chemical model. This enables elimination of the minor reaction paths in which these species are involved. The generation of such skeletal mechanisms is performed automatically in a pre-processing step ranking species according to their level of importance. This selection criterion is a combined parameter based on a time scale and sensitivity analysis, identifying both short lived species and species with respect to which the observable of interest has low sensitivity. In this work a careful element flux analysis demonstrates that such species do not interact in major reaction paths. Employing the LOI procedure replaces the previous method of identifying redundant species through a two step procedure involving a reaction flow analysis followed by a sensitivity analysis. The flux analysis is performed using DARS©, a digital analysis tool modelling reactive systems. Simplified chemical models are generated based on a detailed ethylene mechanism involving 111 species and 784 reactions (1566 forward and backward reactions) proposed by Wang et al. Eliminating species from detailed mechanisms introduces errors in the predicted combustion parameters. In the present work these errors are systematically studied for a wide range of conditions, including temperature, pressure and mixtures. Results show that the accuracy of simplified models is particularly lowered when the initial temperatures are close to the transition between low- and high-temperature chemistry. A speed-up factor of 5 is observed when using a simplified model containing only 27% of the original species and 19% of the original reactions. (author)

  12. Automatic Differentiation Package

    Energy Science and Technology Software Center (ESTSC)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization, and uncertainty quantification.
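
Sacado itself is C++, but the operator-overloading idea behind its forward-mode class can be shown in a few lines of Python using dual numbers (an analogy, not Sacado's API):

```python
import math

class Dual:
    """Forward-mode AD value: carries (value, derivative) and propagates
    both through overloaded arithmetic, like Sacado's forward class."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def sin(self):
        # chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

x = Dual(2.0, 1.0)        # seed dx/dx = 1
y = x * x * x + 3 * x     # f(x) = x^3 + 3x, so f(2) = 14, f'(2) = 15
```

Existing code written against such a type computes exact derivatives with no finite-difference step-size tuning, which is the point of the operator-overloading approach.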

  13. Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    PubMed Central

    Jochem, Andreas; Höfle, Bernhard; Rutzinger, Martin; Pfeifer, Norbert

    2009-01-01

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature to decompose the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by the incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method detected fully automatically a subset of 809 out of 1,071 roof planes where the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m². PMID:22346695

  14. Automatic Behavior Analysis During a Clinical Interview with a Virtual Human.

    PubMed

    Rizzo, Albert; Lucas, Gale; Gratch, Jonathan; Stratou, Giota; Morency, Louis-Philippe; Chavez, Kenneth; Shilling, Russ; Scherer, Stefan

    2016-01-01

    SimSensei is a Virtual Human (VH) interviewing platform that uses off-the-shelf sensors (i.e., webcams, Microsoft Kinect and a microphone) to capture and interpret real-time audiovisual behavioral signals from users interacting with the VH system. The system was specifically designed for clinical interviewing and health care support by providing a face-to-face interaction between a user and a VH that can automatically react to the inferred state of the user through analysis of behavioral signals gleaned from the user's facial expressions, body gestures and vocal parameters. Akin to how non-verbal behavioral signals have an impact on human-to-human interaction and communication, SimSensei aims to capture and infer user state from signals generated by the user's non-verbal communication, to improve engagement between a VH and a user, and to quantify user state from the data captured across a 20-minute interview. Results from a sample of service members (SMs) who were interviewed before and after a deployment to Afghanistan indicate that SMs reveal more PTSD symptoms to the VH than they report on the Post Deployment Health Assessment. Pre/post deployment facial expression analysis indicated more sad expressions and fewer happy expressions at post-deployment. PMID:27046598

  15. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    PubMed

    Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E

    2014-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, from simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address. PMID:25642185
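
The processing engine's dependency map boils down to a topological ordering of modules; a minimal sketch with invented module names (no cycle detection):

```python
def execution_order(deps):
    """Given module -> list of upstream modules it depends on, return an
    order in which every module runs only after its dependencies,
    sketching aa's upstream/downstream dependency map."""
    order, seen = [], set()
    def visit(m):
        if m in seen:
            return
        seen.add(m)
        for up in deps.get(m, []):
            visit(up)           # ensure upstream modules come first
        order.append(m)
    for m in deps:
        visit(m)
    return order
```

Modules with no ordering constraint between them (here, group statistics and MVPA after shared preprocessing) are exactly the independent tasks the engine can dispatch in parallel.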

  17. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% by suitable grazing land.

  18. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    PubMed

Ortner, Mathias; Descombes, Xavier; Zerubia, Josiane

    2008-01-01

This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute), consisting of low-quality DEMs of various types. PMID:18000328

  19. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    PubMed

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speeds, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The identified contributing factors imply that on the urban expressway, lower speeds and higher speed variation could significantly increase crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature. PMID:26722989

  20. Automatic detection of basal cell carcinoma using telangiectasia analysis in dermoscopy skin lesion images

    PubMed Central

    Cheng, Beibei; Erdos, David; Stanley, Ronald J.; Stoecker, William V.; Calcara, David A.; Gómez, David D.

    2011-01-01

Background Telangiectasia, dilated blood vessels of small, varying diameter near the surface of the skin, are critical dermoscopy structures used in the detection of basal cell carcinoma (BCC). Distinguishing these vessels from other telangiectasia, which are commonly found in sun-damaged skin, is challenging. Methods Image analysis techniques are investigated to automatically find the vessel structures found in BCC. The primary screen for vessels uses an optimized local color drop technique. A noise filter is developed to eliminate false-positive structures, primarily bubbles, hair, and blotch and ulcer edges. From the telangiectasia mask containing candidate vessel-like structures, shape, size and normalized count features are computed to facilitate the discrimination of benign skin lesions from BCCs with telangiectasia. Results Experimental results yielded a diagnostic accuracy as high as 96.7% using a neural network classifier for a data set of 59 BCCs and 152 benign lesions for skin lesion discrimination based on features computed from the telangiectasia masks. Conclusion In current clinical practice, it is possible to find smaller BCCs by dermoscopy than by clinical inspection. Although almost all of these small BCCs have telangiectasia, the vessels can be short and thin. Normalization of lengths and areas helps to detect these smaller BCCs. PMID:23815446

  1. Sensitivity analysis of a mixed-phase chemical mechanism using automatic differentiation

    SciTech Connect

    Zhang, Y.; Easter, R.C.

    1998-08-01

A sensitivity analysis of a comprehensive mixed-phase chemical mechanism is conducted under a variety of atmospheric conditions. The local sensitivities of gas and aqueous phase species concentrations with respect to a variety of model parameters are calculated using the novel automatic differentiation ADIFOR tool. The main chemical reaction pathways in all phases, interfacial mass transfer processes, and ambient physical parameters that affect tropospheric O3 formation and O3-precursor relations under all modeled conditions are identified and analyzed. The results show that the presence of clouds not only reduces many gas phase species concentrations and the total oxidizing capacity but alters O3-precursor relations. Decreases in gas phase concentrations and photochemical formation rates of O3 can be up to 9% and 100%, respectively, depending on the preexisting atmospheric conditions. The decrease in O3 formation is primarily caused by the aqueous phase reactions of O2- with dissolved HO2 and O3 under most cloudy conditions. © 1998 American Geophysical Union

  2. Sensitivity Analysis of Photochemical Indicators for O3 Chemistry Using Automatic Differentiation

    SciTech Connect

    Zhang, Yang; Bischof, Christian H.; Easter, Richard C.; Wu, Po-Ting

    2005-05-01

Photochemical indicators for determination of O3-NOx-ROG sensitivity and their sensitivity to model parameters are studied for a variety of polluted conditions using a comprehensive mixed-phase chemistry box model and the novel automatic differentiation ADIFOR tool. The main chemical reaction pathways in all phases, interfacial mass transfer processes, and ambient physical parameters that affect the indicators are identified and analyzed. Condensed mixed-phase chemical mechanisms are derived from the sensitivity analysis. Our results show that cloud chemistry has a significant impact on the indicators and their sensitivities, particularly on those involving H2O2, HNO3, HCHO, and NOz. Caution should be taken when applying the established threshold values of indicators in regions with large cloud coverage. Among the commonly used indicators, NOy and O3/NOy are relatively insensitive to most model parameters, whereas indicators involving H2O2, HNO3, HCHO, and NOz are highly sensitive to changes in initial species concentrations, reaction rate constants, equilibrium constants, temperature, relative humidity, cloud droplet size, and cloud water content.

  3. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336

  4. A visual latent semantic approach for automatic analysis and interpretation of anaplastic medulloblastoma virtual slides.

    PubMed

    Cruz-Roa, Angel; González, Fabio; Galaro, Joseph; Judkins, Alexander R; Ellison, David; Baccon, Jennifer; Madabhushi, Anant; Romero, Eduardo

    2012-01-01

A method for automatic analysis and interpretation of histopathology images is presented. The method uses a representation of the image data set based on bag-of-features histograms built from a visual dictionary of Haar-based patches, and a novel visual latent semantic strategy for characterizing the visual content of a set of images. One important contribution of the method is the provision of an interpretability layer, which is able to explain a particular classification by visually mapping the most important visual patterns associated with such classification. The method was evaluated on a challenging problem involving automated discrimination of medulloblastoma tumors, based on image-derived attributes from whole slide images, as anaplastic or non-anaplastic. The data set comprised 10 labeled histopathological patient studies, 5 anaplastic and 5 non-anaplastic, with 750 square images cropped randomly from the cancerous region of the whole slide image per study. The experimental results show that the new method is competitive in terms of classification accuracy, achieving 0.87 on average. PMID:23285547
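The bag-of-features step described above can be sketched with toy values. The dictionary and patch vectors below are invented for illustration, not the paper's Haar-based patches: each patch is assigned to its nearest visual word, and the image is described by the histogram of word counts.

```python
# Assign each patch to its nearest visual word (squared Euclidean distance)
# and build the bag-of-features histogram for the image.
def nearest_word(patch, dictionary):
    return min(range(len(dictionary)),
               key=lambda i: sum((p - w) ** 2 for p, w in zip(patch, dictionary[i])))

def bag_of_features(patches, dictionary):
    hist = [0] * len(dictionary)
    for patch in patches:
        hist[nearest_word(patch, dictionary)] += 1
    return hist

dictionary = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # 3 toy visual words
patches = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.8]]
hist = bag_of_features(patches, dictionary)         # -> [1, 2, 1]
```

In the paper the histogram, not the raw pixels, is what feeds the latent semantic analysis and classification stages.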

  5. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often become trapped in a local maximum. PMID:23766941
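A toy landscape makes the local-maximum pitfall concrete. The performance function, step size, and start points below are invented for illustration; the point is only that plain hill climbing stops at whichever peak is nearest to where it starts.

```python
# A 1-D "performance landscape" with a local peak at p=2 (height 1.0)
# and the global peak at p=8 (height 2.0).
def performance(p):
    return max(1.0 - abs(p - 2) * 0.5, 2.0 - abs(p - 8) * 0.5, 0.0)

def hill_climb(p, step=0.5):
    while True:
        candidates = [p - step, p, p + step]
        best = max(candidates, key=performance)
        if best == p:
            return p          # no neighbour improves: a (possibly local) maximum
        p = best

assert abs(hill_climb(1.0) - 2.0) < 1e-9   # trapped at the local peak
assert abs(hill_climb(6.0) - 8.0) < 1e-9   # reaches the global peak
```

Strategies that can escape such traps (restarts, genetic algorithms, simulated annealing) trade extra evaluations for robustness against exactly this behaviour.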

  6. Automatic Robust Neurite Detection and Morphological Analysis of Neuronal Cell Cultures in High-content Screening

    PubMed Central

    Wu, Chaohong; Schulte, Joost; Sepp, Katharine J.; Littleton, J. Troy

    2011-01-01

    Cell-based high content screening (HCS) is becoming an important and increasingly favored approach in therapeutic drug discovery and functional genomics. In HCS, changes in cellular morphology and biomarker distributions provide an information-rich profile of cellular responses to experimental treatments such as small molecules or gene knockdown probes. One obstacle that currently exists with such cell-based assays is the availability of image processing algorithms that are capable of reliably and automatically analyzing large HCS image sets. HCS images of primary neuronal cell cultures are particularly challenging to analyze due to complex cellular morphology. Here we present a robust method for quantifying and statistically analyzing the morphology of neuronal cells in HCS images. The major advantages of our method over existing software lie in its capability to correct non-uniform illumination using the contrast-limited adaptive histogram equalization method; segment neuromeres using Gabor-wavelet texture analysis; and detect faint neurites by a novel phase-based neurite extraction algorithm that is invariant to changes in illumination and contrast and can accurately localize neurites. Our method was successfully applied to analyze a large HCS image set generated in a morphology screen for polyglutamine-mediated neuronal toxicity using primary neuronal cell cultures derived from embryos of a Drosophila Huntington’s Disease (HD) model. PMID:20405243

  7. Automatic aerial image shadow detection through the hybrid analysis of RGB and HIS color space

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Li, Huilin; Peng, Zhiyong

    2015-12-01

This paper presents our research on automatic shadow detection from high-resolution aerial images through the hybrid analysis of RGB and HIS color space. To this end, the spectral characteristics of shadow are first discussed, and three kinds of spectral components, including the difference between the normalized blue and normalized red components (BR), intensity, and saturation, are selected as criteria to obtain an initial segmentation of the shadow region (called primary segmentation). After that, within the normalized RGB color space and the HIS color space, the shadow region is extracted again (called auxiliary segmentation) using the OTSU operation in each space. Finally, the primary segmentation and auxiliary segmentation are combined through a logical AND operation to obtain a reliable shadow region. In this step, small shadow areas are removed from the combined shadow region and morphological algorithms are applied to fill small holes as well. The experimental results show that the proposed approach can effectively detect the shadow region from high-resolution aerial images with a high degree of automation.
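The OTSU-thresholding and AND-combination steps can be sketched as follows. The pixel values and the second criterion are toy assumptions (not the paper's BR/intensity/saturation components); the point is the between-class-variance threshold and the logical intersection of two candidate masks.

```python
# Otsu's method on an 8-bit channel: pick the threshold that maximizes
# the between-class variance of background vs. foreground.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated populations: dark shadow (~20) and lit ground (~200).
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
primary   = [v <= t for v in pixels]     # dark in one channel
auxiliary = [v <= 100 for v in pixels]   # toy stand-in for a second criterion
shadow    = [a and b for a, b in zip(primary, auxiliary)]  # AND combination
```

Only pixels flagged by both segmentations survive the AND step, which is what suppresses false positives from either channel alone.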

  8. Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis

    NASA Astrophysics Data System (ADS)

    Liu, Chanjuan; van Netten, Jaap J.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi

    2015-02-01

    Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8%±1.1% sensitivity and 98.4%±0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained.
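Once the feet are registered, the asymmetric analysis itself reduces to comparing corresponding regions of the two feet. The sketch below assumes region-wise mean temperatures; the 2.2 °C cutoff is a commonly cited clinical threshold used here as an assumption, not a value from this study.

```python
# Flag regions whose mean temperature differs between the contralateral
# feet by more than a clinical threshold (assumed 2.2 degrees C here).
def asymmetry(left_regions, right_regions, threshold=2.2):
    flags = []
    for left, right in zip(left_regions, right_regions):
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        flags.append(abs(mean_l - mean_r) > threshold)
    return flags

# Toy temperatures (degrees C) for three corresponding region pairs.
left  = [[30.1, 30.3], [31.0, 31.2], [29.8, 30.0]]
right = [[30.2, 30.4], [34.1, 34.3], [29.9, 30.1]]
flags = asymmetry(left, right)   # only the second region exceeds the cutoff
```

The nonrigid registration step in the paper is what guarantees that the regions being compared truly correspond, despite deformities or amputations.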

  9. Automatic analysis and characterization of the hummingbird wings motion using dense optical flow features.

    PubMed

    Martínez, Fabio; Manzanera, Antoine; Romero, Eduardo

    2015-01-01

A new method for automatic analysis and characterization of recorded hummingbird wing motion is proposed. The method starts by computing a multiscale dense optical flow field, which is used to segment the wings, i.e., the pixels with larger velocities. Then, the kinematics and deformation of the wings are characterized as a temporal set of global and local measures: a global angular acceleration as a time function of each wing, and a local acceleration profile that approximates the dynamics of the different wing segments. Additionally, the variance of the apparent velocity orientation estimates those wing foci with larger deformation. Finally, a local measure of the orientation highlights those regions with maximal deformation. The approach was evaluated on a total of 91 flight cycles, captured using three different setups. The proposed measures follow the yaw-turn hummingbird flight dynamics, with a strong correlation of all computed paths, reporting a standard deviation of [Formula: see text] and [Formula: see text] for the global angular acceleration and the global wing deformation, respectively. PMID:25599248

  10. Automatic fault diagnosis of rotating machines by time-scale manifold ridge analysis

    NASA Astrophysics Data System (ADS)

    Wang, Jun; He, Qingbo; Kong, Fanrang

    2013-10-01

This paper explores an improved time-scale representation that accounts for non-linear properties, for effectively identifying rotating machine faults in the time-scale domain. A new time-scale signature, called the time-scale manifold (TSM), is proposed in this study through combining phase space reconstruction (PSR), continuous wavelet transform (CWT), and manifold learning. For the TSM generation, an optimal scale band is selected to eliminate the influence of unconcerned scale components, and the noise in the selected band is suppressed by manifold learning to highlight the inherent non-linear structure of faulty impacts. The TSM preserves the non-stationary information and reveals the non-linear structure of the fault pattern, with the merits of noise suppression and resolution improvement. The TSM ridge is further extracted by seeking the ridge with energy concentration lying on the TSM signature. It inherits the advantages of both the TSM and ridge analysis, and hence is beneficial to demodulation of the fault information. By analyzing the instantaneous amplitude (IA) of the TSM ridge, which contains almost no noise, the fault characteristic frequency can be identified exactly. The whole process of the proposed fault diagnosis scheme is automatic, and its effectiveness has been verified by means of typical faulty vibration/acoustic signals from a gearbox and bearings. A reliable performance of the new method is validated in comparison with traditional enveloping methods for rotating machine fault diagnosis.

  11. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations

    PubMed Central

    2015-01-01

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states. PMID:25516725

  12. Automatic Sleep Stage Scoring Using Time-Frequency Analysis and Stacked Sparse Autoencoders.

    PubMed

    Tsinalis, Orestis; Matthews, Paul M; Guo, Yike

    2016-05-01

    We developed a machine learning methodology for automatic sleep stage scoring. Our time-frequency analysis-based feature extraction is fine-tuned to capture sleep stage-specific signal features as described in the American Academy of Sleep Medicine manual that the human experts follow. We used ensemble learning with an ensemble of stacked sparse autoencoders for classifying the sleep stages. We used class-balanced random sampling across sleep stages for each model in the ensemble to avoid skewed performance in favor of the most represented sleep stages, and addressed the problem of misclassification errors due to class imbalance while significantly improving worst-stage classification. We used an openly available dataset from 20 healthy young adults for evaluation. We used a single channel of EEG from this dataset, which makes our method a suitable candidate for longitudinal monitoring using wearable EEG in real-world settings. Our method has both high overall accuracy (78%, range 75-80%), and high mean [Formula: see text]-score (84%, range 82-86%) and mean accuracy across individual sleep stages (86%, range 84-88%) over all subjects. The performance of our method appears to be uncorrelated with the sleep efficiency and percentage of transitional epochs in each recording. PMID:26464268
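Class-balanced random sampling of the kind described can be sketched as follows; the stage labels and per-class count are illustrative assumptions, not the paper's configuration. Each model in the ensemble would be trained on such a balanced draw rather than on the raw, skewed stage distribution.

```python
import random

# Draw the same number of epochs from each sleep stage so a model never
# sees a class mix skewed towards the most represented stages.
def class_balanced_sample(epochs, labels, per_class, rng):
    by_class = {}
    for epoch, label in zip(epochs, labels):
        by_class.setdefault(label, []).append(epoch)
    sample = []
    for label, items in sorted(by_class.items()):
        k = min(per_class, len(items))                 # cap at what's available
        sample.extend((label, e) for e in rng.sample(items, k))
    return sample

rng = random.Random(0)
labels = ["W"] * 80 + ["N1"] * 5 + ["N2"] * 100 + ["N3"] * 10 + ["REM"] * 25
epochs = list(range(len(labels)))
balanced = class_balanced_sample(epochs, labels, per_class=5, rng=rng)

counts = {}
for label, _ in balanced:
    counts[label] = counts.get(label, 0) + 1
# every stage contributes 5 epochs, although N2 is 20x more common than N1
```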

  13. Automatic Analysis for the Chemical Testing of Urine Examination Using Digital Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Peña, Jose C.; Daza, Miller F.; Torres, Cesar O.; Mattos, Lorenzo

    2008-04-01

To perform the chemical testing of a urine examination, a dipstick is used, which contains pads that incorporate the reagents for chemical reactions for the detection of a number of substances in the urine. Urine is added to the pads for reaction by dipping the dipstick into the urine and then slowly withdrawing it. The subsequent colorimetric reactions are timed to an endpoint; the extent of color formation is directly related to the level of the urine constituent. The colors can be read manually by comparison with color charts or with the use of automated reflectance meters. The aim of the system described in this paper is to automatically analyze and determine the color changes in the dipstick once it is withdrawn from the urine sample, and to compare the results with color charts for the diagnosis of many common diseases such as diabetes. The system consists of: (a) a USB camera; (b) a computer; (c) Matlab v7.4 software. Image analysis begins with digital capture of the image as data. Once the image is acquired in digital format, the data can be manipulated through digital image processing. Our objective was to develop a computerised image processing system and an interactive software package to support clinicians, medical researchers and medical students.

  15. Automatic Detections of P and S Phases using Singular Value Decomposition Analysis

    NASA Astrophysics Data System (ADS)

    Kurzon, I.; Vernon, F.; Ben-Zion, Y.; Rosenberger, A.

    2012-12-01

We implement a new method for the automatic detection of the primary P and S phases using Singular Value Decomposition (SVD) analysis. The method is based on a real-time iteration algorithm of Rosenberger (2010) for the SVD of three-component seismograms. Rosenberger's algorithm identifies the incidence angle by applying SVD and separates the waveforms into their P and S components. We have been using the same algorithm, with the modification that we apply a set of filters prior to the SVD, and study the success of these filters in correctly detecting the P and S arrivals in different stations and segments of the San Jacinto Fault Zone. A recent deployment in the San Jacinto Fault Zone area provides a very dense seismic network, with ~90 stations in a fault zone that is 150 km long and 30 km wide. Embedded in this network are 5 linear arrays crossing the fault trace, with ~10 stations at ~25-50 m spacing in each array. This allows us to test the detection algorithm in a diverse setting, including events with different source mechanisms, stations with different site characteristics, and ray paths that diverge from the SVD approximation used in the algorithm, such as rays propagating within the fault and recorded on the linear arrays. Comparing our new method with classic automatic detection methods using Short Time Average (STA) to Long Time Average (LTA) ratios, we show the success of this SVD detection. Unlike the STA to LTA ratio methods, which normally tend to detect the P phase but in many cases cannot distinguish the S arrival, the main advantage of the SVD method is that almost all the P arrivals have an associated S arrival. Moreover, even for short-distance events, in which the S arrivals are masked by the P waves, the SVD algorithm, under low-band filters, manages to detect those S arrivals.
The method is less consistent for stations located directly on the fault traces, where the SVD approximation is not always valid; but even in such cases the
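For contrast, the classic STA/LTA trigger that the SVD approach is compared against can be sketched in a few lines. The window lengths, threshold, and synthetic signal below are illustrative, not values from the study.

```python
# Short-Time-Average / Long-Time-Average detector: the ratio of mean
# absolute amplitude in a short trailing window to that in a long one
# spikes when an impulsive arrival enters the short window.
def sta_lta(signal, n_sta, n_lta):
    ratios = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet noise followed by a sudden arrival at sample 100.
signal = [0.1] * 100 + [2.0] * 20
ratios = sta_lta(signal, n_sta=5, n_lta=50)
onset = next(i for i, r in enumerate(ratios) if r > 3.0)  # trigger threshold
```

A single impulsive onset triggers cleanly here; the abstract's point is that an S arrival riding on the P coda often does not, which is where the SVD-based polarization separation helps.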

  16. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large memory space. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.

  17. Automatic differentiation bibliography

    SciTech Connect

    Corliss, G.F.

    1992-07-01

This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
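The chain-rule propagation that defines automatic differentiation can be illustrated with a minimal forward-mode sketch using dual numbers. This is a didactic toy, not ADIFOR's source-transformation approach: each value carries its derivative alongside it, and every operation updates both by the chain rule.

```python
import math

# A dual number holds a value and its derivative; arithmetic propagates
# both simultaneously, so no symbolic or finite-difference step is needed.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

# d/dx [x*sin(x) + 3x] at x = 2, obtained purely by value propagation.
x = Dual(2.0, 1.0)                 # seed derivative dx/dx = 1
y = x * sin(x) + 3 * x
expected = math.sin(2.0) + 2.0 * math.cos(2.0) + 3.0   # analytic derivative
assert abs(y.dot - expected) < 1e-12
```

The result is exact to machine precision, which is the sense in which automatic differentiation is "neither symbolic nor numeric".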

  18. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  19. Multifractal Analysis and Relevance Vector Machine-Based Automatic Seizure Detection in Intracranial EEG.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Shasha

    2015-09-01

Automatic seizure detection technology is of great significance for long-term electroencephalogram (EEG) monitoring of epilepsy patients. The aim of this work is to develop a seizure detection system with high accuracy. The proposed system was mainly based on multifractal analysis, which describes the local singular behavior of fractal objects and characterizes the multifractal structure using a continuous spectrum. Compared with computing a single fractal dimension, multifractal analysis can provide a better description of the transient behavior of EEG fractal time series during the evolvement from the interictal stage to seizures. Thus both interictal EEG and ictal EEG were analyzed by the multifractal formalism, and their differences in the multifractal features were used to distinguish the two classes of EEG and detect seizures. In the proposed detection system, eight features (α0, α(min), α(max), Δα, f(α(min)), f(α(max)), Δf and R) were extracted from the multifractal spectrums of the preprocessed EEG to construct feature vectors. Subsequently, relevance vector machine (RVM) was applied for EEG pattern classification, and a series of post-processing operations were used to increase the accuracy and reduce false detections. Both epoch-based and event-based evaluation methods were performed to appraise the system's performance on the EEG recordings of 21 patients in the Freiburg database. An epoch-based sensitivity of 92.94% and specificity of 97.47% were achieved, and the proposed system obtained a sensitivity of 92.06% with a false detection rate of 0.34/h in event-based performance assessment. PMID:25986754

  20. Digital automatic gain control

    NASA Technical Reports Server (NTRS)

    Uzdy, Z.

    1980-01-01

    Performance analysis, used to evaluate the fitness of several circuits for digital automatic gain control (AGC), indicates that a digital integrator employing a coherent amplitude detector (CAD) is the device best suited for the application. The circuit reduces gain error to half that of conventional analog AGC while making it possible to automatically modify the response of the receiver to match incoming signal conditions.
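
    The integrator-based loop can be sketched in a few lines. This is an illustrative model only, not the NASA circuit: the loop constant `mu`, the plain `abs()` amplitude detector standing in for the CAD, and the unit reference level are all assumptions.

```python
# Digital AGC sketch: a detector estimates the output level and a digital
# integrator accumulates the (reference - level) error to drive the gain.
def agc(samples, reference=1.0, mu=0.05):
    gain = 1.0
    out = []
    for x in samples:
        y = gain * x
        level = abs(y)                    # amplitude detection (stand-in for CAD)
        gain += mu * (reference - level)  # digital integrator accumulates error
        out.append(y)
    return out, gain

# A constant-amplitude input of 0.5 is driven toward the unit reference:
out, gain = agc([0.5] * 200)
print(round(abs(out[-1]), 3))
```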

  1. Learner Systems and Error Analysis. Perspective: A New Freedom. ACTFL Review of Foreign Language Education, Vol. 7.

    ERIC Educational Resources Information Center

    Valdman, Albert

    Errors in second language learning are viewed as evidence of the learner's hypotheses and strategies about the new data. Error observation and analysis are important to the formulation of theories about language learning and the preparation of teaching materials. Learning a second language proceeds by a series of approximative reorganizations…

  2. Implementation of terbium-sensitized luminescence in sequential-injection analysis for automatic analysis of orbifloxacin.

    PubMed

    Llorent-Martínez, E J; Ortega-Barrales, P; Molina-Díaz, A; Ruiz-Medina, A

    2008-12-01

    Orbifloxacin (ORBI) is a third-generation fluoroquinolone developed exclusively for use in veterinary medicine, mainly in companion animals. This antimicrobial agent has bactericidal activity against numerous gram-negative and gram-positive bacteria. A few chromatographic methods for its analysis have been described in the scientific literature. Here, coupling of sequential-injection analysis and solid-phase spectroscopy is described in order to develop, for the first time, a terbium-sensitized luminescent optosensor for analysis of ORBI. The cationic resin Sephadex-CM C-25 was used as solid support and measurements were made at 275/545 nm. The system had a linear dynamic range of 10-150 ng mL(-1), with a detection limit of 3.3 ng mL(-1) and an R.S.D. below 3% (n = 10). The analyte was satisfactorily determined in veterinary drugs and dog and horse urine. PMID:18958455

  3. Quantification of coronary artery plaque using 64-slice dual-source CT: comparison of semi-automatic and automatic computer-aided analysis based on intravascular ultrasonography as the gold standard.

    PubMed

    Kim, Young Jun; Jin, Gong Yong; Kim, Eun Young; Han, Young Min; Chae, Jei Keon; Lee, Sang Rok; Kwon, Keun Sang

    2013-12-01

    We evaluated the feasibility of automatic computer-aided analysis (CAA) compared with semi-automatic CAA for differentiating lipid-rich from fibrous plaques based on coronary CT angiography (CCTA) imaging. Seventy-four coronary plaques in 57 patients were evaluated by CCTA using 64-slice dual-source CT. Quantitative analysis of coronary artery plaques was performed by measuring the relative volumes (low, medium, and calcified) of plaque components using automatic CAA and by measuring mean CT density using semi-automatic CAA. We compared the two plaque measurement methods for lipid-rich and fibrous plaques using Pearson's correlation. Intravascular ultrasonography was used as the gold standard for assessment of plaques. Mean CT density of plaques tended to increase in the order of lipid [36 ± 19 Hounsfield units (HU)], fibrous (106 ± 34 HU), and calcified plaques (882 ± 296 HU). The mean relative volumes of 'low' components measured by automatic CAA were 13.8 ± 4.6, 7.9 ± 6.7, and 3.5 ± 3.0 % for lipid, fibrous, and calcified plaques, respectively (r = -0.348, P = 0.022). The mean relative volumes of 'medium' components on automatic CAA were 12.9 ± 4.1, 15.7 ± 9.6, and 5.6 ± 4.8 % for lipid, fibrous, and calcified plaques, respectively (r = -0.385, P = 0.011). The mean relative volumes of low and medium components within plaques correlated significantly with the types of plaques. Plaque analysis using automatic CAA has the potential to differentiate lipid from fibrous plaques based on measurement of the relative volume percentages of the low and medium components. PMID:24293043
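
    The reported mean densities (lipid ≈ 36 HU, fibrous ≈ 106 HU, calcified ≈ 882 HU) suggest a simple first-pass labelling by HU thresholds. The cut-off values below are hypothetical, chosen only to separate the three reported means; they are not from the study:

```python
# Illustrative HU-threshold plaque labelling (thresholds are assumptions).
def classify_plaque(mean_hu, lipid_max=70, fibrous_max=350):
    if mean_hu < lipid_max:
        return "lipid"
    if mean_hu < fibrous_max:
        return "fibrous"
    return "calcified"

# The three reported mean densities fall into the expected classes:
print([classify_plaque(hu) for hu in (36, 106, 882)])
```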

  4. Automatic identification of responses from porphyry intrusive systems within magnetic data using image analysis

    NASA Astrophysics Data System (ADS)

    Holden, Eun-Jung; Fu, Shih Ching; Kovesi, Peter; Dentith, Michael; Bourne, Barry; Hope, Matthew

    2011-08-01

    Direct targeting of mineral deposits using magnetic data may be facilitated by hydrothermal alteration associated with the mineralising event if the alteration changes the magnetic properties of the host rock. Hydrothermal alteration associated with porphyry-style mineralisation typically comprises concentric near-circular alteration zones surrounding a roughly circular central intrusion. The intrusion itself and the proximal alteration zone are usually associated with positive magnetic anomalies whilst the outer alteration zones are much less magnetic. Because the country rocks are usually magnetic, this pattern of alteration produces a central magnetic 'high' surrounded by an annular magnetic 'low'. This paper presents an automatic image analysis system for gridded data that provides an efficient, accurate and non-subjective way to seek the magnetic response of an idealised porphyry mineralising system within magnetic datasets. The method finds circular anomalies that are associated with the central intrusion and inner alteration zone of the porphyry system using a circular feature detection method called the radial symmetry transform. Next, their boundaries are traced using deformable splines that are drawn to the locations of maximum contrast between the amplitudes of the central 'high' and surrounding area of lower magnetisation. Experiments were conducted on magnetic data from Reko Diq, Pakistan; a region known to contain numerous occurrences of porphyry-style mineralisation. The predicted locations of porphyry systems closely match the locations of the known deposits in this region. This system is suitable as an initial screening tool for large geophysical datasets, therefore reducing the time and cost imposed by manual data inspection in the exploration process. The same principles can be applied to the search for circular magnetic responses with different amplitude characteristics.

  5. Automatic quantitative analysis of experimental primary and secondary retinal neurodegeneration: implications for optic neuropathies

    PubMed Central

    Davis, B M; Guo, L; Brenton, J; Langley, L; Normando, E M; Cordeiro, M F

    2016-01-01

    Secondary neurodegeneration is thought to play an important role in the pathology of neurodegenerative disease, which potential therapies may target. However, the quantitative assessment of the degree of secondary neurodegeneration is difficult. The present study describes a novel algorithm from which estimates of primary and secondary degeneration are computed using well-established rodent models of partial optic nerve transection (pONT) and ocular hypertension (OHT). Brn3a-labelled retinal ganglion cells (RGCs) were identified in whole-retinal mounts, from which RGC density, nearest-neighbour distances and regularity indices were determined. The spatial distribution and rate of RGC loss were assessed and the percentage of primary and secondary degeneration in each non-overlapping segment was calculated. Mean RGC number (82 592±681) and RGC density (1695±23.3 RGC/mm2) in naïve eyes were comparable with previous studies, with an average decline in RGC density of 71±17 and 23±5% over the time course of the pONT and OHT models, respectively. Spatial analysis revealed the greatest RGC loss in the superior and central retina in pONT, but significant RGC loss in the inferior retina from 3 days post model induction. In comparison, there was no significant difference between superior and inferior retina after OHT induction, and RGC loss occurred mainly along the superior/inferior axis (~30%) versus the nasal-temporal axis (~15%). Intriguingly, a significant loss of RGCs was also observed in contralateral eyes in experimental OHT. In conclusion, a novel algorithm to automatically segment Brn3a-labelled retinal whole-mounts into non-overlapping segments is described, which enables automated spatial and temporal segmentation of RGCs, revealing heterogeneity in the spatial distribution of primary and secondary degenerative processes. This method provides an attractive means to rapidly determine the efficacy of neuroprotective therapies with implications for any
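
    Two of the quantities mentioned, nearest-neighbour distance (NND) and a regularity index, can be sketched from cell coordinates. This assumes the common definition regularity index = mean(NND) / std(NND); it is not the paper's code, and the point pattern below is synthetic:

```python
# NND and regularity index from 2-D cell positions (brute-force distances).
import numpy as np

def nnd_and_regularity(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # a cell is not its own neighbour
    nnd = d.min(axis=1)
    return nnd.mean(), nnd.mean() / nnd.std()

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))   # random "RGC" positions
mean_nnd, ri = nnd_and_regularity(pts)
print(round(float(mean_nnd), 3), round(float(ri), 2))
```

    For a random (Poisson) pattern the regularity index is near 1.9; regular mosaics such as healthy RGC arrays score higher, which is why the index is a useful degeneration marker.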

  6. A new and fast methodology to assess oxidative damage in cardiovascular diseases risk development through eVol-MEPS-UHPLC analysis of four urinary biomarkers.

    PubMed

    Mendes, Berta; Silva, Pedro; Mendonça, Isabel; Pereira, Jorge; Câmara, José S

    2013-11-15

    In this work, a new, fast and reliable methodology using digitally controlled microextraction by packed sorbent (eVol(®)-MEPS) followed by ultra-high pressure liquid chromatography (UHPLC) analysis with photodiode array (PDA) detection was developed to establish the urinary profile levels of four putative oxidative stress biomarkers (OSBs) in healthy subjects and patients evidencing cardiovascular diseases (CVDs). These data were used to verify the suitability of the selected OSBs (uric acid-UAc, malondialdehyde-MDA, 5-(hydroxymethyl)uracil-5-HMUra and 8-hydroxy-2'-deoxyguanosine-8-oxodG) as potential biomarkers of CVDs progression. Important parameters affecting the efficiency of the extraction process were optimized, particularly stationary phase selection, pH influence, sample volume, number of extraction cycles, and washing and elution volumes. The experimental conditions that allowed the best extraction efficiency, expressed in terms of total area of the target analytes and data reproducibility, include a 10-fold dilution and pH adjustment of the urine samples to 6.0, followed by a gradient elution through the C8 adsorbent with 5×50 µL of 0.01% formic acid and 3×50 µL of 20% methanol in 0.01% formic acid. The chromatographic separation of the target analytes was performed with a HSS T3 column (100 mm × 2.1 mm, 1.7 µm particle size) using 0.01% formic acid and 20% methanol at 250 µL min(-1). The methodology was validated in terms of selectivity, linearity, instrumental limit of detection (LOD), method limit of quantification (LOQ), matrix effect, accuracy and precision (intra- and inter-day). Good results were obtained in terms of selectivity and linearity (r(2)>0.9906), as well as the LOD and LOQ, whose values were low, ranging from 0.00005 to 0.72 µg mL(-1) and 0.00023 to 2.31 µg mL(-1), respectively. The recovery results (91.1-123.0%), intra-day (1.0-8.3%), inter-day precision (4.6-6.3%) and the matrix effect (60.1-110.3%) of eVol
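
    LOD and LOQ figures of this kind are usually derived from a linear calibration. A hedged sketch of that common treatment (LOD = 3.3·s/m, LOQ = 10·s/m with s the residual standard deviation and m the slope; the calibration data below are invented, not the paper's):

```python
# Calibration-based LOD/LOQ estimation from a straight-line fit.
import numpy as np

def lod_loq(conc, signal):
    m, b = np.polyfit(conc, signal, 1)
    resid = signal - (m * conc + b)
    s = resid.std(ddof=2)              # residual SD, 2 fitted parameters
    return 3.3 * s / m, 10 * s / m

conc = np.array([10, 25, 50, 75, 100, 150.0])                     # ng/mL
signal = 2.0 * conc + 5 + np.array([0.5, -0.8, 0.3, 0.6, -0.4, -0.2])
lod, loq = lod_loq(conc, signal)
print(round(float(lod), 2), round(float(loq), 2))
```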

  7. Image structural analysis in the tasks of automatic navigation of unmanned vehicles and inspection of Earth surface

    NASA Astrophysics Data System (ADS)

    Lutsiv, Vadim; Malyshev, Igor

    2013-10-01

    The automatic analysis of terrain images has been an important task for several decades. On the one hand, such analysis is the basis of automatic navigation of unmanned vehicles. On the other hand, the amount of information transferred to the Earth by modern video-sensors keeps increasing, so preliminary classification of such data by an onboard computer becomes urgent. We developed an object-independent approach to the structural analysis of images. While creating the methods of image structural description, we did our best to abstract away from the particular peculiarities of scenes. Only the most general constraints were taken into account, derived from the laws of organization of the observable environment and from the properties of image formation systems. The practical application of this theoretical approach enables reliable matching of aerospace photographs acquired from differing aspect angles, at different times of day and in different seasons, by sensors of differing types. The aerospace photographs can even be matched with geographic maps. The developed approach enabled solving the tasks of automatic navigation of unmanned vehicles. The signs of changes and catastrophes can be detected by matching and comparing aerospace photographs acquired at different times. We present the theoretical justification of the chosen strategy of structural description and matching of images. Several examples of matching acquired images with template pictures and maps of terrain are shown within the framework of navigation of unmanned vehicles and detection of signs of disasters.

  8. Automatic and objective oral cancer diagnosis by Raman spectroscopic detection of keratin with multivariate curve resolution analysis

    PubMed Central

    Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-o

    2016-01-01

    We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral matching comparison of the MCR-decomposed Raman spectra against the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, and 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the “molecular fingerprint” of keratin. PMID:26806007
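
    The spectral-matching step can be sketched with cosine similarity between a decomposed spectrum and a keratin reference. The similarity metric, threshold, and the toy Gaussian "bands" below are assumptions for illustration, not the authors' exact procedure:

```python
# Cosine similarity between a candidate spectrum and a reference spectrum.
import numpy as np

def spectral_match(spectrum, reference):
    a = spectrum / np.linalg.norm(spectrum)
    b = reference / np.linalg.norm(reference)
    return float(a @ b)

wavenumber = np.linspace(800, 1800, 200)                    # cm^-1, toy grid
keratin_ref = np.exp(-((wavenumber - 1450) / 40.0) ** 2)    # toy keratin band
candidate = 0.9 * keratin_ref + 0.05                        # similar spectrum
unrelated = np.exp(-((wavenumber - 1000) / 30.0) ** 2)      # different band

print(spectral_match(candidate, keratin_ref) > 0.9)   # → True
print(spectral_match(unrelated, keratin_ref) > 0.9)   # → False
```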

  9. A system for automatic analysis of blood pressure data for digital computer entry

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1972-01-01

    Operation of automatic blood pressure data system is described. Analog blood pressure signal is analyzed by three separate circuits, systolic, diastolic, and cycle defect. Digital computer output is displayed on teletype paper tape punch and video screen. Illustration of system is included.

  10. AUTOMATIC ANALYSIS OF DISSOLVED METAL POLLUTANTS IN WATER BY ENERGY DISPERSIVE X-RAY SPECTROSCOPY

    EPA Science Inventory

    An automated system for the quantitative determination of dissolved metals such as Fe, Cu, Zn, Ca, Co, Ni, Cr, Hg, Se, and Pb in water is described. The system collects a water sample, preconcentrates the dissolved metals with ion-exchange paper automatically in a sample collecti...

  11. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    NASA Astrophysics Data System (ADS)

    Kortström, Jari; Tiira, Timo; Kaisko, Outi

    2016-03-01

    The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. The network should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. The network construction work began in 2012 and the first four stations started operation at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013 the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events. A total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the locations announced by environmental authorities and companies and the automatic locations, was 2.9 km. During the same time period, eight earthquakes in the magnitude range 0.1-1.0 occurred within the area. Of these, seven could be automatically detected. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.
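
    A location error like the 2.9 km average above is a great-circle distance between the announced and automatically determined epicentres. A minimal haversine sketch (helper name and the sample coordinates are illustrative, not from the institute's pipeline):

```python
# Great-circle distance between two epicentres via the haversine formula.
import math

def epicentre_error_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Two points roughly 2.9 km apart, matching the average error quoted above:
print(round(epicentre_error_km(64.20, 24.50, 64.226, 24.50), 1))  # → 2.9
```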

  12. A Meta-Analysis on the Malleability of Automatic Gender Stereotypes

    ERIC Educational Resources Information Center

    Lenton, Alison P.; Bruder, Martin; Sedikides, Constantine

    2009-01-01

    This meta-analytic review examined the efficacy of interventions aimed at reducing automatic gender stereotypes. Such interventions included attentional distraction, salience of within-category heterogeneity, and stereotype suppression. A small but significant main effect (g = 0.32) suggests that these interventions are successful but that their…

  13. [Development of a Japanese version of the Valuation of Life (VOL) scale].

    PubMed

    Nakagawa, Takeshi; Gondo, Yasuyuki; Masui, Yukie; Ishioka, Yoshiko; Tabuchi, Megumi; Kamide, Kei; Ikebe, Kazunori; Arai, Yasumichi; Takahashi, Ryutaro

    2013-04-01

    This study developed a Japanese version of the Valuation of Life (VOL) scale, to measure psychological wellbeing among older adults. In Analysis 1, we conducted a factor analysis of 13 items, and identified two factors: positive VOL and spiritual well-being. These factors had adequate degrees of internal consistency, and were related to positive mental health. In Analysis 2, we examined sociodemographic, social, and health predictors for VOL. The role of social factors was stronger than the role of health factors, and spiritual well-being was more related to moral or religious activities than positive VOL. These results suggest that predictors for VOL vary by culture. In Analysis 3, we investigated the relationship between VOL and desired years of life. Positive VOL significantly predicted more desired years of life, whereas spiritual well-being did not. Positive VOL had acceptable reliability and validity. Future research is required to investigate whether VOL predicts survival duration or end-of-life decisions. PMID:23705232

  14. Analysis of operators for detection of corners set in automatic image matching

    NASA Astrophysics Data System (ADS)

    Zawieska, D.

    2011-12-01

    Reconstruction of three-dimensional models of objects from images has been a long-lasting research topic in photogrammetry and computer vision. The demand for 3D models is continuously increasing in fields such as cultural heritage, computer graphics, robotics and many others. The number and types of features of a 3D model are highly dependent on the use of the model, and can be very variable in terms of accuracy and creation time. In recent years, both the computer vision and photogrammetric communities have approached reconstruction problems by using different methods to solve the same tasks, such as camera calibration, orientation, object reconstruction and modelling. The terminology used for a particular task sometimes differs between the two disciplines; on the other hand, the integration of methods and algorithms coming from both can be used to improve each. Image-based modelling of an object has been defined as a complete process that starts with image acquisition and ends with an interactive 3D virtual model. The photogrammetric approach to creating 3D models involves the following steps: image pre-processing, camera calibration, orientation of the image network, image scanning for point detection, surface measurement and point triangulation, blunder detection and statistical filtering, mesh generation and texturing, visualization and analysis. Currently there is no single software package available that allows each of those steps to be executed within the same environment. For high-accuracy 3D object reconstruction, operators are required as a preliminary step in the surface measurement process, to find the features that serve as suitable points when matching across multiple images. Operators are the algorithms which detect features of interest in an image, such as corners, edges or regions.
This paper reports on the first phase of research on the generation of high accuracy 3D model measurement and modelling, focusing
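
    A representative corner operator of the kind analysed here is the Harris detector. The sketch below is a simplified NumPy version (plain central-difference gradients and a 3×3 box window instead of Gaussian weighting; the test image is illustrative):

```python
# Minimal Harris corner response: structure tensor from image gradients,
# then R = det(M) - k * trace(M)^2.
import numpy as np

def harris_response(img, k=0.05):
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    def box3(a):                       # 3x3 box average via shifted slices
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

img = np.zeros((20, 20))
img[10:, 10:] = 1.0                    # one bright quadrant: corner at (10, 10)
r = harris_response(img)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(r), r.shape))
print(peak)  # → (10, 10)
```

    Edges score negative and flat regions near zero under this response, which is why the same map separates the three feature classes the entry mentions.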

  15. Texture analysis of the 3D collagen network and automatic classification of the physiology of articular cartilage.

    PubMed

    Duan, Xiaojuan; Wu, Jianping; Swift, Benjamin; Kirk, Thomas Brett

    2015-07-01

    A close relationship has been found between the 3D collagen structure and the physiological condition of articular cartilage (AC). Studying the 3D collagen network in AC offers a way to determine the condition of the cartilage. However, traditional qualitative studies are time consuming and subjective. This study aims to develop a computer vision-based classifier to automatically determine the condition of AC tissue based on the structural characteristics of the collagen network. Texture analysis was applied to quantitatively characterise the 3D collagen structure in normal (International Cartilage Repair Society, ICRS, grade 0), aged (ICRS grade 1) and osteoarthritic cartilages (ICRS grade 2). Principal component techniques and linear discriminant analysis were then used to classify the microstructural characteristics of the 3D collagen meshwork and the condition of the AC. The 3D collagen meshwork in the three physiological condition groups displayed distinctive characteristics. Texture analysis indicated a significant difference in the mean texture parameters of the 3D collagen network between groups. The principal component and linear discriminant analysis of the texture data allowed for the development of a classifier for identifying the physiological status of the AC with an expected prediction error of 4.23%. An automatic image analysis classifier has been developed to predict the physiological condition of AC (from ICRS grade 0 to 2) based on texture data from the 3D collagen network in the tissue. PMID:24428581
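
    The PCA-then-LDA chain can be sketched with scikit-learn. The synthetic "texture feature" vectors for the three grades below are invented for illustration; only the pipeline structure mirrors the description:

```python
# Principal component reduction followed by linear discriminant classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# 3 grades x 30 samples x 10 texture features, class means separated
X = np.vstack([rng.normal(loc=g, scale=0.5, size=(30, 10)) for g in range(3)])
y = np.repeat([0, 1, 2], 30)

clf = make_pipeline(PCA(n_components=4), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(round(float(clf.score(X, y)), 2))
```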

  16. Design and implementation of a context-sensitive, flow-sensitive activity analysis algorithm for automatic differentiation.

    SciTech Connect

    Shin, J.; Malusare, P.; Hovland, P. D.; Mathematics and Computer Science

    2008-01-01

    Automatic differentiation (AD) has been expanding its role in scientific computing. While several AD tools have been actively developed and used, a wide range of problems remain to be solved. Activity analysis allows AD tools to generate derivative code for fewer variables, leading to a faster run time of the output code. This paper describes a new context-sensitive, flow-sensitive (CSFS) activity analysis, developed by extending an existing context-sensitive, flow-insensitive (CSFI) activity analysis. Our experiments with eight benchmarks show that the new CSFS activity analysis is more than 27 times slower, but it removes 8 overestimations for the MIT General Circulation Model (MITgcm) and 1 for an ODE solver (c2) compared with the existing CSFI implementation. Although the number of removed overestimations looks small, the additionally identified passive variables may significantly reduce tedious human effort in maintaining a large code base such as MITgcm.

  17. Assessment of Severe Apnoea through Voice Analysis, Automatic Speech, and Speaker Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.

    2009-12-01

    This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
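
    The GMM step can be sketched as one mixture model per class with classification by log-likelihood comparison. The feature vectors below are synthetic stand-ins for speech spectra, and the class layout is an assumption for illustration, not the study's database:

```python
# Two-class GMM detection: fit one GaussianMixture per class, then assign a
# test vector to the class whose model gives the higher log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, size=(200, 5))   # toy "healthy" features
apnoea = rng.normal(2.0, 1.0, size=(200, 5))    # toy "apnoea" features

gm_h = GaussianMixture(n_components=2, random_state=0).fit(healthy)
gm_a = GaussianMixture(n_components=2, random_state=0).fit(apnoea)

test = rng.normal(2.0, 1.0, size=(1, 5))        # drawn from the apnoea model
print("apnoea" if gm_a.score(test) > gm_h.score(test) else "healthy")
```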

  18. NASA automatic subject analysis technique for extracting retrievable multi-terms (NASA TERM) system

    NASA Technical Reports Server (NTRS)

    Kirschbaum, J.; Williamson, R. E.

    1978-01-01

    Current methods for information processing and retrieval used at the NASA Scientific and Technical Information Facility are reviewed. A more cost effective computer aided indexing system is proposed which automatically generates print terms (phrases) from the natural text. Satisfactory print terms can be generated in a primarily automatic manner to produce a thesaurus (NASA TERMS) which extends all the mappings presently applied by indexers, specifies the worth of each posting term in the thesaurus, and indicates the areas of use of the thesaurus entry phrase. These print terms enable the computer to determine which of several terms in a hierarchy is desirable and to differentiate ambiguous terms. Steps in the NASA TERMS algorithm are discussed and the processing of surrogate entry phrases is demonstrated using four previously manually indexed STAR abstracts for comparison. The simulation shows phrase isolation, text phrase reduction, NASA terms selection, and RECON display.

  19. Automatic stress-relieving music recommendation system based on photoplethysmography-derived heart rate variability analysis.

    PubMed

    Shin, Il-Hyung; Cha, Jaepyeong; Cheon, Gyeong Woo; Lee, Choonghee; Lee, Seung Yup; Yoon, Hyung-Jin; Kim, Hee Chan

    2014-01-01

    This paper presents an automatic stress-relieving music recommendation system (ASMRS) for individual music listeners. The ASMRS uses a portable, wireless photoplethysmography module with a finger-type sensor, and a program that translates heartbeat signals from the sensor to the stress index. The sympathovagal balance index (SVI) was calculated from heart rate variability to assess the user's stress levels while listening to music. Twenty-two healthy volunteers participated in the experiment. The results have shown that the participants' SVI values are highly correlated with their prespecified music preferences. The sensitivity and specificity of the favorable music classification also improved as the number of music repetitions increased to 20 times. Based on the SVI values, the system automatically recommends favorable music lists to relieve stress for individuals. PMID:25571461
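
    A sympathovagal balance index is commonly computed as the ratio of low-frequency (0.04-0.15 Hz) to high-frequency (0.15-0.4 Hz) power in the RR-interval series. The sketch below assumes that standard LF/HF formulation and a crude periodogram; it is not necessarily the authors' exact SVI, and the tachogram is synthetic:

```python
# LF/HF ratio from RR intervals: resample evenly, estimate spectrum, ratio
# the band powers.
import numpy as np

def lf_hf_ratio(rr_s, fs=4.0):
    t = np.cumsum(rr_s)                          # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    x = np.interp(grid, t, rr_s)                 # evenly sampled tachogram
    x = x - x.mean()
    pxx = np.abs(np.fft.rfft(x)) ** 2            # crude periodogram
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()
    return lf / hf

# Synthetic tachogram dominated by a 0.1 Hz (LF-band) oscillation:
n = 600
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.cumsum(np.full(n, 0.8)))
print(lf_hf_ratio(rr) > 1.0)  # → True
```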

  20. Sensitivity analysis using parallel ODE solvers and automatic differentiation in C : SensPVODE and ADIC.

    SciTech Connect

    Lee, S. L.; Hovland, P. D.

    2000-11-01

    PVODE is a high-performance ordinary differential equation solver for the types of initial value problems (IVPs) that arise in large-scale computational simulations. Often, one wants to compute sensitivities with respect to certain parameters in the IVP. We discuss the use of automatic differentiation (AD) to compute these sensitivities in the context of PVODE. Results on a simple test problem indicate that the use of AD-generated derivative code can reduce the time to solution over finite difference approximations.
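
    The idea of AD-generated derivative code can be illustrated with forward-mode dual numbers: each value carries its derivative with respect to a parameter, so the sensitivity emerges alongside the IVP solution. This is a toy sketch, not ADIC (which is a source-transformation tool for C), and the Euler solver and test problem are illustrative:

```python
# Forward-mode AD via dual numbers, propagated through an explicit Euler
# integration of dy/dt = -p*y, y(0) = 1, to get dy(T)/dp.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def euler(f, y0, p, h, steps):
    y = y0
    for _ in range(steps):
        y = y + h * f(y, p)        # sensitivities propagate with the state
    return y

p = Dual(2.0, 1.0)                 # seed dp/dp = 1
y = euler(lambda y, p: -1.0 * p * y, Dual(1.0), p, 0.001, 1000)
# Analytically y(1) = e^-2 ≈ 0.135 and dy/dp = -e^-2 ≈ -0.135:
print(round(y.val, 3), round(y.dot, 3))
```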

  2. Design and analysis of an automatic method of measuring silicon-controlled-rectifier holding current

    NASA Technical Reports Server (NTRS)

    Maslowski, E. A.

    1971-01-01

    The design of an automated SCR holding-current measurement system is described. The circuits used in the measurement system were designed to meet the major requirements of automatic data acquisition, reliability, and repeatability. Performance data are presented and compared with calibration data. The data verified the accuracy of the measurement system. Data taken over a 48-hr period showed that the measurement system operated satisfactorily and met all the design requirements.

  3. Semi-automatic measures of activity in selected south polar regions of Mars using morphological image analysis

    NASA Astrophysics Data System (ADS)

    Aye, Klaus-Michael; Portyankina, Ganna; Pommerol, Antoine; Thomas, Nicolas

    We present results of these semi-automatically determined seasonal fan count evolutions for the Inca City, Ithaca and Manhattan ROIs, compare these evolutionary patterns with each other, and compare them with surface reflectance evolutions from both HiRISE and CRISM for the same locations. References: Aye, K.-M. et al. (2010), LPSC 2010, 2707; Hansen, C. et al. (2010), Icarus, 205(1), 283-295; Kieffer, H.H. (2007), JGR 112; Portyankina, G. et al. (2010), Icarus, 205(1), 311-320; Thomas, N. et al. (2009), Vol. 4, EPSC2009-478.

  4. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    SciTech Connect

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  5. Automatic Detection of Laryngeal Pathology on Sustained Vowels Using Short-Term Cepstral Parameters: Analysis of Performance and Theoretical Justification

    NASA Astrophysics Data System (ADS)

    Fraile, Rubén; Godino-Llorente, Juan Ignacio; Sáenz-Lechón, Nicolás; Osma-Ruiz, Víctor; Gómez-Vilda, Pedro

    Most speech signal analysis procedures for automatic detection of laryngeal pathologies rely on parameters extracted from time-domain processing. Moreover, calculation of these parameters often requires prior pitch period estimation; therefore, their validity heavily depends on the robustness of pitch detection. Within this paper, an alternative approach based on cepstral-domain processing is presented, which has the advantage of not requiring pitch estimation, thus providing a gain in both simplicity and robustness. While the proposed scheme is similar to solutions based on Mel-frequency cepstral parameters already present in the literature, it has an easier physical interpretation while achieving similar performance standards.
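
    The cepstral-domain idea can be sketched with the real cepstrum, the inverse FFT of the log magnitude spectrum. This is the textbook definition, not the authors' full parameterisation; the sinusoidal test frame is illustrative:

```python
# Real cepstrum of a windowed speech frame.
import numpy as np

def real_cepstrum(frame):
    spectrum = np.fft.fft(frame)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

fs = 8000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 200 * t) * np.hamming(512)
cep = real_cepstrum(frame)
# Low-order ("quefrency") coefficients summarise the spectral envelope,
# which is why they work without explicit pitch estimation.
print(cep[:4].round(2))
```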

  6. Texture analysis of automatic graph cuts segmentations for detection of lung cancer recurrence after stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2015-03-01

    Stereotactic ablative radiotherapy (SABR) is a treatment for early-stage lung cancer with local control rates comparable to surgery. After SABR, benign radiation induced lung injury (RILI) results in tumour-mimicking changes on computed tomography (CT) imaging. Distinguishing recurrence from RILI is a critical clinical decision determining the need for potentially life-saving salvage therapies whose high risks in this population dictate their use only for true recurrences. Current approaches do not reliably detect recurrence within a year post-SABR. We measured the detection accuracy of texture features within automatically determined regions of interest, with the only operator input being the single line segment measuring tumour diameter, normally taken during the clinical workflow. Our leave-one-out cross validation on images taken 2-5 months post-SABR showed robustness of the entropy measure, with classification error of 26% and area under the receiver operating characteristic curve (AUC) of 0.77 using automatic segmentation; the results using manual segmentation were 24% and 0.75, respectively. AUCs for this feature increased to 0.82 and 0.93 at 8-14 months and 14-20 months post SABR, respectively, suggesting even better performance nearer to the date of clinical diagnosis of recurrence; thus this system could also be used to support and reinforce the physician's decision at that time. Based on our ongoing validation of this automatic approach on a larger sample, we aim to develop a computer-aided diagnosis system which will support the physician's decision to apply timely salvage therapies and prevent patients with RILI from undergoing invasive and risky procedures.
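    The entropy texture measure highlighted above can be sketched as a first-order entropy of the gray-level histogram over a region of interest (an illustrative implementation; the bin count and synthetic inputs are assumptions, not taken from the paper):

```python
import numpy as np

def histogram_entropy(roi, bins=64):
    """First-order entropy (bits) of the gray-level histogram of an ROI."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
uniform_roi = rng.uniform(0, 255, size=(32, 32))  # heterogeneous texture
flat_roi = np.full((32, 32), 128.0)               # homogeneous region
```

    A heterogeneous region yields higher entropy than a homogeneous one, which is the intuition behind using entropy to separate recurrence from benign injury patterns.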

  7. Finite Element Analysis of Osteosynthesis Screw Fixation in the Bone Stock: An Appropriate Method for Automatic Screw Modelling

    PubMed Central

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into a more and more important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation of their use for individual cases and an increase of computational costs. To cope with these requirements, different methods for numerical screw modelling have been investigated to improve its application diversity. Exemplarily, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for automatic generation in the FE software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% by using hexahedral instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent

  8. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    PubMed

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into a more and more important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation of their use for individual cases and an increase of computational costs. To cope with these requirements, different methods for numerical screw modelling have been investigated to improve its application diversity. Exemplarily, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for automatic generation in the FE software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% by using hexahedral instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent

  9. A fast automatic plate changer for the analysis of nuclear emulsions

    NASA Astrophysics Data System (ADS)

    Balestra, S.; Bertolin, A.; Bozza, C.; Calligola, P.; Cerroni, R.; D'Ambrosio, N.; Degli Esposti, L.; De Lellis, G.; De Serio, M.; Di Capua, F.; Di Crescenzo, A.; Di Ferdinando, D.; Di Marco, N.; Dusini, S.; Esposito, L. S.; Fini, R. A.; Giacomelli, G.; Giacomelli, R.; Grella, G.; Ieva, M.; Kose, U.; Longhin, A.; Mandrioli, G.; Mauri, N.; Medinaceli, E.; Monacelli, P.; Muciaccia, M. T.; Pasqualini, L.; Pastore, A.; Patrizii, L.; Pozzato, M.; Pupilli, F.; Rescigno, R.; Rosa, G.; Ruggieri, A.; Russo, A.; Sahnoun, Z.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Stellacci, S. M.; Strolin, P.; Tenti, M.; Tioukov, V.; Togo, V.; Valieri, C.

    2013-07-01

    This paper describes the design and performance of a computer controlled emulsion Plate Changer for the automatic placement and removal of nuclear emulsion films for the European Scanning System microscopes. The Plate Changer is used for mass scanning and measurement of the emulsions of the OPERA neutrino oscillation experiment at the Gran Sasso lab on the CNGS neutrino beam. Unlike other systems it works with both dry and oil objectives. The film changing takes less than 20 s and the accuracy on the positioning of the emulsion films is about 10 μm. The final accuracy in retrieving track coordinates after fiducial marks measurement is better than 1 μm.

  10. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    NASA Astrophysics Data System (ADS)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

    Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant cellular density variations across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
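    Steps ii) and iii) of such a pipeline (binarization and centroid extraction) can be sketched with connected-component labelling; the threshold value and the synthetic 2-D "cells" below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy import ndimage

def detect_centroids(image, threshold):
    """Binarize the image, label connected components, return their centroids."""
    binary = image > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.empty((0, image.ndim))
    return np.array(ndimage.center_of_mass(binary, labels, range(1, n + 1)))

# Two synthetic "cells" in a small 2-D frame.
img = np.zeros((20, 20))
img[4:7, 4:7] = 1.0
img[12:15, 13:16] = 1.0
centroids = detect_centroids(img, threshold=0.5)
```

    The laminar density estimate of step iv) then reduces to binning these centroid coordinates by cortical depth.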

  11. Automatic Prediction of Cardiovascular and Cerebrovascular Events Using Heart Rate Variability Analysis

    PubMed Central

    Melillo, Paolo; Izzo, Raffaele; Orrico, Ada; Scala, Paolo; Attanasio, Marcella; Mirra, Marco; De Luca, Nicola; Pecchia, Leandro

    2015-01-01

    Background: There is consensus that heart rate variability is associated with the risk of vascular events. However, the predictive value of heart rate variability for vascular events is not completely clear. The aim of this study is to develop novel predictive models based on data-mining algorithms to provide an automatic risk stratification tool for hypertensive patients. Methods: A database of 139 Holter recordings with clinical data of hypertensive patients followed up for at least 12 months was collected ad hoc. Subjects who experienced a vascular event (i.e., myocardial infarction, stroke, syncopal event) were considered high-risk subjects. Several data-mining algorithms (such as support vector machines, tree-based classifiers, and artificial neural networks) were used to develop automatic classifiers, and their accuracy was tested by assessing the receiver-operator characteristic curve. Moreover, we tested the echographic parameters, which have been shown to be powerful predictors of future vascular events. Results: The best predictive model was based on a random forest and identified high-risk hypertensive patients with sensitivity and specificity rates of 71.4% and 87.8%, respectively. The heart rate variability based classifier showed higher predictive values than the conventional echographic parameters, which are considered significant cardiovascular risk factors. Conclusions: Combining heart rate variability measures, analyzed with data-mining algorithms, could be a reliable tool for identifying hypertensive patients at high risk of developing future vascular events. PMID:25793605
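    The sensitivity and specificity figures quoted above follow directly from the confusion matrix; a minimal sketch (the labels below are hypothetical, with 1 = high-risk):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP); label 1 = high risk."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 3 high-risk and 4 low-risk subjects.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 0, 0, 0, 0, 1])
```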

  12. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images

    PubMed Central

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness in the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performance with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636
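    The local binary patterns found most relevant above can be sketched at a single resolution (a basic 8-neighbour LBP written with plain NumPy; the multiresolution variant used in the paper extends this to several radii):

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour LBP codes for interior pixels and their normalized histogram."""
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
hist = lbp_histogram(rng.uniform(size=(32, 32)))
```

    The 256-bin histogram serves as the texture feature vector fed to an SVM or random forest.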

  13. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images.

    PubMed

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness in the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performance with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636

  14. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697
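    The Dice similarity coefficient (DSC) used for validation above is defined as twice the overlap divided by the sum of the two mask sizes; a minimal sketch with synthetic masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B|/(|A|+|B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Synthetic masks: m2 covers two thirds of m1.
m1 = np.zeros((10, 10)); m1[2:8, 2:8] = 1   # 36 pixels
m2 = np.zeros((10, 10)); m2[4:8, 2:8] = 1   # 24 pixels, all inside m1
```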

  15. A clinically viable capsule endoscopy video analysis platform for automatic bleeding detection

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Xie, Jean; Mui, Peter; Leighton, Jonathan A.; Pasha, Shabana; Rentz, Lauri; Abedi, Mahmood

    2013-02-01

    In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos of the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. As a result, the process is time consuming and prone to missed findings. While researchers have made efforts to automate this process, no clinically acceptable software is available on the marketplace today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Major functional modules of the SW include: the innovative graph-based NCut segmentation algorithm, the unique feature selection and validation method (e.g. illumination-invariant features, color-independent features, and symmetrical texture features), and the cascade SVM classification for handling various GI tract scenes (e.g. normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation results for the SW have shown a zero miss rate for bleeding instances and a 4.03% false alarm rate. This work is part of our innovative 2D/3D based GI tract disease detection software platform. While the overall SW framework is designed for intelligent detection and classification of major GI tract diseases such as bleeding, ulcer, and polyp from CE videos, this paper focuses on the automatic bleeding detection functional module.

  16. Study on triterpenoic acids distribution in Ganoderma mushrooms by automatic multiple development high performance thin layer chromatographic fingerprint analysis.

    PubMed

    Yan, Yu-Zhen; Xie, Pei-Shan; Lam, Wai-Kei; Chui, Eddie; Yu, Qiong-Xi

    2010-01-01

    Ganoderma--"Lingzhi" in Chinese--is one of the superior Chinese tonic materia medicas in China, Japan, and Korea. Two species, Ganoderma lucidum (Red Lingzhi) and G. sinense (Purple Lingzhi), have been included in the Chinese Pharmacopoeia since its 2000 Edition. However, some other species of Ganoderma are also available in the market. For example, there are five species divided by color called "Penta-colors Lingzhi" that have been advocated as being the most invigorating among the Lingzhi species; but there is no scientific evidence for such a claim. Morphological identification can serve as an effective practice for differentiating the various species, but the inherent quality has to be delineated by chemical analysis. Among the diverse constituents in Lingzhi, triterpenoids are commonly recognized as the major active ingredients. An automatic triple development HPTLC fingerprint analysis was carried out for detecting the distribution consistency of the triterpenoic acids in various Lingzhi samples. The chromatographic conditions were optimized as follows: stationary phase, precoated HPTLC silica gel 60 plate; mobile phase, toluene-ethyl acetate-methanol-formic acid (15 + 15 + 1 + 0.1); and triple-development using automatic multiple development equipment. The chromatograms showed good resolution, and the color images provided more specific HPTLC fingerprints than have been previously published. It was observed that the abundance of triterpenoic acids and consistent fingerprint pattern in Red Lingzhi (fruiting body of G. lucidum) outweighs the other species of Lingzhi. PMID:21140647

  17. Automatic Detection of Optic Disc in Retinal Image by Using Keypoint Detection, Texture Analysis, and Visual Dictionary Techniques

    PubMed Central

    Bayır, Şafak

    2016-01-01

    With the advances in the computer field, methods and techniques in automatic image processing and analysis provide the opportunity to automatically detect change and degeneration in retinal images. Localization of the optic disc is extremely important for determining hard exudate lesions or neovascularization, which appear in the later phase of diabetic retinopathy, in computer-aided eye disease diagnosis systems. Whereas optic disc detection is a fairly easy process in normal retinal images, detecting this region may be difficult in retinal images affected by diabetic retinopathy. Sometimes information related to the optic disc and to hard exudates may look the same in terms of machine learning. We present a novel approach for efficient and accurate localization of the optic disc in retinal images containing noise and other lesions. This approach comprises five main steps: image processing, keypoint extraction, texture analysis, visual dictionary, and classifier techniques. We tested our proposed technique on 3 public datasets and obtained quantitative results. Experimental results show that an average optic disc detection accuracy of 94.38%, 95.00%, and 90.00% is achieved on the DIARETDB1, DRIVE, and ROC public datasets, respectively. PMID:27110272

  18. Dynamic response and stability analysis of an unbalanced flexible rotating shaft equipped with n automatic ball-balancers

    NASA Astrophysics Data System (ADS)

    Ehyaei, J.; Moghaddam, Majid M.

    2009-04-01

    The paper presents analytical and numerical investigations of an unbalanced flexible rotating shaft equipped with n automatic ball-balancers, where the unbalanced masses are distributed along the length of the shaft. It includes the derivation of the equations of motion, a stability analysis on the basis of the equations of motion linearized around the equilibrium position, and the results of the time responses of the system. The Stodola-Green rotor model, in which the shaft is assumed flexible, is proposed for the analysis step. The rotor model includes the influence of rigid-body rotations due to the shaft flexibility. Utilizing Lagrange's method, the nonlinear equations of motion are derived. The study shows that for angular velocities greater than the first natural frequency, and with suitable values of the parameters of the automatic ball-balancers lying in the stability region, the auto ball-balancers tend to improve the vibration behavior of the system, i.e., partial balancing; complete balancing was achieved only in the special case where the imbalances lie in the planes of the auto ball-balancers. Furthermore, it is shown that placing the auto ball-balancers closer to the unbalanced masses achieves better vibration reduction.

  19. Automatic image analysis methods for the determination of stereological parameters - application to the analysis of densification during solid state sintering of WC-Co compacts

    PubMed

    Missiaen; Roure

    2000-08-01

    Automatic image analysis methods which were used to determine microstructural parameters of sintered materials are presented. Estimation of stereological parameters at interfaces, when the system contains more than two phases, is particularly detailed. It is shown that the specific surface areas and mean curvatures of the various interfaces can be estimated in the numerical space of the images. The methods are applied to the analysis of densification during solid state sintering of WC-Co compacts. The microstructural evolution is commented on. Application of microstructural measurements to the analysis of densification kinetics is also discussed. PMID:10947907
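    Estimating specific surface area from binary images is classically done with the stereological relation S_V = 2 P_L, where P_L is the number of interface intersections per unit length of test line; the row-scanning sketch below is illustrative of that relation, not the authors' exact procedure:

```python
import numpy as np

def count_intercepts_along_rows(binary, pixel_size):
    """Count phase-boundary crossings along every image row of a binary phase
    map, and return the total test-line length scanned."""
    crossings = int(np.abs(np.diff(binary.astype(int), axis=1)).sum())
    line_length = binary.shape[0] * (binary.shape[1] - 1) * pixel_size
    return crossings, line_length

def specific_surface_area(n_intercepts, total_line_length):
    """Stereological estimate S_V = 2 * P_L (intersections per unit length)."""
    return 2.0 * n_intercepts / total_line_length

# Synthetic two-phase map: one vertical stripe of the second phase.
phase_map = np.zeros((4, 10))
phase_map[:, 3:6] = 1
n, length = count_intercepts_along_rows(phase_map, pixel_size=1.0)
sv = specific_surface_area(n, length)
```

    With more than two phases, as in WC-Co, the same counting is applied per interface type (WC/WC, WC/binder, and so on).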

  20. [Digital storage and semi-automatic analysis of esophageal pressure signals. Evaluation of a commercialized system (PC Polygraft, Synectics)].

    PubMed

    Bruley des Varannes, S; Pujol, P; Salim, B; Cherbut, C; Cloarec, D; Galmiche, J P

    1989-11-01

    The aim of this work was to evaluate a new commercially available pressure recording system (PC Polygraf, Synectics) and to compare it with a classical method using perfused catheters. The PC Polygraf uses microtransducers and allows direct digitized storage and semi-automatic analysis of data. In the first part of this study, manometric assessment was conducted using only perfused catheters. The transducers were connected both to an analog recorder and to a PC Polygraf. Using the two methods of analysis, contraction amplitudes were strongly correlated (r = 0.99; p < 0.0001) whereas durations were significantly but loosely correlated (r = 0.51; p < 0.001). Resting LES pressure was significantly correlated (r = 0.87; p < 0.05). In the second part of this study, simultaneous recordings of esophageal pressure were conducted in 7 patients by placing the two tubes (microtransducers and perfused catheters) side by side with the sideholes at the same level. The characteristics of the waves were determined both by visual analysis of the analog tracing and by semi-automatic analysis of the digitized recording with an adequate program. Mean amplitude was lower with the microtransducers than with the perfused catheters (60 vs 68 cm H2O; p < 0.05), but the duration of the waves was not significantly different between the two systems. Values obtained for each of these parameters using both methods were significantly correlated (amplitude: r = 0.74; duration: r = 0.51). Localizing the sphincter and measuring its basal tone were found to be difficult with microtransducers. These results show that the PC Polygraf allows a satisfactory analysis of esophageal pressure signals. However, only perfused catheters offer excellent reliability for complete studies of both the sphincter and peristalsis. PMID:2612832

  1. fMRat: an extension of SPM for a fully automatic analysis of rodent brain functional magnetic resonance series.

    PubMed

    Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel

    2016-05-01

    The purpose of this study was to develop a multi-platform automatic software tool for the full processing of rodent fMRI studies. Existing tools require the use of several different plug-ins, significant user interaction, and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and the percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks(®)) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either Nifti or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series, on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat . PMID:26285671
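    The FWHM = 1.2 mm smoothing kernel translates to a Gaussian standard deviation via the standard relation sigma = FWHM / (2 * sqrt(2 ln 2)):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Gaussian standard deviation for a given full width at half maximum:
    sigma = FWHM / (2 * sqrt(2 * ln 2)) ≈ FWHM / 2.3548."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma = fwhm_to_sigma(1.2)  # the kernel reported as optimal, four voxels wide
```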

  2. Analysis of the distribution of the brain cells of the fruit fly by an automatic cell counting algorithm

    NASA Astrophysics Data System (ADS)

    Shimada, Takashi; Kato, Kentaro; Kamikouchi, Azusa; Ito, Kei

    2005-05-01

    The fruit fly is the smallest model animal that has a brain. Its brain is said to consist of only about 250,000 neurons, yet it shows “the rudiments of consciousness” in addition to high abilities such as learning and memory. As the starting point for an exhaustive analysis of its brain-circuit information, we have developed a new algorithm for counting cells automatically from source 2D/3D images. In our algorithm, counting cells is realized by embedding objects (typically disks/balls), each of which has an exclusive volume. Using this method, we have succeeded in counting thousands of cells accurately. This method provides the information necessary for the analysis of brain circuits: the precise distribution of all cells in the brain.

  3. Automatic detection of CT perfusion datasets unsuitable for analysis due to head movement of acute ischemic stroke patients.

    PubMed

    Fahmi, Fahmi; Marquering, Henk A; Streekstra, Geert J; Beenen, Ludo F M; Janssen, Natasja N Y; Majoie, Charles B L; van Bavel, Ed

    2014-01-01

    Head movement during brain Computed Tomography Perfusion (CTP) can deteriorate perfusion analysis quality in acute ischemic stroke patients. We developed a method for automatic detection of CTP datasets with excessive head movement, based on 3D image-registration of CTP, with non-contrast CT providing transformation parameters. For parameter values exceeding predefined thresholds, the dataset was classified as 'severely moved'. Threshold values were determined by digital CTP phantom experiments. The automated selection was compared to manual screening by 2 experienced radiologists for 114 brain CTP datasets. Based on receiver operator characteristics, optimal thresholds were found of respectively 1.0°, 2.8° and 6.9° for pitch, roll and yaw, and 2.8 mm for z-axis translation. The proposed method had a sensitivity of 91.4% and a specificity of 82.3%. This method allows accurate automated detection of brain CTP datasets that are unsuitable for perfusion analysis. PMID:24691387
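    The threshold-based classification described above can be sketched directly from the reported optimal thresholds (1.0°, 2.8°, and 6.9° for pitch, roll, and yaw, and 2.8 mm for z-axis translation):

```python
def severely_moved(pitch_deg, roll_deg, yaw_deg, z_translation_mm,
                   thresholds=(1.0, 2.8, 6.9, 2.8)):
    """Flag a CTP dataset as 'severely moved' when any absolute registration
    parameter exceeds its threshold (pitch, roll, yaw in degrees; z in mm)."""
    params = (pitch_deg, roll_deg, yaw_deg, z_translation_mm)
    return any(abs(p) > t for p, t in zip(params, thresholds))

assert severely_moved(0.5, 3.0, 1.0, 0.2)      # roll exceeds 2.8 degrees
assert not severely_moved(0.9, 2.0, 5.0, 2.0)  # all parameters within thresholds
```

    The registration parameters themselves come from 3D image registration of the CTP series to the non-contrast CT, as described in the abstract.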

  4. Versatile, high sensitivity, and automatized angular dependent vectorial Kerr magnetometer for the analysis of nanostructured materials

    NASA Astrophysics Data System (ADS)

    Teixeira, J. M.; Lusche, R.; Ventura, J.; Fermento, R.; Carpinteiro, F.; Araujo, J. P.; Sousa, J. B.; Cardoso, S.; Freitas, P. P.

    2011-04-01

    Magneto-optical Kerr effect (MOKE) magnetometry is an indispensable, reliable, and one of the most widely used techniques for the characterization of nanostructured magnetic materials. Information such as the magnitude of coercive fields or anisotropy strengths can be readily obtained from MOKE measurements. We present a description of our state-of-the-art vectorial MOKE magnetometer, an extremely versatile, accurate, and sensitive unit with low cost and a comparatively simple setup. The unit includes focusing lenses and an automated stepper motor stage for angular dependent measurements. The performance of the magnetometer is demonstrated by hysteresis loops of Co thin films displaying uniaxial anisotropy induced during growth, MnIr/CoFe structures exhibiting the so-called exchange bias effect, spin valves, and microfabricated flux guides produced by optical lithography.

  5. SpLiNeS: automatic analysis of ecographic movies of flow-mediated dilation

    NASA Astrophysics Data System (ADS)

    Bartoli, Guido; Menegaz, Gloria; Dragoni, Saverio; Gori, Tommaso

    2007-03-01

    In this paper, we propose a fully automatic system for analyzing echographic movies of flow-mediated dilation (FMD). Our approach uses a spline-based active contour (deformable template) to follow artery boundaries during the FMD procedure. A number of preprocessing steps (grayscale conversion, contrast enhancement, sharpening) are used to improve the visual quality of frames coming from the echographic acquisition. Our system can be used in real-time environments due to the high speed of edge recognition, which iteratively minimizes fitting errors on endothelium boundaries. We also implemented a fully functional GUI which permits the user to interactively follow the whole recognition process as well as reshape the results. The system's accuracy and reproducibility have been validated with extensive in vivo experiments.

  6. Automatic measurement and analysis of neonatal O2 consumption and CO2 production

    NASA Astrophysics Data System (ADS)

    Chang, Jyh-Liang; Luo, Ching-Hsing; Yeh, Tsu-Fuh

    1996-02-01

    It is difficult to estimate daily energy expenditure unless continuous O2 consumption (VO2) and CO2 production (VCO2) can be measured. This study describes a simple method for calculating daily and interim changes in VO2 and VCO2 for neonates, especially premature infants. Oxygen consumption and CO2 production are measured using a flow-through technique in which the total VO2 and VCO2 over a given period of time are determined by a computerized system. The system automatically calculates VO2 and VCO2 not only minute by minute but also over longer periods, e.g., 24 h. As a result, it provides a more accurate indirect estimate of energy expenditure in an infant's daily life and can be used at the bedside during ongoing nursery care.
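    In its simplest open-circuit form, the flow-through calculation described above reduces to multiplying the airflow by the inspired-expired gas-fraction differences and summing the minute values over the recording period. The sketch below assumes that simplified form; a clinical system additionally corrects for humidity, temperature, and the inequality of inspired and expired volumes, and the function names are illustrative.

```python
def minute_vo2_vco2(flow_ml_min, fio2, feo2, fico2, feco2):
    """One minute of flow-through gas exchange (simplified).

    VO2  = flow * (inspired O2 fraction  - expired O2 fraction)
    VCO2 = flow * (expired CO2 fraction - inspired CO2 fraction)
    """
    vo2 = flow_ml_min * (fio2 - feo2)
    vco2 = flow_ml_min * (feco2 - fico2)
    return vo2, vco2

def period_totals(minute_values):
    """Sum minute-by-minute (VO2, VCO2) pairs over any period,
    e.g. the 1440 minutes of a 24 h recording."""
    total_vo2 = sum(v for v, _ in minute_values)
    total_vco2 = sum(c for _, c in minute_values)
    return total_vo2, total_vco2
```

    For example, a 5 L/min flow with a 0.63% O2 drop and a 0.57% CO2 rise gives 31.5 mL/min VO2 and 28.5 mL/min VCO2.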

  7. An automatic detector of drowsiness based on spectral analysis and wavelet decomposition of EEG records.

    PubMed

    Garces Correa, Agustina; Laciar Leber, Eric

    2010-01-01

    An algorithm to automatically detect drowsiness episodes has been developed. It uses only one EEG channel to differentiate between stages of alertness and drowsiness. In this work the feature vectors are built by combining Power Spectral Density (PSD) and Wavelet Transform (WT) measures. The features extracted from the PSD of the EEG signal are the central frequency, the first-quartile frequency, the maximum frequency, the total energy of the spectrum, and the power of the theta and alpha bands. In the wavelet domain, the number of zero crossings and the integrated power of scales 3, 4, and 5 of a Daubechies order-2 WT were computed. Epochs are classified with neural networks. The detection rates obtained with this technique are 86.5% for drowsiness stages and 81.7% for alertness segments. These results show that the extracted features and the classifier are able to identify drowsy EEG segments. PMID:21096343
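    The spectral features listed above (central frequency, first-quartile frequency, maximum frequency, total spectral energy, theta and alpha band power) and the zero-crossing count can be illustrated with a plain periodogram. The exact spectral estimator, windowing, and epoch length used by the authors are not specified here, so treat this as an assumed variant rather than their implementation.

```python
import numpy as np

def psd_features(x, fs):
    """PSD-derived features of one EEG epoch (periodogram version)."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    total = psd.sum()
    cum = np.cumsum(psd) / total
    return {
        "central_freq": float((freqs * psd).sum() / total),  # spectral centroid
        "first_quartile_freq": float(freqs[np.searchsorted(cum, 0.25)]),
        "max_freq": float(freqs[psd.argmax()]),
        "total_energy": float(total),
        "theta_power": float(psd[(freqs >= 4) & (freqs < 8)].sum()),
        "alpha_power": float(psd[(freqs >= 8) & (freqs < 12)].sum()),
    }

def zero_crossings(x):
    """Number of sign changes, used alongside the wavelet-scale
    integrals in the feature vector."""
    s = np.sign(x)
    s[s == 0] = 1
    return int((s[:-1] != s[1:]).sum())
```

    A pure 10 Hz (alpha-band) test tone, for instance, puts the spectral peak at 10 Hz and concentrates power in the alpha band rather than theta.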

  8. An Analysis of Serial Number Tracking Automatic Identification Technology as Used in Naval Aviation Programs

    NASA Astrophysics Data System (ADS)

    Csorba, Robert

    2002-09-01

    The Government Accounting Office found that the Navy, between 1996 and 1998, lost $3 billion in materiel in transit. This thesis explores the benefits and costs of the automatic identification and serial number tracking technologies under consideration by the Naval Supply Systems Command and the Naval Air Systems Command. Detailed cost-savings estimates are made for each aircraft type in the Navy inventory. Project and item managers of repairable components that use serial number tracking were surveyed on the value of the system. The thesis concludes that two thirds of the in-transit losses could be avoided by implementing effective information-technology-based logistics and maintenance tracking systems. Recommendations are made for the specific steps and components of such an implementation, and suggestions are made for further research.

  9. Automatic classification of pulmonary function in COPD patients using trachea analysis in chest CT scans

    NASA Astrophysics Data System (ADS)

    van Rikxoort, E. M.; de Jong, P. A.; Mets, O. M.; van Ginneken, B.

    2012-03-01

    Chronic Obstructive Pulmonary Disease (COPD) is a chronic lung disease characterized by airflow limitation. COPD is clinically diagnosed and monitored using pulmonary function testing (PFT), which measures patients' global inspiration and expiration capabilities and is time-consuming and labor-intensive. It is becoming standard practice to obtain paired inspiration-expiration CT scans of COPD patients; predicting the PFT results from these scans would alleviate the need for PFT. It is hypothesized that the change of the trachea during breathing might be an indicator of tracheomalacia in COPD patients and correlate with COPD severity. In this paper, we propose to automatically measure morphological changes in the trachea from paired inspiration and expiration CT scans and investigate their influence on COPD GOLD stage classification. The trachea is automatically segmented, and its shape is encoded using the lengths of rays cast from the trachea's center of gravity. These features, combined with an emphysema score, are fed to a classifier that attempts to assign subjects to their COPD stage. A database of 187 subjects, well distributed over COPD GOLD stages 0 through 4, was used for this study. The data were randomly divided into a training and a test set. Using the training scans, a nearest-mean classifier was trained to classify subjects into the correct GOLD stage using the emphysema score, the tracheal shape features, or a combination of both. Combining the proposed trachea shape features with the emphysema score improved the classification performance into GOLD stages by 11%, to 51%. In addition, 80% accuracy was achieved in distinguishing healthy subjects from COPD patients.
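    Both building blocks named above, the ray-cast shape descriptor and the nearest-mean classifier, are simple enough to sketch. The code below applies the ray idea to a single 2-D cross-section mask (the paper works on a 3-D tracheal segmentation) and implements a minimal unweighted nearest-mean classifier; the step size, ray count, and distance metric are assumptions for illustration.

```python
import numpy as np

def ray_lengths(mask, n_rays=16):
    """Encode a binary cross-section mask by the lengths of rays cast
    from its centre of gravity out to the boundary."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    lengths = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        r = 0.0
        while True:  # march outward until the ray leaves the mask
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            in_bounds = 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
            if not in_bounds or not mask[y, x]:
                break
            r += 0.5
        lengths.append(r)
    return np.array(lengths)

class NearestMeanClassifier:
    """Assign each subject the class whose mean feature vector is
    nearest in Euclidean distance."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.means_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                       for c in self.labels_}
        return self
    def predict(self, X):
        return [min(self.labels_,
                    key=lambda c: np.linalg.norm(x - self.means_[c]))
                for x in X]
```

    On a circular mask of radius 10 the rays all come back close to 10 pixels, and the classifier recovers two well-separated training clusters.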

  10. Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements

    PubMed Central

    Song, Ahram; Chang, Anjin; Choi, Jaewan; Choi, Seokkeun; Kim, Yongil

    2015-01-01

    Pure surface materials, denoted endmembers, play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies on the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many versions of the N-FINDR algorithm, computational-complexity issues remain, and these algorithms cannot handle the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember-extraction-based algorithm may be more effective when the number of endmembers to extract is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such an approach. The proposed method consists of three steps that determine the proper number of endmembers and remove endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) together with spectral discrimination measurements. A synthetic hyperspectral image and two airborne data sets, from the Airborne Imaging Spectrometer for Applications (AISA) and the Compact Airborne Spectrographic Imager (CASI), were tested with the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers or errors from mixed materials. PMID:25625907
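    A minimal sequential, IEA-style loop can illustrate the combination described above: repeatedly add the pixel with the largest reconstruction RMSE under least-squares unmixing, and use the spectral angle to skip candidates that duplicate an endmember already chosen. The stopping rule, the unconstrained unmixing, and the angle threshold below are assumptions for the sketch, not the paper's exact settings.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two signatures; small angles
    flag redundant endmembers."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def iea_endmembers(pixels, n_endmembers, min_angle=0.05):
    """Sequential IEA-style extraction (simplified).

    pixels: (n_pixels, n_bands) array. Each round picks the pixel
    worst explained by the current endmember set (largest RMSE),
    skipping candidates within min_angle of an existing endmember.
    """
    residual = pixels - pixels.mean(axis=0)
    endmembers = []
    while len(endmembers) < n_endmembers:
        rmse = np.sqrt((residual ** 2).mean(axis=1))
        for idx in np.argsort(rmse)[::-1]:
            cand = pixels[idx]
            if all(spectral_angle(cand, e) > min_angle for e in endmembers):
                endmembers.append(cand)
                break
        else:
            break  # no sufficiently distinct candidate remains
        E = np.stack(endmembers, axis=1)                # bands x k
        abund, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
        residual = pixels - (E @ abund).T               # new RMSE image
    return np.array(endmembers)
```

    On a toy scene of two pure signatures plus their 50/50 mixture, the loop recovers exactly the two pure pixels and never promotes the mixed one.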